CN102572265B - Auto-focus control using image statistics data with coarse and fine auto-focus scores - Google Patents

Auto-focus control using image statistics data with coarse and fine auto-focus scores

Info

Publication number
CN102572265B
Authority
CN
China
Prior art keywords
focusing
auto
pixel
coarse
score
Prior art date
Legal status
Active
Application number
CN201110399082.8A
Other languages
Chinese (zh)
Other versions
CN102572265A (en)
Inventors
G. Côté
J. E. Frederiksen
Current Assignee
Apple Inc
Original Assignee
Apple Computer Inc
Priority date
Filing date
Publication date
Application filed by Apple Computer Inc
Publication of CN102572265A
Application granted
Publication of CN102572265B

Classifications

    • G03B 3/10 — Focusing arrangements of general interest for cameras, projectors or printers; power-operated focusing
    • G03B 13/36 — Viewfinders; focusing aids for cameras; autofocus systems for cameras
    • H04N 23/673 — Focus control based on electronic image sensor signals, based on contrast or high-frequency components of image signals, e.g. hill-climbing method
    • H04N 23/631 — Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/843 — Camera processing pipelines; demosaicing, e.g. interpolating colour pixel values
    • H04N 25/134 — Arrangement of colour filter arrays [CFA] based on three different wavelength filter elements
    • H04N 25/61 — Noise processing for noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • Color Television Image Signal Generators (AREA)
  • Automatic Focus Adjustment (AREA)
  • Focusing (AREA)

Abstract

Techniques are provided for determining an optimal focus position using auto-focus statistics. In one embodiment, the techniques may include generating coarse and fine auto-focus scores for determining an optimal focal length at which to position a lens (88) associated with an image sensor (90). For example, statistics logic (680) may determine a coarse position that indicates an optimal focus area. In one embodiment, the optimal focus area may be determined by searching for the first coarse position at which the coarse auto-focus score decreases with respect to the coarse auto-focus score at the previous position. Using this position as a starting point for searching the fine scores, the optimal focus position may be determined by searching for a peak in the fine auto-focus scores. In another embodiment, auto-focus statistics may also be determined based on each color of Bayer RGB, such that, even in the presence of chromatic aberration, the relative auto-focus scores for each color may be used to determine the direction of focus.

Description

Auto-focus control using image statistics data with coarse and fine auto-focus scores
Technical field
The present disclosure relates generally to digital imaging devices and, more particularly, to systems and methods for processing image data obtained using an image sensor of a digital imaging device.
Background
This section is intended to introduce the reader to various aspects of art that may be related to aspects of the present techniques, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
In recent years, digital imaging devices have become increasingly popular due, at least in part, to such devices becoming more and more affordable for the average consumer. Further, in addition to a number of stand-alone digital cameras currently available on the market, it is not uncommon for digital imaging devices to be integrated as part of another electronic device, such as a laptop or notebook computer, a cellular phone, or a portable media player.
To acquire image data, most digital imaging devices include an image sensor that provides a number of light-detecting elements (e.g., photodetectors) configured to convert light detected by the image sensor into an electrical signal. An image sensor may also include a color filter array that filters light captured by the image sensor to capture color information. The image data captured by the image sensor may then be processed by an image processing pipeline, which may apply a number of image processing operations to the image data to generate a full color image that may be displayed for viewing on a display device, such as a monitor.
While conventional image processing techniques generally aim to produce a viewable image that is both objectively and subjectively pleasing to a viewer, such conventional techniques may not adequately address errors and/or distortions in the image data introduced by the imaging device and/or the image sensor. For instance, defective pixels on the image sensor, which may be due to manufacturing defects or operational failure, may fail to sense light levels accurately and, if not corrected, may manifest as artifacts in the resulting processed image. Additionally, light intensity fall-off at the edges of the image sensor, which may be due to imperfections in the manufacture of the lens, may adversely affect characterization measurements and may result in an image in which the overall light intensity is non-uniform. The image processing pipeline may also perform one or more processes to sharpen the image. Conventional sharpening techniques, however, may not adequately account for existing noise in the image signal, or may be unable to distinguish the noise from edges and textured areas in the image. In such instances, conventional sharpening techniques may actually increase the appearance of noise in the image, which is generally undesirable. Further, various additional image processing steps may be performed, some of which may rely upon image statistics collected by a statistics collection engine.
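To make the defective-pixel problem concrete before the detailed embodiments, a simplified software sketch of one generic correction approach (a median-of-neighbors test) is shown below. This is illustrative only and is not the detection/correction logic disclosed later in this document; the array layout and threshold are assumptions.

```python
import numpy as np

def correct_defective_pixels(img, threshold=0.15):
    """Replace pixels that deviate strongly from their neighborhood median.

    img: 2-D float array in [0, 1]; threshold: relative deviation that
    flags a pixel as stuck or dead. Border pixels are left untouched.
    """
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            window = img[y - 1:y + 2, x - 1:x + 2]
            neighbors = np.delete(window.ravel(), 4)  # drop the center pixel
            med = np.median(neighbors)
            if abs(img[y, x] - med) > threshold:
                out[y, x] = med  # substitute the neighborhood median
    return out
```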
Another image processing operation that may be applied to the image data captured by the image sensor is demosaicing. Because the color filter array generally provides color data at one wavelength per sensor pixel, a full set of color data is generally interpolated for each color channel in order to reproduce a full color image (e.g., an RGB image). Conventional demosaicing techniques generally interpolate values for the missing color data in a horizontal or a vertical direction, generally depending on some type of fixed threshold. However, such conventional demosaicing techniques may not adequately account for the locations and directions of edges within the image, which may result in edge artifacts, such as aliasing, checkerboard artifacts, or rainbow artifacts, being introduced into the full color image, particularly along diagonal edges within the image.
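For orientation, the difference between fixed-threshold and edge-adaptive interpolation can be sketched as below. This is a generic illustration under assumed conventions (a float Bayer mosaic, a non-green site away from the image border), not the demosaicing logic disclosed in the embodiments, which is described with reference to FIGS. 76-84.

```python
def interpolate_green_at(raw, y, x):
    """Interpolate the missing green value at a red or blue Bayer site.

    raw: 2-D Bayer mosaic (numpy float array); (y, x): a non-green site
    at least one pixel from the border. Rather than a fixed threshold,
    the direction with the smaller gradient (no crossing edge) wins.
    """
    dh = abs(raw[y, x - 1] - raw[y, x + 1])  # horizontal green gradient
    dv = abs(raw[y - 1, x] - raw[y + 1, x])  # vertical green gradient
    if dh < dv:
        return (raw[y, x - 1] + raw[y, x + 1]) / 2.0  # average along the row
    if dv < dh:
        return (raw[y - 1, x] + raw[y + 1, x]) / 2.0  # average along the column
    return (raw[y, x - 1] + raw[y, x + 1] +
            raw[y - 1, x] + raw[y + 1, x]) / 4.0      # no dominant edge
```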
Accordingly, various considerations should be addressed when processing a digital image obtained with a digital camera or other imaging device in order to improve the appearance of the resulting image. In particular, certain aspects of the disclosure below may address one or more of the drawbacks briefly mentioned above.
Summary of the invention
A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.
The present disclosure provides various techniques for collecting and processing statistics in an image signal processor (ISP). In one embodiment, a statistics collection engine may be implemented in a front-end processing unit of the ISP, such that statistics may be collected prior to processing by an ISP pipeline downstream from the front-end processing unit. In accordance with aspects of the disclosure, the statistics collection engine may be configured to acquire statistics relating to auto white balance, auto exposure, and auto focus. In one embodiment, the statistics collection engine may receive raw Bayer RGB data acquired by an image sensor and may be configured to perform one or more color space conversions to obtain pixel data in other color spaces. A set of pixel filters may be configured to accumulate sums of pixel data conditionally based upon YC1C2 characteristics, as defined by a per-pixel condition for each pixel filter. Depending on the selected color space, the pixel filters may generate color sums, which may be used to match the current illuminant against a set of reference illuminants with which the image sensor was previously calibrated.
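As a rough software analogue of such a pixel filter (the disclosed engine implements this as hardware logic; the box-shaped C1-C2 condition and the field names below are assumptions chosen for illustration):

```python
import numpy as np

def accumulate_pixel_filter(c1, c2, cond):
    """Conditionally accumulate color sums, as a pixel filter might.

    c1, c2: flattened chroma sample arrays for one frame; cond: dict
    giving the min/max bounds that define the per-pixel qualification.
    Returns the count and the (C1, C2) sums of the qualifying pixels.
    """
    mask = ((c1 >= cond["c1_min"]) & (c1 <= cond["c1_max"]) &
            (c2 >= cond["c2_min"]) & (c2 <= cond["c2_max"]))
    return {
        "count": int(mask.sum()),         # pixels matching the condition
        "sum_c1": float(c1[mask].sum()),  # color sums usable for matching
        "sum_c2": float(c2[mask].sum()),  # against reference illuminants
    }
```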
In accordance with a further aspect of the disclosure, auto-focus statistics may be used to generate coarse and fine auto-focus scores for determining an optimal focal length at which to position a lens associated with the image sensor. For instance, the statistics logic may determine a coarse position that indicates an optimal focus area which, in one embodiment, may be determined by searching for the first coarse position at which the coarse auto-focus score decreases relative to the coarse auto-focus score at the previous position. Using this position as a starting point for searching the fine scores, the optimal focus position may then be determined by searching for a peak in the fine auto-focus scores. Auto-focus statistics may also be determined based on each color of Bayer RGB, such that, even in the presence of chromatic aberration, the relative auto-focus scores for each color may be used to determine the direction of focus. Additionally, the collected statistics may be output to a memory and used in processing the image data acquired by the ISP.
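A minimal sketch of this two-pass search follows, under stated assumptions: coarse_score and fine_score are callables standing in for the hardware-generated statistics, the lens position axis is a scalar, and the step sizes are arbitrary. The actual score generation and search are described with reference to FIGS. 60-62.

```python
def find_optimal_focus(coarse_score, fine_score, positions, fine_step=1):
    """Locate the best focus position using coarse, then fine, scores."""
    # Pass 1: sweep the coarse positions until the coarse score first
    # declines, which brackets the region holding the optimal focal length.
    prev = coarse_score(positions[0])
    drop_index = len(positions) - 1
    for i in range(1, len(positions)):
        score = coarse_score(positions[i])
        if score < prev:  # first decline relative to the previous position
            drop_index = i
            break
        prev = score

    # Pass 2: starting just before the decline, step finely and track
    # the peak of the fine auto-focus score.
    start = positions[max(drop_index - 2, 0)]
    stop = positions[drop_index]
    best_pos, best_score = start, fine_score(start)
    pos = start + fine_step
    while pos <= stop:
        score = fine_score(pos)
        if score > best_score:
            best_pos, best_score = pos, score
        pos += fine_step
    return best_pos
```

In this sketch, stepping back two coarse positions before the fine sweep simply ensures that the peak, which lies before the first decline, falls inside the fine search window.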
Various refinements of the features noted above may exist in relation to the above-mentioned aspects of the present disclosure. Further features may also be incorporated in these aspects. These refinements and additional features may exist individually or in any combination. For instance, various features discussed below in relation to one or more of the illustrated embodiments may be incorporated into any of the above-described aspects of the present disclosure, alone or in any combination. Again, the brief summary presented above is intended only to familiarize the reader with certain aspects and contexts of embodiments of the present disclosure, without limitation to the claimed subject matter.
Brief description of the drawings
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings, in which:
FIG. 1 is a simplified block diagram depicting components of one example of an electronic device that includes an imaging device and image processing circuitry configured to implement one or more of the image processing techniques set forth in the present disclosure;
FIG. 2 shows a graphical representation of a 2×2 pixel block of a Bayer color filter array that may be implemented in the imaging device of FIG. 1;
FIG. 3 is a perspective view of the electronic device of FIG. 1 in the form of a laptop computing device, in accordance with aspects of the present disclosure;
FIG. 4 is a front view of the electronic device of FIG. 1 in the form of a desktop computing device, in accordance with aspects of the present disclosure;
FIG. 5 is a front view of the electronic device of FIG. 1 in the form of a handheld portable electronic device, in accordance with aspects of the present disclosure;
FIG. 6 is a rear view of the electronic device shown in FIG. 5;
FIG. 7 is a block diagram illustrating front-end image signal processing (ISP) logic and ISP pipe processing logic that may be implemented in the image processing circuitry of FIG. 1, in accordance with aspects of the present disclosure;
FIG. 8 is a more detailed block diagram showing an embodiment of the ISP front-end logic of FIG. 7, in accordance with aspects of the present disclosure;
FIG. 9 is a flow chart depicting a method for processing image data in the ISP front-end logic of FIG. 8, in accordance with an embodiment;
FIG. 10 is a block diagram illustrating a configuration of double buffered registers and control registers that may be utilized for processing image data in the ISP front-end logic, in accordance with one embodiment;
FIGS. 11-13 are timing diagrams depicting different modes for triggering the processing of an image frame, in accordance with embodiments of the present techniques;
FIG. 14 is a diagram depicting a control register in more detail, in accordance with one embodiment;
FIG. 15 is a flow chart depicting a method for using a front-end pixel processing unit to process image frames when the ISP front-end logic of FIG. 8 is operating in a single sensor mode;
FIG. 16 is a flow chart depicting a method for using a front-end pixel processing unit to process image frames when the ISP front-end logic of FIG. 8 is operating in a dual sensor mode;
FIG. 17 is a flow chart depicting another method for using a front-end pixel processing unit to process image frames when the ISP front-end logic of FIG. 8 is operating in a dual sensor mode;
FIG. 18 is a flow chart depicting a method, in accordance with one embodiment, in which both image sensors are active, but wherein a first image sensor is sending image frames to a front-end pixel processing unit while a second image sensor is sending image frames to a statistics processing unit, so that imaging statistics for the second sensor are immediately available if the second image sensor later continues sending image frames to the front-end pixel processing unit;
FIG. 19 is a graphical depiction of various imaging regions that may be defined within a source image frame captured by an image sensor, in accordance with aspects of the present disclosure;
FIG. 20 is a block diagram providing a more detailed view of one embodiment of the ISP front-end pixel processing unit, as shown in the ISP front-end logic of FIG. 8, in accordance with aspects of the present disclosure;
FIG. 21 is a process diagram illustrating how temporal filtering may be applied to image pixel data received by the ISP front-end pixel processing unit shown in FIG. 20, in accordance with one embodiment;
FIG. 22 illustrates a set of reference image pixels and a set of corresponding current image pixels that may be used to determine one or more parameters for the temporal filtering process shown in FIG. 21;
FIG. 23 is a flow chart illustrating a process for applying temporal filtering to a current image pixel of a set of image data, in accordance with one embodiment;
FIG. 24 is a flow chart showing a technique for calculating a motion delta value for use with the temporal filtering of the current image pixel of FIG. 23, in accordance with one embodiment;
FIG. 25 is a flow chart illustrating another process for applying temporal filtering to a current image pixel of a set of image data that includes the use of different gains for each color component of the image data, in accordance with another embodiment;
FIG. 26 is a process diagram illustrating a temporal filtering technique that utilizes separate motion and luma tables for each color component of the image pixel data received by the ISP front-end pixel processing unit shown in FIG. 20, in accordance with a further embodiment;
FIG. 27 is a flow chart illustrating a process for applying temporal filtering to a current image pixel of a set of image data using the motion and luma tables shown in FIG. 26, in accordance with a further embodiment;
FIG. 28 depicts a sample of full resolution raw image data that may be captured by an image sensor, in accordance with aspects of the present disclosure;
FIG. 29 illustrates an image sensor that may be configured to apply binning to the full resolution raw image data of FIG. 28 to output a sample of binned raw image data, in accordance with an embodiment of the present disclosure;
FIG. 30 depicts a sample of binned raw image data that may be provided by the image sensor of FIG. 29, in accordance with aspects of the present disclosure;
FIG. 31 depicts the binned raw image data from FIG. 30 after being re-sampled by a binning compensation filter, in accordance with aspects of the present disclosure;
FIG. 32 depicts a binning compensation filter that may be implemented in the ISP front-end pixel processing unit of FIG. 20, in accordance with one embodiment;
FIG. 33 is a graphical depiction of various step sizes that may be applied to a differential analyzer to select center input pixels and index/phases for binning compensation filtering, in accordance with aspects of the present disclosure;
FIG. 34 is a flow chart illustrating a process for scaling image data using the binning compensation filter of FIG. 32, in accordance with an embodiment;
FIG. 35 is a flow chart illustrating a process for determining a current input source center pixel for the horizontal and vertical filtering performed by the binning compensation filter of FIG. 32, in accordance with an embodiment;
FIG. 36 is a flow chart illustrating a process for determining an index for selecting filtering coefficients for the horizontal and vertical filtering performed by the binning compensation filter of FIG. 32, in accordance with an embodiment;
FIG. 37 is a more detailed block diagram showing an embodiment of a statistics processing unit that may be implemented in the ISP front-end processing logic shown in FIG. 8, in accordance with aspects of the present disclosure;
FIG. 38 shows various image frame boundary cases that may be considered when applying techniques for detecting and correcting defective pixels during statistics processing by the statistics processing unit of FIG. 37, in accordance with aspects of the present disclosure;
FIG. 39 is a flow chart illustrating a process for performing defective pixel detection and correction during statistics processing, in accordance with one embodiment;
FIG. 40 shows a three-dimensional profile depicting light intensity versus pixel position for a conventional lens of an imaging device;
FIG. 41 is a colored drawing exhibiting non-uniform light intensity across an image, which may be the result of lens shading irregularities;
FIG. 42 is a graphical illustration of a raw imaging frame that includes a lens shading correction region and a gain grid, in accordance with aspects of the present disclosure;
FIG. 43 illustrates the interpolation of a gain value for an image pixel enclosed by four bordering grid gain points, in accordance with aspects of the present disclosure;
FIG. 44 is a flow chart illustrating a process for determining interpolated gain values that may be applied to imaging pixels during a lens shading correction operation, in accordance with an embodiment of the present technique;
FIG. 45 is a three-dimensional profile depicting interpolated gain values that may be applied to an image exhibiting the light intensity characteristics shown in FIG. 40 when performing lens shading correction, in accordance with aspects of the present disclosure;
FIG. 46 shows the colored drawing from FIG. 41 exhibiting improved uniformity in light intensity after a lens shading correction operation is applied, in accordance with aspects of the present disclosure;
FIG. 47 graphically illustrates how a radial distance between a current pixel and the center of an image may be calculated and used to determine a radial gain component for lens shading correction, in accordance with one embodiment;
FIG. 48 is a flow chart illustrating a process by which radial gains and interpolated gains from a gain grid are used to determine a total gain that may be applied to imaging pixels during a lens shading correction operation, in accordance with an embodiment of the present technique;
FIG. 49 is a graph showing white areas and low and high color temperature axes within a color space;
FIG. 50 is a table showing how white balance gains may be configured for various reference illuminant conditions, in accordance with one embodiment;
FIG. 51 is a block diagram showing a statistics collection engine that may be implemented in the ISP front-end processing logic, in accordance with an embodiment of the present disclosure;
FIG. 52 illustrates the down-sampling of raw Bayer RGB data, in accordance with aspects of the present disclosure;
FIG. 53 depicts a two-dimensional color histogram that may be collected by the statistics collection engine of FIG. 51, in accordance with one embodiment;
FIG. 54 depicts zooming and panning within a two-dimensional color histogram;
FIG. 55 is a more detailed view showing logic for implementing a pixel filter of the statistics collection engine, in accordance with one embodiment;
FIG. 56 is a graphical depiction of how the location of a pixel within a C1-C2 color space may be evaluated based on a pixel condition defined for a pixel filter, in accordance with one embodiment;
FIG. 57 is a graphical depiction of how the location of a pixel within a C1-C2 color space may be evaluated based on a pixel condition defined for a pixel filter, in accordance with another embodiment;
FIG. 58 is a graphical depiction of how the location of a pixel within a C1-C2 color space may be evaluated based on a pixel condition defined for a pixel filter, in accordance with yet a further embodiment;
FIG. 59 is a graph showing how image sensor integration times may be determined to compensate for flicker, in accordance with one embodiment;
FIG. 60 is a detailed block diagram showing logic that may be implemented within the statistics collection engine of FIG. 51 and configured to collect auto-focus statistics, in accordance with one embodiment;
FIG. 61 is a graph depicting a technique for performing auto-focus using coarse and fine auto-focus scores, in accordance with one embodiment;
FIG. 62 is a flow chart depicting a process for performing auto-focus using coarse and fine auto-focus scores, in accordance with one embodiment;
FIGS. 63 and 64 show the decimation of raw Bayer data to obtain a white-balanced luma value;
FIG. 65 shows a technique for performing auto-focus using relative auto-focus scores for each color component, in accordance with one embodiment;
FIG. 66 is a more detailed view of the statistics processing unit of FIG. 37, showing how Bayer RGB histogram data may be used to assist black level compensation, in accordance with one embodiment;
FIG. 67 is a block diagram showing an embodiment of the ISP pipe processing logic of FIG. 7, in accordance with aspects of the present disclosure;
FIG. 68 is a more detailed view showing an embodiment of a raw pixel processing block that may be implemented in the ISP pipe processing logic of FIG. 67, in accordance with aspects of the present disclosure;
FIG. 69 shows various image frame boundary cases that may be considered when applying techniques for detecting and correcting defective pixels during processing by the raw pixel processing block shown in FIG. 68, in accordance with aspects of the present disclosure;
FIGS. 70-72 are flow charts depicting various processes for detecting and correcting defective pixels that may be performed by the raw pixel processing block of FIG. 68, in accordance with one embodiment;
FIG. 73 shows the locations of two green pixels in a 2×2 pixel block of a Bayer image sensor that may be interpolated when applying green non-uniformity correction during processing by the raw pixel processing logic of FIG. 68, in accordance with aspects of the present disclosure;
FIG. 74 illustrates a set of pixels, including a center pixel and associated horizontal neighboring pixels, that may be used as part of a horizontal filtering process for noise reduction, in accordance with aspects of the present disclosure;
FIG. 75 illustrates a set of pixels, including a center pixel and associated vertical neighboring pixels, that may be used as part of a vertical filtering process for noise reduction, in accordance with aspects of the present disclosure;
FIG. 76 is a simplified flow diagram depicting how demosaicing may be applied to a raw Bayer image pattern to produce a full color RGB image;
FIG. 77 depicts a set of pixels of a Bayer image pattern from which horizontal and vertical energy components may be derived for interpolating green color values during demosaicing of the Bayer image pattern, in accordance with one embodiment;
FIG. 78 shows a set of horizontal pixels to which filtering may be applied to determine a horizontal component of an interpolated green color value during demosaicing of a Bayer image pattern, in accordance with aspects of the present technique;
FIG. 79 shows a set of vertical pixels to which filtering may be applied to determine a vertical component of an interpolated green color value during demosaicing of a Bayer image pattern, in accordance with aspects of the present technique;
FIG. 80 shows various 3×3 pixel blocks to which filtering may be applied to determine interpolated red and blue values during demosaicing of a Bayer image pattern, in accordance with aspects of the present technique;
FIGS. 81-84 provide flow charts depicting various processes for interpolating green, red, and blue color values during demosaicing of a Bayer image pattern, in accordance with one embodiment;
FIG. 85 shows a colored drawing of an original image scene that may be captured by an image sensor and processed in accordance with aspects of the demosaicing techniques disclosed herein;
FIG. 86 shows a colored drawing of a Bayer image pattern of the image scene shown in FIG. 85;
FIG. 87 shows a colored drawing of an RGB image reconstructed using a conventional demosaicing technique based upon the Bayer image pattern of FIG. 86;
FIG. 88 shows a colored drawing of an RGB image reconstructed from the Bayer image pattern of FIG. 86 in accordance with aspects of the demosaicing techniques disclosed herein;
FIG. 89 is a more detailed view showing one embodiment of an RGB processing block that may be implemented in the ISP pipe processing logic of FIG. 67, in accordance with aspects of the present disclosure;
FIG. 90 is a more detailed view showing one embodiment of a YCbCr processing block that may be implemented in the ISP pipe processing logic of FIG. 67, in accordance with aspects of the present disclosure;
FIG. 91 is a graphical depiction of active source regions for luma and chroma defined within a source buffer using a 1-plane format, in accordance with aspects of the present disclosure;
FIG. 92 is a graphical depiction of active source regions for luma and chroma defined within a source buffer using a 2-plane format, in accordance with aspects of the present disclosure;
FIG. 93 is a block diagram illustrating image sharpening logic that may be implemented in the YCbCr processing block, as shown in FIG. 90, in accordance with one embodiment;
FIG. 94 is a block diagram illustrating edge enhancement logic that may be implemented in the YCbCr processing block, as shown in FIG. 90, in accordance with one embodiment;
FIG. 95 is a graph showing the relationship between chroma attenuation factors and sharpened luma values, in accordance with aspects of the present disclosure;
FIG. 96 is a block diagram illustrating image brightness, contrast, and color (BCC) adjustment logic that may be implemented in the YCbCr processing block, as shown in FIG. 90, in accordance with one embodiment; and
FIG. 97 shows a hue and saturation color wheel in the YCbCr color space, defining various hue angles and saturation values that may be applied during color adjustments performed by the BCC adjustment logic shown in FIG. 96.
Detailed description
One or more specific embodiments of the present disclosure will be described below. These described embodiments are only examples of the presently disclosed techniques. Additionally, in an effort to provide a concise description of these embodiments, not all features of an actual implementation may be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
When introducing elements of various embodiments of the present disclosure, the articles "a," "an," and "the" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to "one embodiment" or "an embodiment" of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
As will be discussed below, the present disclosure relates generally to techniques for processing image data acquired via one or more image sensing devices. In particular, certain aspects of the present disclosure may relate to techniques for detecting and correcting defective pixels, techniques for demosaicing a raw image pattern, techniques for sharpening a luminance image using a multi-scale unsharp mask, and techniques for applying lens shading gains to correct for lens shading irregularities. Further, it should be understood that the presently disclosed techniques may be applied to both still images and moving images (e.g., video), and may be utilized in any suitable type of imaging application, such as a digital camera, an electronic device having an integrated digital camera, a security or video surveillance system, a medical imaging system, and so forth.
Keeping the above points in mind, FIG. 1 is a block diagram illustrating an example of an electronic device 10 that may provide for the processing of image data using one or more of the image processing techniques briefly mentioned above. The electronic device 10 may be any type of electronic device, such as a laptop or desktop computer, a mobile phone, a digital media player, or the like, that is configured to receive and process image data, such as data acquired using one or more image sensing components. By way of example only, the electronic device 10 may be a portable electronic device, such as a model of an iPod® or iPhone®, available from Apple Inc. of Cupertino, California. Additionally, the electronic device 10 may be a desktop or laptop computer, such as a model of a MacBook®, MacBook® Pro, MacBook Air®, iMac®, Mac® mini, or Mac Pro®, available from Apple Inc. In further embodiments, the electronic device 10 may also be a model of an electronic device from another manufacturer that is capable of acquiring and processing image data.
Regardless of its form (e.g., portable or non-portable), it should be understood that the electronic device 10 may provide for the processing of image data using one or more of the image processing techniques briefly discussed above, which may include defective pixel correction and/or detection techniques, lens shading correction techniques, demosaicing techniques, or image sharpening techniques, among others. In some embodiments, the electronic device 10 may apply such image processing techniques to image data stored in a memory of the electronic device 10. In further embodiments, the electronic device 10 may include one or more imaging devices, such as an integrated or external digital camera, configured to acquire image data, which may then be processed by the electronic device 10 using one or more of the above-mentioned image processing techniques. Embodiments showing both portable and non-portable forms of the electronic device 10 will be discussed further below with reference to FIGS. 3-6.
As shown in FIG. 1, the electronic device 10 may include various internal and/or external components which contribute to the function of the device 10. Those of ordinary skill in the art will appreciate that the various functional blocks shown in FIG. 1 may comprise hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. For example, in the presently illustrated embodiment, the electronic device 10 may include input/output (I/O) ports 12, input structures 14, one or more processors 16, a memory device 18, non-volatile storage 20, expansion card(s) 22, a networking device 24, a power source 26, and a display 28. Additionally, the electronic device 10 may include one or more imaging devices 30, such as a digital camera, and image processing circuitry 32. As will be discussed further below, the image processing circuitry 32 may be configured to implement one or more of the above-discussed image processing techniques when processing image data. As can be appreciated, image data processed by the image processing circuitry 32 may be retrieved from the memory 18 and/or the non-volatile storage device(s) 20, or may be acquired using the imaging device 30.
Before continuing, it should be noted that the system block diagram of the device 10 shown in FIG. 1 is intended to be a high-level control diagram depicting various components that may be included in such a device 10. That is, the connection lines between each individual component shown in FIG. 1 may not necessarily represent paths or directions through which data flows or is transmitted between various components of the device 10. Indeed, as discussed below, the depicted processor(s) 16 may, in some embodiments, include multiple processors, such as a main processor (e.g., CPU) and dedicated image and/or video processors. In such embodiments, the processing of image data may be primarily handled by these dedicated processors, thus effectively offloading such tasks from the main processor (CPU).
With regard to each of the illustrated components in FIG. 1, the I/O ports 12 may include ports configured to connect to a variety of external devices, such as a power source, an audio output device (e.g., a headset or headphones), or other electronic devices (such as handheld devices and/or computers, printers, projectors, external displays, modems, docking stations, and so forth). In one embodiment, the I/O ports 12 may be configured to connect to an external imaging device, such as a digital camera, for the acquisition of image data that may be processed using the image processing circuitry 32. The I/O ports 12 may support any suitable interface type, such as a universal serial bus (USB) port, a serial connection port, an IEEE-1394 (FireWire) port, an Ethernet or modem port, and/or an AC/DC power connection port.
In some embodiments, certain I/O ports 12 may be configured to provide more than one function. For instance, in one embodiment, the I/O ports 12 may include a proprietary port from Apple Inc. that may serve not only to facilitate the transfer of data between the electronic device 10 and an external source, but also to couple the device 10 to a power charging interface, such as a power adapter designed to provide power from an electrical wall outlet, or an interface cable configured to draw power from another electrical device, such as a desktop or laptop computer, for charging the power source 26 (which may include one or more rechargeable batteries). Thus, the I/O port 12 may be configured to function dually as both a data transfer port and an AC/DC power connection port depending, for example, on the external component being coupled to the device 10 through the I/O port 12.
The input structures 14 may provide user input or feedback to the processor(s) 16. For instance, the input structures 14 may be configured to control one or more functions of the electronic device 10, such as applications running on the electronic device 10. By way of example only, the input structures 14 may include buttons, sliders, switches, control pads, keys, knobs, scroll wheels, keyboards, mice, touchpads, and so forth, or some combination thereof. In one embodiment, the input structures 14 may allow a user to navigate a graphical user interface (GUI) displayed on the device 10. Additionally, the input structures 14 may include a touch-sensitive mechanism provided in conjunction with the display 28. In such embodiments, a user may select or interact with displayed interface elements via the touch-sensitive mechanism.
The input structures 14 may include the various devices, circuitry, and pathways by which user input or feedback is provided to the one or more processors 16. Such input structures 14 may be configured to control a function of the device 10, applications running on the device 10, and/or any interfaces or devices connected to or used by the electronic device 10. For example, the input structures 14 may allow a user to navigate a displayed user interface or application interface. Examples of the input structures 14 include buttons, sliders, switches, control pads, keys, knobs, scroll wheels, keyboards, mice, touchpads, and so forth.
In certain embodiments, an input structure 14 and the display device 28 may be provided together, such as in the case of a "touchscreen," whereby a touch-sensitive mechanism is provided in conjunction with the display 28. In such embodiments, the user may select or interact with displayed interface elements via the touch-sensitive mechanism. In this way, the displayed interface may provide interactive functionality, allowing a user to navigate the displayed interface by touching the display 28. For example, user interaction with the input structures 14, such as to interact with a user or application interface displayed on the display 28, may generate electrical signals indicative of the user input. These input signals may be routed via suitable pathways, such as an input hub or a data bus, to the one or more processors 16 for further processing.
In addition to processing various input signals received via the input structure(s) 14, the processor(s) 16 may control the general operation of the device 10. For instance, the processor(s) 16 may provide the processing capability to execute an operating system, programs, user and application interfaces, and any other functions of the electronic device 10. The processor(s) 16 may include one or more microprocessors, such as one or more "general-purpose" microprocessors, one or more special-purpose microprocessors and/or application-specific microprocessors (ASICs), or a combination of such processing components. For example, the processor(s) 16 may include one or more instruction set (e.g., RISC) processors, as well as graphics processors (GPUs), video processors, audio processors, and/or related chip sets. As will be appreciated, the processor(s) 16 may be coupled to one or more data buses for transferring data and instructions between various components of the device 10. In certain embodiments, the processor(s) 16 may provide the processing capability to execute an imaging application on the electronic device 10, such as Photo Booth®, Aperture®, iPhoto®, or Preview®, available from Apple Inc., or the "Camera" and/or "Photo" applications provided by Apple Inc. and available on models of the iPhone®.
The instructions or data to be processed by the processor(s) 16 may be stored in a computer-readable medium, such as a memory device 18. The memory device 18 may be provided as a volatile memory, such as random access memory (RAM), or as a non-volatile memory, such as read-only memory (ROM), or as a combination of one or more RAM and ROM devices. The memory 18 may store a variety of information and may be used for various purposes. For example, the memory 18 may store firmware for the electronic device 10, such as a basic input/output system (BIOS), an operating system, various programs, applications, or any other routines that may be executed on the electronic device 10, including user interface functions, processor functions, and so forth. In addition, the memory 18 may be used for buffering or caching during operation of the electronic device 10. For instance, in one embodiment, the memory 18 may include one or more frame buffers for buffering video data as it is being output to the display 28.
In addition to the memory device 18, the electronic device 10 may further include non-volatile storage 20 for persistent storage of data and/or instructions. The non-volatile storage 20 may include flash memory, a hard drive, or any other optical, magnetic, and/or solid-state storage media, or some combination thereof. Thus, although depicted as a single device in FIG. 1 for purposes of clarity, it should be understood that the non-volatile storage device(s) 20 may include a combination of one or more of the above-listed storage devices operating in conjunction with the processor(s) 16. The non-volatile storage 20 may be used to store firmware, data files, image data, software programs and applications, wireless connection information, personal information, user preferences, and any other suitable data. In accordance with aspects of the present disclosure, image data stored in the non-volatile storage 20 and/or the memory device 18 may be processed by the image processing circuitry 32 prior to being output on a display.
The embodiment illustrated in FIG. 1 may also include one or more card or expansion slots. The card slots may be configured to receive an expansion card 22 that may be used to add functionality to the electronic device 10, such as additional memory, I/O functionality, or networking capability. Such an expansion card 22 may connect to the device through any type of suitable connector, and may be accessed internally or externally with respect to a housing of the electronic device 10. For example, in one embodiment, the expansion card 22 may be a flash memory card, such as a SecureDigital (SD) card, mini- or microSD, or CompactFlash card, or the like, or may be a PCMCIA device. Additionally, for an embodiment of the electronic device 10 that provides mobile phone capability, the expansion card 22 may be a Subscriber Identity Module (SIM) card.
The electronic device 10 also includes the network device 24, which may be a network controller or a network interface card (NIC) that provides for network connectivity over a wireless 802.11 standard or any other suitable networking standard, such as a local area network (LAN), or a wide area network (WAN), such as an Enhanced Data Rates for GSM Evolution (EDGE) network, a 3G data network, or the Internet. In certain embodiments, the network device 24 may provide for a connection to an online digital media content provider, such as the iTunes® music service, available from Apple Inc.
The power source 26 of the device 10 may include the capability to power the device 10 in both non-portable and portable settings. For example, in a portable setting, the device 10 may include one or more batteries, such as a Li-Ion battery, for powering the device 10. The battery may be re-charged by connecting the device 10 to an external power source, such as an electrical wall outlet. In a non-portable setting, the power source 26 may include a power supply unit (PSU) configured to draw power from an electrical wall outlet and to distribute the power to the various components of a non-portable electronic device, such as a desktop computing system.
The display 28 may be used to display various images generated by the device 10, such as a GUI for an operating system, or image data (including still images and video data) processed by the image processing circuitry 32, as will be discussed further below. As mentioned above, the image data may include image data acquired using the imaging device 30 or image data retrieved from the memory 18 and/or the non-volatile storage 20. The display 28 may be any suitable type of display, such as a liquid crystal display (LCD), a plasma display, or an organic light emitting diode (OLED) display, for example. Additionally, as discussed above, the display 28 may be provided in conjunction with the above-discussed touch-sensitive mechanism (e.g., a touch screen) that may function as part of a control interface for the electronic device 10.
The illustrated imaging device(s) 30 may be provided in the form of a digital camera configured to acquire both still images and moving images (e.g., video). The camera 30 may include a lens and one or more image sensors configured to capture light and convert it into electrical signals. By way of example only, the image sensor may include a CMOS image sensor (e.g., a CMOS active-pixel sensor (APS)) or a CCD (charge-coupled device) sensor. Generally, the image sensor in the camera 30 includes an integrated circuit having an array of pixels, wherein each pixel includes a photodetector for sensing light. As those skilled in the art will appreciate, the photodetectors in the imaging pixels generally detect the intensity of light captured through the camera lens. However, photodetectors, by themselves, are generally unable to detect the wavelength of the captured light and, thus, are unable to determine color information.
Accordingly, the image sensor may further include a color filter array (CFA) that may overlay or be disposed over the pixel array of the image sensor to capture color information. The color filter array may include an array of small color filters, each of which may overlap a respective pixel of the image sensor and filter the captured light by wavelength. Thus, when used in conjunction, the color filter array and the photodetectors may provide both wavelength and intensity information with regard to light captured through the camera, which may be representative of a captured image.
In one embodiment, the color filter array may include a Bayer color filter array, which provides a filter pattern that is 50% green elements, 25% red elements, and 25% blue elements. For instance, FIG. 2 shows a 2×2 pixel block of a Bayer CFA that includes 2 green elements (Gr and Gb), 1 red element (R), and 1 blue element (B). Thus, an image sensor that utilizes a Bayer color filter array may provide information regarding the intensity of the light received by the camera 30 at the green, red, and blue wavelengths, whereby each image pixel records only one of the three colors (RGB). This information, which may be referred to as "raw image data" or data in the "raw domain," may then be processed using one or more demosaicing techniques to convert the raw image data into a full color image, generally by interpolating a set of red, green, and blue values for each pixel. As will be discussed further below, such demosaicing techniques may be performed by the image processing circuitry 32.
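To make the sampling pattern concrete, the sketch below builds per-channel masks for one possible tile ordering (GRBG, with Gr at the top-left); the actual ordering depends on the sensor, so the layout here is an assumption.

```python
import numpy as np

def bayer_channel_masks(height, width):
    """Boolean masks selecting each Bayer channel of a GRBG mosaic."""
    rows = np.arange(height)[:, None]
    cols = np.arange(width)[None, :]
    return {
        "Gr": (rows % 2 == 0) & (cols % 2 == 0),  # green sharing rows with red
        "R":  (rows % 2 == 0) & (cols % 2 == 1),  # red on even rows, odd cols
        "B":  (rows % 2 == 1) & (cols % 2 == 0),  # blue on odd rows, even cols
        "Gb": (rows % 2 == 1) & (cols % 2 == 1),  # green sharing rows with blue
    }
```

Since each mask selects the only samples of its color, per-color sums or histograms over a raw frame (such as the per-color auto-focus statistics mentioned above) reduce to simple reductions over these masks.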
As mentioned above, the image processing circuitry 32 may provide for various image processing steps, such as defective pixel detection/correction, lens shading correction, demosaicing, image sharpening, noise reduction, gamma correction, image enhancement, color-space conversion, image compression, chroma sub-sampling, and image scaling operations, and so forth. In some embodiments, the image processing circuitry 32 may include various subcomponents and/or discrete units of logic that collectively form an image processing "pipeline" for performing each of the various image processing steps. These subcomponents may be implemented using hardware (e.g., digital signal processors or ASICs) or software, or via a combination of hardware and software components. The various image processing operations that may be provided by the image processing circuitry 32, and particularly those relating to defective pixel detection/correction, lens shading correction, demosaicing, and image sharpening, are discussed in greater detail below.
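As a mental model of how such a pipeline composes, the stub below shows one plausible stage ordering; every stage function is a trivial placeholder (assumed names, no real processing), and the ordering actually used by the disclosed hardware is detailed in the figures that follow.

```python
import numpy as np

# Trivial stand-ins for the hardware blocks named above; their only
# purpose is to show one plausible ordering of the stages.
def correct_defects(frame):  return frame                            # placeholder
def shade_correct(frame):    return frame * 1.0                      # placeholder
def demosaic(frame):         return np.repeat(frame[..., None], 3, -1)
def sharpen(rgb):            return np.clip(rgb * 1.1, 0.0, 1.0)

def run_pipeline(raw_bayer):
    """One plausible ordering of the stages listed above; circuit 32
    implements these as discrete logic blocks, not Python functions."""
    frame = correct_defects(raw_bayer)  # fix bad sensor pixels first
    frame = shade_correct(frame)        # flatten vignetting fall-off
    rgb = demosaic(frame)               # raw Bayer -> full-color RGB
    return sharpen(rgb)                 # edge/detail enhancement last
```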
Before continuing, it should be noted that while various embodiments of the image processing techniques discussed below may utilize a Bayer CFA, the presently disclosed techniques are not intended to be limited in this regard. Indeed, those skilled in the art will appreciate that the image processing techniques provided herein may be applicable to any suitable type of color filter array, including RGBW filters, CYGM filters, and so forth.
Referring again to the electronic device 10, FIGS. 3-6 illustrate various forms that the electronic device 10 may take. As mentioned above, the electronic device 10 may take the form of a computer, including computers that are generally portable (such as laptop, notebook, and tablet computers), as well as computers that are generally non-portable (such as desktop computers, workstations, and/or servers), or other types of electronic device, such as handheld portable electronic devices (e.g., a digital media player or a mobile phone). In particular, FIGS. 3 and 4 depict the electronic device 10 in the form of a laptop computer 40 and a desktop computer 50, respectively. FIGS. 5 and 6 show front and rear views, respectively, of the electronic device 10 in the form of a handheld portable device 60.
As shown in Figure 3, the laptop computer 40 of description comprises housing 42, display 28, I/O port one 2 and input structure 14.Input structure 14 can comprise the keyboard integrated with housing 42 and touch pad mouse.In addition, input structure 14 can comprise multiple can be used for and other buttons of computer 40 mutual (such as powering up to computer or start-up simulation machine) and/or switch, with the application operating GUI or run in computer 40, and adjust multiple other aspects (such as volume, display brightness etc.) relating to computer 40 and operate.Computer 40 also can comprise the multiple I/O port one 2 of the connection being provided to other equipment, as mentioned above, such as or USB port, high-definition media interface (HDMI) port or any other type be suitable for the port being connected to external equipment.In addition, the network that computer 40 can comprise as in figure 1 above connects (such as the network equipment 26), memory (such as memory 20) and storage capacity (such as memory device 22).
In addition, in an illustrated embodiment, laptop computer 40 can comprise integrated imaging device 30 (such as camera).In another embodiment, laptop computer 40 can utilize the external camera (such as external USB camera or " camera ") being connected to one or more I/O port ones 2 using substituting or adding as integrated camera 30.Such as, external camera can obtain from Apple camera.No matter integrated or external, camera 30 can provide seizure and the record of image.Then these images being user use image-watching should be used for watching, or should be able to be used for using by other, comprise video conference application, such as and picture editting/viewing application, the Photo that such as can obtain from Apple or in a particular embodiment, the laptop computer 40 described can obtain from Apple pro, or model.In addition, in one embodiment, computer 40 can be portable tablet computing device, such as, can obtain from Apple equally the model of flat computer.
Fig. 4 further illustrates the embodiment that electronic equipment 10 is provided as desktop PC 50.It will be apparent that, desktop PC 50 can comprise the multiple roughly similar feature provided with laptop computer 40 as shown in Figure 4, but also may have usually larger global shape.As shown in the figure, desktop PC 50 can be loaded in the shell 42 comprising the various miscellaneous parts discussed in display 28 and block diagram as shown in Figure 1.In addition, desktop PC 50 can comprise by one or more I/O port (such as USB) be coupled to this computer 50 or can with the external keyboard of this computer 50 radio communication (such as by RF, bluetooth etc.) and mouse (input structure 14).As mentioned above, desktop PC 50 also can comprise can be the imaging device 30 of integrated or external camera.In a particular embodiment, the desktop PC 50 described can obtain from Apple mini or Mac model.
As further shown, the display 28 may be configured to generate various images that may be viewed by a user. For example, during operation of the computer 50, the display 28 may display a graphical user interface ("GUI") 52 that allows the user to interact with an operating system and/or applications running on the computer 50. The GUI 52 may include various layers, windows, screens, templates, or other graphical elements that may be displayed in all, or a portion, of the display device 28. For instance, in the depicted embodiment, the operating system GUI 52 may include various graphical icons 54, each of which may correspond to various applications that may be opened or executed upon detecting a user selection (e.g., via keyboard/mouse or touchscreen input). The icons 54 may be displayed in a dock 56 or within one or more graphical window elements 58 displayed on the screen. In some embodiments, the selection of an icon 54 may lead to a hierarchical navigation process, such that the selection of an icon 54 leads to a screen or opens another graphical window that includes one or more additional icons or other GUI elements. By way of example only, the operating system GUI 52 displayed in Fig. 4 may be from a version of the Mac OS operating system, available from Apple.
Continuing to Figs. 5 and 6, the electronic device 10 is further illustrated in the form of a portable handheld electronic device 60, which may be a model of a handheld device available from Apple. In the depicted embodiment, the handheld device 60 includes an enclosure 42, which may function to protect the interior components from physical damage while also shielding them from electromagnetic interference. The enclosure 42 may be formed from any suitable material, or combination of materials, such as plastic, metal, or a composite material, and may allow electromagnetic radiation of certain frequencies, such as wireless networking signals, to pass through to wireless communication circuitry (e.g., the network device 24) disposed within the enclosure 42, as shown in Fig. 5.
The enclosure 42 also includes various user input structures 14 through which a user may interface with the handheld device 60. For instance, each input structure 14 may be configured to control one or more respective device functions when pressed or actuated. By way of example, one or more of the input structures 14 may be configured to invoke a "home" screen 42 or menu to be displayed, to toggle between a sleep, wake, or powered on/off mode, to silence a ringer for a cellular phone application, to increase or decrease a volume output, and so forth. It should be understood that the illustrated input structures 14 are merely exemplary, and that the handheld device 60 may include any number of suitable user input structures existing in various forms, including buttons, switches, keys, knobs, scroll wheels, and so forth.
As shown in Fig. 5, the handheld device 60 may include various I/O ports 12. For instance, the depicted I/O ports 12 may include a proprietary connection port 12a for transmitting and receiving data files or for charging a power source 26, and an audio connection port 12b for connecting the device 60 to an audio output device (e.g., headphones or speakers). Further, in embodiments where the handheld device 60 provides cellular phone functionality, the device 60 may include an I/O port 12c for receiving a subscriber identity module (SIM) card (e.g., an expansion card 22).
The display device 28, which may be an LCD, OLED, or any suitable type of display, may display various images generated by the handheld device 60. For example, the display 28 may display various system indicators 64 providing feedback to a user with regard to one or more states of the handheld device 60, such as power status, signal strength, external device connections, and so forth. As discussed above with reference to Fig. 4, the display may also display a GUI 52 that allows a user to interact with the device 60. The GUI 52 may include graphical elements, such as the icons 54, which may correspond to various applications that may be opened or executed upon detecting a user selection of a respective icon 54. By way of example, one of the icons 54 may represent a camera application 66 that may be used in conjunction with a camera 30 (shown in phantom lines in Fig. 5) for acquiring images. Referring briefly to Fig. 6, a rear view of the handheld electronic device 60 depicted in Fig. 5 is illustrated, showing the camera 30 as being integrated with the housing 42 and positioned on the rear of the handheld device 60.
As mentioned above, image data acquired using the camera 30 may be processed using the image processing circuitry 32, which may include hardware (e.g., disposed within the enclosure 42) and/or software stored on one or more storage devices (e.g., the memory 18 or the non-volatile storage 20) of the device 60. Images acquired using the camera application 66 and the camera 30 may be stored on the device 60 (e.g., in the storage device 20) and may be viewed at a later time using a photo viewing application 68.
The handheld device 60 may also include various audio input and output elements. For example, the audio input/output elements, depicted generally by reference numeral 70, may include an input receiver, such as one or more microphones. For instance, where the handheld device 60 includes cellular phone functionality, the input receiver may be configured to receive user audio input, such as a user's voice. Additionally, the audio input/output elements 70 may include one or more output transmitters. Such output transmitters may include one or more speakers that may function to transmit audio signals to a user, such as during the playback of music data using a media player application 72. Further, in embodiments where the handheld device 60 includes a cellular phone application, an additional audio output transmitter 74 may be provided, as shown in Fig. 5. Like the output transmitters of the audio input/output elements 70, the output transmitter 74 may also include one or more speakers configured to transmit audio signals to a user, such as voice data received during a telephone call. Thus, the audio input/output elements 70 and 74 may operate in conjunction to function as the audio receiving and transmitting elements of a telephone.
Having now provided some context regarding the various forms that the electronic device 10 may take, the present disclosure will now focus on the image processing circuitry 32 depicted in Fig. 1. As mentioned above, the image processing circuitry 32 may be implemented using hardware and/or software components, and may include various processing units that define an image signal processing (ISP) pipeline. In particular, the following discussion focuses on aspects of the image processing techniques set forth in the present disclosure, especially those relating to defective pixel detection/correction techniques, lens shading correction techniques, demosaicing techniques, and image sharpening techniques.
Referring now to Fig. 7, a simplified top-level block diagram depicting several functional components that may be implemented as part of the image processing circuitry 32 is illustrated, in accordance with one embodiment of the presently disclosed techniques. Particularly, Fig. 7 is intended to illustrate how image data may flow through the image processing circuitry 32 in accordance with at least one embodiment. In order to provide a general overview of the image processing circuitry 32, a general description of how these functional components operate to process image data is provided here with reference to Fig. 7, while a more specific description of each of the illustrated functional components, as well as their respective sub-components, is provided further below.
Referring to the illustrated embodiment, the image processing circuitry 32 may include image signal processing (ISP) front-end processing logic 80, ISP pipe processing logic 82, and control logic 84. Image data captured by the imaging device 30 may first be processed by the ISP front-end logic 80 and analyzed to capture image statistics that may be used to determine one or more control parameters for the ISP pipe logic 82 and/or the imaging device 30. The ISP front-end logic 80 may be configured to capture image data from an image sensor input signal. For instance, as shown in Fig. 7, the imaging device 30 may include a camera having one or more lenses 88 and image sensor(s) 90. As discussed above, the image sensor(s) 90 may include a color filter array (e.g., a Bayer filter), and may thus provide both light intensity and wavelength information captured by each imaging pixel of the image sensors 90 to provide a set of raw image data that may be processed by the ISP front-end logic 80. For instance, the output 92 from the imaging device 30 may be received by a sensor interface 94, which may then provide the raw image data 96 to the ISP front-end logic 80 based, for example, on the sensor interface type. By way of example, the sensor interface 94 may utilize a Standard Mobile Imaging Architecture (SMIA) interface, other serial or parallel camera interfaces, or some combination thereof. In certain embodiments, the ISP front-end logic 80 may operate within its own clock domain and may provide an asynchronous interface to the sensor interface 94 to support image sensors of different sizes and timing requirements.
The raw image data 96 may be provided to the ISP front-end logic 80 and processed on a pixel-by-pixel basis in a number of formats. For instance, each image pixel may have a bit depth of 8, 10, 12, or 14 bits. The ISP front-end logic 80 may perform one or more image processing operations on the raw image data 96, as well as collect statistics about the image data 96. The image processing operations and the collection of statistical data may be performed at the same or at different bit-depth precisions. For example, in one embodiment, processing of the image pixel data 96 may be performed at a precision of 14 bits. In such embodiments, raw pixel data received by the ISP front-end logic 80 that has a bit depth of less than 14 bits (e.g., 8-bit, 10-bit, or 12-bit) may be up-sampled to 14 bits for image processing purposes. In another embodiment, statistics processing may occur at a precision of 8 bits and, thus, raw pixel data having a higher bit depth may be down-sampled to an 8-bit format for statistics purposes. As will be appreciated, down-sampling to 8 bits may reduce hardware size (e.g., area) and also reduce processing/computational complexity for the statistics data. Additionally, the raw image data may be averaged spatially to allow the statistics data to be more robust to noise.
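As a concrete illustration of the bit-depth handling described above, the following Python sketch up-samples raw pixels of varying depth to a common 14-bit working precision for processing and down-samples them to 8 bits for statistics. The shift-based conversion is an assumption made for illustration, not the method specified by this disclosure.

```python
import numpy as np

def to_processing_precision(raw: np.ndarray, bit_depth: int) -> np.ndarray:
    """Up-sample raw pixels (8/10/12/14-bit) to a 14-bit working format."""
    shift = 14 - bit_depth
    return raw.astype(np.uint16) << shift

def to_statistics_precision(raw: np.ndarray, bit_depth: int) -> np.ndarray:
    """Down-sample raw pixels to 8 bits for statistics collection."""
    shift = bit_depth - 8
    return (raw >> shift).astype(np.uint8) if shift > 0 else raw.astype(np.uint8)

# Example: a 10-bit sensor frame.
frame10 = np.array([[0, 512, 1023]], dtype=np.uint16)
print(to_processing_precision(frame10, 10))  # values scaled into the 14-bit range
print(to_statistics_precision(frame10, 10))  # values reduced to the 8-bit range
```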
Further, as shown in Fig. 7, the ISP front-end logic 80 may also receive pixel data from the memory 108. For instance, as shown by reference number 98, raw pixel data may be sent to the memory 108 from the sensor interface 94. The raw pixel data residing in the memory 108 may then be provided to the ISP front-end logic 80 for processing, as indicated by reference number 100. The memory 108 may be part of the memory 18 or the storage device 20, or may be a separate dedicated memory within the electronic device 10, and may include direct memory access (DMA) features. Further, in certain embodiments, the ISP front-end logic 80 may operate within its own clock domain and provide an asynchronous interface to the sensor interface 94 to support sensors of different sizes and having different timing requirements.
Upon receiving the raw image data 96 (from the sensor interface 94) or 100 (from the memory 108), the ISP front-end logic 80 may perform one or more image processing operations, such as temporal filtering and/or binning compensation filtering. The processed image data may then be provided to the ISP pipe logic 82 (output signal 109) for additional processing prior to being displayed (e.g., on the display device 28), or may be sent to the memory (output signal 110). The ISP pipe logic 82 receives the "front-end" processed data, either directly from the ISP front-end logic 80 or from the memory 108 (input signal 112), and may provide for additional processing of the image data in the raw domain, as well as in the RGB and YCbCr color spaces. Image data processed by the ISP pipe logic 82 may then be output (signal 114) to the display 28 for viewing by a user and/or may be further processed by a graphics engine or GPU. Additionally, output from the ISP pipe logic 82 may be sent to the memory 108 (signal 115), and the display 28 may read the image data from the memory 108 (signal 116), which, in certain embodiments, may be configured to implement one or more frame buffers. Further, in some implementations, the output of the ISP pipe logic 82 may also be provided to a compression/decompression engine 118 (signal 117) for encoding/decoding the image data. The encoded image data may be stored and later decompressed prior to being displayed on the display 28 (signal 119). By way of example, the compression engine or "encoder" 118 may be a JPEG compression engine for encoding still images, an H.264 compression engine for encoding video images, or some combination thereof, along with a corresponding decompression engine for decoding the image data. Additional information regarding the image processing operations that may be provided in the ISP pipe logic 82 is discussed in greater detail below with reference to Figs. 67-97. Also, it should be noted that the ISP pipe logic 82 may receive raw image data from the memory 108, as depicted by input signal 112.
Statistical data 102 determined by the ISP front-end logic 80 may be provided to a control logic unit 84. The statistical data 102 may include, for example, image sensor statistics relating to auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation (BLC), lens shading correction, and so forth. The control logic 84 may include a processor and/or microcontroller configured to execute one or more routines (e.g., firmware), which may be configured to determine, based upon the received statistical data 102, control parameters 104 for the imaging device 30, as well as control parameters 106 for the ISP pipe processing logic 82. By way of example only, the control parameters 104 may include sensor control parameters (e.g., gains, integration time for exposure control), camera flash control parameters, lens control parameters (e.g., focal length for focusing or zoom), or a combination of such parameters. The ISP control parameters 106 may include gain levels and color correction matrix (CCM) coefficients for auto-white balance and color adjustment (e.g., during RGB processing), as well as lens shading correction parameters which, as discussed below, may be determined based upon white point balance parameters. In some embodiments, the control logic 84 may, in addition to analyzing the statistics data 102, also analyze historical statistics that may be stored on the electronic device 10 (e.g., in the memory 18 or the storage 20).
Due to the generally complex design of the image processing circuitry 32 shown here, it may be beneficial to separate the discussion of the ISP front-end logic 80 and the ISP pipe processing logic 82 into distinct sections, as set forth below. Particularly, Figs. 8 to 66 of the present application relate to the discussion of various embodiments and aspects of the ISP front-end logic 80, while Figs. 67 to 97 of the present application relate to the discussion of various embodiments and aspects of the ISP pipe processing logic 82.
ISP Front-End Processing Logic
Fig. 8 shows a more detailed block diagram of functional logic blocks that may be implemented within the ISP front-end logic 80, in accordance with one embodiment. Depending on the configuration of the imaging device 30 and/or the sensor interface 94, as discussed above in Fig. 7, raw image data may be provided to the ISP front-end logic 80 by one or more image sensors 90. In the depicted embodiment, raw image data is provided to the ISP front-end logic 80 by a first image sensor 90a (Sensor0) and a second image sensor 90b (Sensor1). As discussed further below, each image sensor 90a and 90b may be configured to apply binning to full-resolution image data in order to increase the signal-to-noise ratio of the image signal. For instance, a binning technique such as 2x2 binning may be applied, which may interpolate a "binned" raw image pixel based upon four full-resolution image pixels of the same color. In one embodiment, this may result in four accumulated signal components associated with the binned pixel versus a single noise component, thus improving the signal-to-noise ratio of the image data, but at the cost of reducing the overall resolution. Additionally, binning may also result in an uneven or non-uniform spatial sampling of the image data, which may be corrected using binning compensation filtering, as discussed in more detail below.
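To make the 2x2 binning idea concrete, the following Python sketch averages four same-color full-resolution pixels into one binned pixel. Averaging is one common way to combine the four samples and is an assumption here, since the text describes the combination only in general terms.

```python
import numpy as np

def bin_2x2(plane: np.ndarray) -> np.ndarray:
    """Combine each 2x2 block of same-color pixels into one binned pixel.

    `plane` holds the samples of a single Bayer color plane. With four
    signal components accumulated per output pixel against one noise
    component, SNR improves while resolution halves in each dimension.
    """
    h, w = plane.shape
    blocks = plane[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

full_res = np.arange(16, dtype=np.float64).reshape(4, 4)
print(bin_2x2(full_res))  # 2x2 output, each value the mean of one 2x2 block
```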
As shown, the image sensors 90a and 90b may provide the raw image data as signals Sif0 and Sif1, respectively. Each of the image sensors 90a and 90b may generally be associated with a respective statistics processing unit 120 (StatsPipe0) and 122 (StatsPipe1), which may be configured to process image data for the determination of one or more sets of statistics (as indicated by signals Stats0 and Stats1), including statistics relating to auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens shading correction, and so forth. In certain embodiments, when only one of the sensors 90a or 90b is actively acquiring images, the image data may be sent to both StatsPipe0 and StatsPipe1 if additional statistics are desired. For instance, if StatsPipe0 and StatsPipe1 are both available, StatsPipe0 may be utilized to collect statistics for one color space (e.g., RGB), and StatsPipe1 may be utilized to collect statistics for another color space (e.g., YUV or YCbCr). That is, the statistics processing units 120 and 122 may operate in parallel to collect multiple sets of statistics for each frame of the image data acquired by the active sensor.
In the present embodiment, five asynchronous sources of data are provided in the ISP front-end 80. These include: (1) a direct input from the sensor interface corresponding to Sensor0 (90a) (referred to as Sif0 or Sens0), (2) a direct input from the sensor interface corresponding to Sensor1 (90b) (referred to as Sif1 or Sens1), (3) an input of Sensor0 data from the memory 108 (referred to as SifIn0 or Sens0DMA), which may include a DMA interface, (4) an input of Sensor1 data from the memory 108 (referred to as SifIn1 or Sens1DMA), and (5) a set of image data having frames from the Sensor0 and Sensor1 data inputs retrieved from the memory 108 (referred to as FeProcIn or ProcInDMA). The ISP front-end 80 may also include multiple destinations to which image data from a source may be routed, wherein each destination may be either a storage location in memory (e.g., in 108) or a processing unit. For instance, in the present embodiment, the ISP front-end 80 includes six destinations: (1) Sif0DMA for receiving Sensor0 data in the memory 108, (2) Sif1DMA for receiving Sensor1 data in the memory 108, (3) a first statistics processing unit 120 (StatsPipe0), (4) a second statistics processing unit 122 (StatsPipe1), (5) a front-end pixel processing unit (FEProc) 130, and (6) FeOut (or FEProcOut) to the memory 108 or the ISP pipeline 82 (discussed in further detail below). In one embodiment, the ISP front-end 80 may be configured such that only certain destinations are valid for a particular source, as shown in Table 1 below.
             SIf0DMA  SIf1DMA  StatsPipe0  StatsPipe1  FEProc  FEOut
Sens0           X                  X           X          X      X
Sens1                    X         X           X          X      X
Sens0DMA                           X
Sens1DMA                                       X
ProcInDMA                                                 X      X

Table 1: Example of valid ISP front-end destinations for each source
For example, in accordance with Table 1, source Sens0 (the sensor interface of Sensor0) may be configured to provide data to destinations SIf0DMA (signal 134), StatsPipe0 (signal 136), StatsPipe1 (signal 138), FEProc (signal 140), or FEOut (signal 142). With regard to FEOut, source data may, in some instances, be provided to FEOut to bypass pixel processing by FEProc, such as for debugging or test purposes. Additionally, source Sens1 (the sensor interface of Sensor1) may be configured to provide data to destinations SIf1DMA (signal 144), StatsPipe0 (signal 146), StatsPipe1 (signal 148), FEProc (signal 150), or FEOut (signal 152); source Sens0DMA (Sensor0 data from the memory 108) may be configured to provide data to StatsPipe0 (signal 154); source Sens1DMA (Sensor1 data from the memory 108) may be configured to provide data to StatsPipe1 (signal 156); and source ProcInDMA (Sensor0 and Sensor1 data from the memory 108) may be configured to provide data to FEProc (signal 158) and FEOut (signal 160).
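The source-to-destination constraints of Table 1 could be modeled in software as a simple lookup, as in the Python sketch below. The data structure is an illustrative assumption and not a description of the actual register hardware.

```python
# Valid destinations per source, mirroring Table 1.
VALID_DESTINATIONS = {
    "Sens0":     {"SIf0DMA", "StatsPipe0", "StatsPipe1", "FEProc", "FEOut"},
    "Sens1":     {"SIf1DMA", "StatsPipe0", "StatsPipe1", "FEProc", "FEOut"},
    "Sens0DMA":  {"StatsPipe0"},
    "Sens1DMA":  {"StatsPipe1"},
    "ProcInDMA": {"FEProc", "FEOut"},
}

def check_routing(source: str, destinations: set[str]) -> None:
    """Raise if a requested routing violates the Table 1 constraints."""
    invalid = destinations - VALID_DESTINATIONS[source]
    if invalid:
        raise ValueError(f"{source} cannot be routed to {sorted(invalid)}")

check_routing("Sens0", {"SIf0DMA", "StatsPipe0", "FEProc"})  # OK
try:
    check_routing("Sens0DMA", {"FEProc"})                    # invalid per Table 1
except ValueError as err:
    print(err)
```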
It should be noted that the presently illustrated embodiment is configured such that Sens0DMA (Sensor0 frames from the memory 108) and Sens1DMA (Sensor1 frames from the memory 108) are provided only to StatsPipe0 and StatsPipe1, respectively. This configuration allows the ISP front-end 80 to retain a certain number of previous frames (e.g., 5 frames) in memory. For instance, due to a delay or lag between the time a user initiates a capture event using the image sensor (e.g., transitioning the camera system from a preview mode to a capture or recording mode, or even just turning on or initializing the image sensor) and the time an image scene is captured, not every frame the user intended to capture may be captured and processed in substantially real time. Thus, by retaining a certain number of previous frames in the memory 108 (e.g., from a preview phase), these previous frames may be processed later or alongside the frames actually captured in response to the capture event, thereby compensating for any such lag and providing a more complete set of image data.
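One simple way to picture this frame-retention scheme is a ring buffer of the most recent preview frames, as in the sketch below. The buffer depth of 5 follows the example above; the ring-buffer structure itself is an assumption made for illustration.

```python
from collections import deque

class FrameRetentionBuffer:
    """Keep the last N preview frames so a capture event can reach back in time."""

    def __init__(self, depth: int = 5):
        self._frames = deque(maxlen=depth)  # oldest frames fall off automatically

    def push(self, frame) -> None:
        self._frames.append(frame)

    def frames_for_capture(self, captured) -> list:
        """Previous preview frames plus the frame captured at the event."""
        return list(self._frames) + [captured]

buf = FrameRetentionBuffer()
for n in range(8):                 # preview frames 0..7 arrive
    buf.push(f"preview-{n}")
print(buf.frames_for_capture("capture-0"))
# ['preview-3', ..., 'preview-7', 'capture-0']; the lag is hidden from the user
```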
With regard to the configuration shown in Fig. 8, it should be noted that StatsPipe0 120 is configured to receive one of the inputs 136 (from Sens0), 146 (from Sens1), and 154 (from Sens0DMA), as determined by selection logic 124, such as a multiplexer. Similarly, selection logic 126 may select an input from the signals 138, 156, and 148 to provide to StatsPipe1, and selection logic 132 may select an input from the signals 140, 150, and 158 to provide to FEProc. As mentioned above, the statistical data (Stats0 and Stats1) may be provided to the control logic 84 for the determination of various control parameters that may be used to operate the imaging device 30 and/or the ISP pipe processing logic 82. As can be appreciated, the selection logic blocks shown in Fig. 8 (124, 126, and 132) may be provided by any suitable type of logic, such as a multiplexer that selects one of multiple input signals in response to a control signal.
The pixel processing unit (FEProc) 130 may be configured to perform various image processing operations on the raw image data on a pixel-by-pixel basis. As shown, FEProc 130, as a destination processing unit, may receive image data from the sources Sens0 (signal 140), Sens1 (signal 150), or ProcInDMA (signal 158) by way of the selection logic 132. FEProc 130 may also receive and output various signals when performing the pixel processing operations, which may include temporal filtering and binning compensation filtering, as discussed further below (e.g., Rin, Hin, Hout, and Yout, which may represent motion history and luma data used during temporal filtering). The output 109 (FEProcOut) of the pixel processing unit 130 may then be forwarded to the ISP pipe logic 82, such as via one or more first-in-first-out (FIFO) queues, or may be sent to the memory 108.
Further, as shown in Fig. 8, the selection logic 132, in addition to receiving the signals 140, 150, and 158, may also receive the signals 159 and 161. The signal 159 may represent "pre-processed" raw image data from StatsPipe0, and the signal 161 may represent "pre-processed" raw image data from StatsPipe1. As discussed below, each of the statistics processing units may apply one or more pre-processing operations to the raw image data before statistics are collected. In one embodiment, each of the statistics processing units may perform a degree of defective pixel detection/correction, lens shading correction, black level compensation, and inverse black level compensation. Thus, the signals 159 and 161 may represent raw image data that has been processed using the aforementioned pre-processing operations (discussed in further detail below with regard to Fig. 37). Accordingly, the selection logic 132 gives the ISP front-end processing logic 80 the flexibility of providing either un-pre-processed raw image data from Sensor0 (signal 140) and Sensor1 (signal 150), or pre-processed raw image data from StatsPipe0 (signal 159) and StatsPipe1 (signal 161). Additionally, as shown by the selection logic units 162 and 163, the ISP front-end processing logic 80 also has the flexibility of writing either un-pre-processed raw image data from Sensor0 (signal 134) or Sensor1 (signal 144) to the memory 108, or writing pre-processed raw image data from StatsPipe0 (signal 159) or StatsPipe1 (signal 161) to the memory 108.
To control the operation of the ISP front-end logic 80, a front-end control unit 164 is provided. The control unit 164 may be configured to initialize and program control registers (referred to herein as "go registers") for configuring and initiating the processing of an image frame, and to select the appropriate register bank(s) for updating double-buffered data registers. In some embodiments, the control unit 164 may also provide performance monitoring logic to log clock cycles, memory latency, and quality of service (QOS) information. Further, the control unit 164 may also control dynamic clock gating, which may be used to disable clocks to one or more portions of the ISP front-end 80 when there is not enough data in the input queue from the active sensor.
Using the "go registers" mentioned above, the control unit 164 may control the updating of various parameters for each of the processing units (e.g., StatsPipe0, StatsPipe1, and FEProc) and may interface with the sensor interfaces to control the starting and stopping of the processing units. Generally, each of the front-end processing units operates on a frame-by-frame basis. As discussed above (Table 1), the input to the processing units may come from a sensor interface (Sens0 or Sens1) or from the memory 108. Further, the processing units may utilize various parameters and configuration data, which may be stored in corresponding data registers. In one embodiment, the data registers associated with each processing unit or destination may be grouped into blocks forming a register bank group. In the embodiment of Fig. 8, seven register bank groups may be defined in the ISP front-end: SIf0, SIf1, StatsPipe0, StatsPipe1, ProcPipe, FEOut, and ProcIn. Each register block address space is duplicated to provide two banks of registers. Only registers that are double-buffered are instantiated in the second bank. If a register is not double-buffered, its address in the second bank is mapped to the address of the same register in the first bank.
For registers that are double-buffered, the registers from one bank are active and used by the processing units, while the registers from the other bank are shadowed. A shadowed register may be updated by the control unit 164 during the current frame interval while the hardware is using the active registers. The determination of which bank to use for a particular processing unit at a particular frame may be specified by a "NextBk" (next bank) field in the go register corresponding to the source providing the image data to that processing unit. Essentially, NextBk is a field that allows the control unit 164 to control which register bank becomes active upon a triggering event for the subsequent frame.
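A minimal sketch of this double-buffering scheme is shown below in Python: the shadow bank is written while the active bank is in use, and a trigger event swaps the roles according to a NextBk-style selector. All names are illustrative; the real mechanism is implemented in hardware registers.

```python
class DoubleBufferedRegisters:
    """Two register banks: one active (read by hardware), one shadow (writable)."""

    def __init__(self):
        self.banks = [{}, {}]   # bank 0 and bank 1
        self.active = 0         # index of the bank the hardware is using

    def write_shadow(self, name: str, value: int) -> None:
        """Software updates the inactive (shadow) bank mid-frame."""
        self.banks[1 - self.active][name] = value

    def trigger(self, next_bk: int) -> None:
        """On the frame trigger event, the bank named by NextBk becomes active."""
        self.active = next_bk

regs = DoubleBufferedRegisters()
regs.write_shadow("gain", 4)    # prepared for the next frame
regs.trigger(next_bk=1)         # next frame: bank 1 is now active
print(regs.banks[regs.active])  # {'gain': 4}
```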
Before discussing the operation of the go registers in detail, Fig. 9 provides a general method 166 for processing image data on a frame-by-frame basis in accordance with the present techniques. Beginning at step 168, the destination processing units targeted by a data source (e.g., Sens0, Sens1, Sens0DMA, Sens1DMA, or ProcInDMA) enter an idle state. This may indicate that processing for the current frame is complete and, therefore, the control unit 164 may prepare for processing the next frame. For instance, at step 170, programmable parameters for each destination processing unit are updated. This may include, for example, updating the NextBk field in the go register corresponding to the source, as well as updating any parameters in the data registers corresponding to the destination units. Thereafter, at step 172, a trigger event may place the destination units into a run state. Further, as shown at step 174, each destination unit targeted by the source completes its processing operations for the current frame, and the method 166 may subsequently return to step 168 for the processing of the next frame.
Fig. 10 depicts a block diagram showing two banks of data registers 176 and 178 that may be used by the various destination units of the ISP front-end. For instance, Bank 0 (176) may include the data registers 1-n (176a-176d), and Bank 1 (178) may include the data registers 1-n (178a-178d). As discussed above, the embodiment shown in Fig. 8 may utilize a register bank (Bank 0) having seven register bank groups (e.g., SIf0, SIf1, StatsPipe0, StatsPipe1, ProcPipe, FEOut, and ProcIn). Thus, in such an embodiment, the register block address space of each register is duplicated to provide a second register bank (Bank 1).
Fig. 10 also illustrates a go register 180 that may correspond to one of the sources. As shown, the go register 180 includes a "NextVld" field 182 and the above-mentioned "NextBk" field 184. These fields may be programmed prior to starting the processing of the current frame. Particularly, NextVld may indicate the destination(s) to which data from the source is to be sent. As discussed above, NextBk may select a corresponding data register from either Bank 0 or Bank 1 for each destination targeted, as indicated by NextVld. Though not shown in Fig. 10, the go register 180 may also include an arming bit, referred to herein as a "go bit," which may be set to arm the go register. When a trigger event 192 for the current frame is detected, NextVld and NextBk may be copied into a CurrVld field 188 and a CurrBk field 190 of a corresponding current or "active" register 186. In one embodiment, the current register(s) 186 may be read-only registers that may be set by hardware while remaining inaccessible to software commands within the ISP front-end 80.
As will be appreciated, a corresponding go register may be provided for each ISP front-end source. For the purposes of this disclosure, the go registers corresponding to the above-discussed sources Sens0, Sens1, Sens0DMA, Sens1DMA, and ProcInDMA may be referred to as Sens0Go, Sens1Go, Sens0DMAGo, Sens1DMAGo, and ProcInDMAGo, respectively. As mentioned above, the control unit may utilize the go registers to control the sequencing of frame processing within the ISP front-end 80. Each go register contains a NextVld field and a NextBk field to indicate, respectively, which destinations will be valid and which register bank (0 or 1) will be used for the next frame. When the trigger event 192 for the next frame occurs, the NextVld and NextBk fields are copied to a corresponding active read-only register 186 that indicates the current valid destinations and bank numbers, as shown above in Fig. 10. Each source may be configured to operate asynchronously and may send data to any of its valid destinations. Further, it should be understood that for each destination, generally only one source may be active during the current frame.
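The arm-and-trigger handshake described above can be sketched as follows, with the Next* fields staged by software and latched into the read-only Curr* fields on the trigger event. The field widths and names follow the text, but the class structure and example values are illustrative assumptions.

```python
class GoRegister:
    """Per-source go register: armed by software, latched by the frame trigger."""

    def __init__(self):
        self.go = False      # arming ("go") bit
        self.next_vld = 0    # bitmask of destinations for the next frame
        self.next_bk = 0     # bitmask of register banks per destination
        self.curr_vld = 0    # hardware-set, read-only to software
        self.curr_bk = 0

    def arm(self, next_vld: int, next_bk: int) -> None:
        self.next_vld, self.next_bk = next_vld, next_bk
        self.go = True

    def on_trigger(self) -> None:
        """Frame trigger: copy the Next* fields into the active Curr* fields."""
        if self.go:
            self.curr_vld, self.curr_bk = self.next_vld, self.next_bk
            self.go = False

sens0_go = GoRegister()
sens0_go.arm(next_vld=0b101001, next_bk=0b0000000)  # arbitrary example values
sens0_go.on_trigger()
print(bin(sens0_go.curr_vld))  # destinations now in effect for this frame
```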
With regard to the arming and triggering of the go register 180, asserting an arming bit, or "go bit," in the go register 180 arms the corresponding source with the associated NextVld and NextBk fields. For triggering, various modes are available depending on whether the source input data is read from memory (e.g., Sens0DMA, Sens1DMA, or ProcInDMA) or from a sensor interface (e.g., Sens0 or Sens1). For instance, if the input comes from the memory 108, the arming of the go bit itself may serve as the trigger event, since the control unit 164 has control over when data is read from the memory 108. If the image frames are being input by a sensor interface, the trigger event may depend upon the timing at which the corresponding go register is armed relative to when data from the sensor interface is received. In accordance with the present embodiment, three different techniques for trigger timing based on a sensor interface input are shown in Figs. 11-13.
Referring first to Fig. 11, a first scenario is illustrated in which triggering occurs once all destinations targeted by the source transition from a busy or run state to an idle state. Here, the data signal VVALID (196) represents the image data signal from the source. The pulse 198 represents the current frame of image data, the pulse 202 represents the next frame of image data, and the interval 200 represents the vertical blanking interval (VBLANK) 200 (e.g., the time differential between the last line of the current frame 198 and the next frame 202). The time differential between the rising edge and the falling edge of the pulse 198 represents the frame interval 201. Thus, in Fig. 11, the source may be configured to trigger when all targeted destinations have finished processing the current frame 198 and transitioned to an idle state. In this scenario, the source is armed (e.g., by setting the arming or "go" bit) before the destinations complete processing, so that the source may trigger and initiate processing of the next frame 202 as soon as the targeted destinations go idle. During the vertical blanking interval 200, the processing units may be set up and configured for the next frame 202, using the register banks specified by the go register corresponding to the source, before the sensor input data arrives. By way of example only, read buffers used by FEProc 130 may be filled before the next frame 202 arrives. In this case, the shadow registers corresponding to the active register banks may be updated after the trigger event, thus allowing a full frame interval to set up the double-buffered registers for the frame after that (e.g., after frame 202).
Fig. 12 illustrates a second scenario in which the source is triggered by arming the go bit in the go register corresponding to the source. Under this "trigger-on-go" configuration, the destination units targeted by the source are already idle, and the arming of the go bit is the trigger event. This triggering mode may be utilized for registers that are not double-buffered and, therefore, are updated during vertical blanking (e.g., as opposed to updating a double-buffered shadow register during the frame interval 201).
Fig. 13 illustrates a third triggering mode in which the source is triggered upon detecting the beginning of the next frame, i.e., a rising VSYNC edge. However, it should be noted that in this mode, if the go register is armed (by setting the go bit) after the next frame 202 has already started processing, the source will use the target destinations and register banks corresponding to the previous frame, since the CurrVld and CurrBk fields are not updated before the destination begins processing. This leaves no vertical blanking interval for setting up the destination processing units and may potentially result in dropped frames, particularly when operating in a dual sensor mode. It should be noted, however, that this mode may nonetheless result in accurate operation if the image processing circuitry 32 is operating in a single sensor mode that uses the same register banks for each frame (e.g., the destinations (NextVld) and register banks (NextBk) do not change).
Referring now to Fig. 14, the control register (or "go register") 180 is illustrated in more detail. The go register 180 includes the arming "go" bit 204, as well as the NextVld field 182 and the NextBk field 184. As discussed above, each source of the ISP front-end 80 (e.g., Sens0, Sens1, Sens0DMA, Sens1DMA, or ProcInDMA) may have a corresponding go register 180. In one embodiment, the go bit 204 may be a single-bit field, and the go register 180 may be armed by setting the go bit 204 to 1. The NextVld field 182 may contain a number of bits corresponding to the number of destinations in the ISP front-end 80. For instance, in the embodiment shown in Fig. 8, the ISP front-end includes six destinations: Sif0DMA, Sif1DMA, StatsPipe0, StatsPipe1, FEProc, and FEOut. Accordingly, the go register 180 may include six bits in the NextVld field 182, one corresponding to each destination, wherein targeted destinations are set to 1. Similarly, the NextBk field 184 may contain a number of bits corresponding to the number of data registers in the ISP front-end 80. For instance, as discussed above, the embodiment of the ISP front-end 80 shown in Fig. 8 may include seven data registers: SIf0, SIf1, StatsPipe0, StatsPipe1, ProcPipe, FEOut, and ProcIn. Accordingly, the NextBk field 184 may include seven bits, one corresponding to each data register, wherein the data registers corresponding to Bank 0 and Bank 1 are selected by setting their respective bit values to 0 or 1, respectively. Thus, using the go register 180, the source, upon triggering, knows precisely which destination units are to receive frame data, and which register banks are to be used for configuring the targeted destination units.
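The bit layout just described (one go bit, six NextVld bits, seven NextBk bits) might be packed as in the Python sketch below, which encodes the Sens0Go configuration from Table 2 further below (single sensor mode: SIf0DMA, StatsPipe0, and FEProc valid, all banks 0). The packing order is an assumption made for illustration.

```python
DESTINATIONS = ["SIf0DMA", "SIf1DMA", "StatsPipe0", "StatsPipe1", "FEProc", "FEOut"]
BANKS = ["SIf0", "SIf1", "StatsPipe0", "StatsPipe1", "ProcPipe", "FEOut", "ProcIn"]

def pack_go_register(go: bool, next_vld: dict, next_bk: dict) -> int:
    """Pack the go bit, 6-bit NextVld, and 7-bit NextBk into one word (layout assumed)."""
    word = int(go)                                    # bit 0: go bit
    for i, dest in enumerate(DESTINATIONS):
        word |= next_vld.get(dest, 0) << (1 + i)      # bits 1..6: NextVld
    for i, bank in enumerate(BANKS):
        word |= next_bk.get(bank, 0) << (7 + i)       # bits 7..13: NextBk
    return word

# Sens0Go per Table 2: SIf0DMA=1, StatsPipe0=1, FEProc=1, all other bits 0.
sens0_go = pack_go_register(
    go=True,
    next_vld={"SIf0DMA": 1, "StatsPipe0": 1, "FEProc": 1},
    next_bk={},
)
print(f"{sens0_go:#016b}")  # armed register word with three destination bits set
```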
Additionally, because the ISP circuitry 32 supports dual sensor configurations, the ISP front-end may operate in a single sensor configuration mode (e.g., only one sensor is acquiring data) and in a dual sensor configuration mode (e.g., both sensors are acquiring data). In a typical single sensor configuration, input data from a sensor interface (e.g., Sens0) is sent to StatsPipe0 (for statistics processing) and FEProc (for pixel processing). In addition, sensor frames may also be sent to memory (SIf0DMA) for future processing, as discussed above.
An example of how the NextVld fields corresponding to each source of the ISP front-end 80 may be configured when operating in the single sensor mode is depicted in Table 2 below.
             SIf0DMA  SIf1DMA  StatsPipe0  StatsPipe1  FEProc  FEOut
Sens0Go         1        X         1           0          1      0
Sens1Go         X        0         0           0          0      0
Sens0DMAGo      X        X         0           X          X      X
Sens1DMAGo      X        X         X           0          X      X
ProcInDMAGo     X        X         X           X          0      0

Table 2: NextVld example per source: single sensor mode
As discussed above with reference to Table 1, the ISP front-end 80 may be configured such that only certain destinations are valid for a particular source. Thus, the destinations in Table 2 marked with "X" are intended to indicate that the ISP front-end 80 is not configured to allow the particular source to send frame data to that destination. For such destinations, the bits of the NextVld field of that particular source corresponding to that destination may always be 0. It should be understood, however, that this is merely one embodiment and, indeed, in other embodiments the ISP front-end 80 may be configured such that each source is capable of targeting each available destination unit.
The configuration shown above in Table 2 represents a single sensor mode in which only Sensor0 is providing frame data. For instance, the Sens0Go register indicates SIf0DMA, StatsPipe0, and FEProc as destinations. Accordingly, when triggered, each frame of the Sensor0 image data is sent to these three destinations. As discussed above, SIf0DMA may store frames in the memory 108 for later processing, StatsPipe0 applies statistics processing to determine various statistical data points, and FEProc processes the frame using, for example, temporal filtering and binning compensation filtering. Further, in some configurations where additional statistics are desired (e.g., statistics in different color spaces), StatsPipe1 may also be enabled (with its corresponding NextVld bit set to 1) during the single sensor mode. In such embodiments, the Sensor0 frame data is sent to both StatsPipe0 and StatsPipe1. Further, as shown in the present embodiment, only a single sensor interface (e.g., Sens0 or, alternatively, Sens1) is the active source during the single sensor mode.
With this in mind, Fig. 15 provides a flow chart depicting a method 206 for processing frame data in the ISP front-end 80 when only a single sensor (e.g., Sensor0) is active. While the method 206 illustrates, in particular, the processing of Sensor0 frame data by FEProc 130 as an example, it should be understood that this process may be applied to any other source and corresponding destination unit in the ISP front-end 80. Beginning at step 208, Sensor0 starts acquiring image data and sending the captured frames to the ISP front-end 80. As indicated at step 210, the control unit 164 may initialize the programming of the go register corresponding to Sens0 (the Sensor0 interface) to determine the target destinations (including FEProc) and which bank registers to use. Thereafter, decision logic 212 determines whether a source trigger event has occurred. As discussed above, frame data input from a sensor interface may utilize different trigger modes (Figs. 11-13). If a trigger event is not detected, the process 206 continues to wait for the trigger. Once triggering occurs, the next frame becomes the current frame and is sent to FEProc (and other target destinations) for processing at step 214. FEProc may be configured using data parameters based on the corresponding data register (ProcPipe) specified in the NextBk field of the Sens0Go register. After processing of the current frame is completed at step 216, the method 206 may return to step 210, at which the Sens0Go register is programmed for the next frame.
When both Sensor0 and Sensor1 of the ISP front-end 80 are active, statistics processing remains generally straightforward, since each sensor input may be processed by a respective statistics block, StatsPipe0 and StatsPipe1. However, because the illustrated embodiment of the ISP front-end 80 provides only a single pixel processing unit (FEProc), FEProc may be configured to alternate between processing frames corresponding to Sensor0 input data and frames corresponding to Sensor1 input data. As will be appreciated, in the illustrated embodiment the image frames are read into FEProc from memory, which avoids a condition in which image data from one sensor is processed in real time while image data from the other sensor is not. For instance, as shown in Table 3 below, which depicts one possible configuration of the NextVld fields in the go registers for each source when the ISP front-end 80 is operating in a dual sensor mode, the input data from each sensor is sent to memory (SIf0DMA and SIf1DMA) and to the corresponding statistics processing unit (StatsPipe0 and StatsPipe1).
             SIf0DMA  SIf1DMA  StatsPipe0  StatsPipe1  FEProc  FEOut
Sens0Go         1        X         1           0          0      0
Sens1Go         X        1         0           1          0      0
Sens0DMAGo      X        X         0           X          X      X
Sens1DMAGo      X        X         X           0          X      X
ProcInDMAGo     X        X         X           X          1      0

Table 3: NextVld example per source: dual sensor mode
The sensor frames in memory are sent to FEProc from the ProcInDMA source such that they alternate between Sensor0 and Sensor1 at a rate based on their corresponding frame rates. For instance, if Sensor0 and Sensor1 are both acquiring image data at a rate of 30 frames per second (fps), then their sensor frames may be interleaved in a 1-to-1 manner. If Sensor0 (30 fps) is acquiring image data at a rate twice that of Sensor1 (15 fps), then the interleaving may be 2-to-1. That is, two frames of Sensor0 data are read out of memory for every one frame of Sensor1 data.
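An interleaving pattern like the one described could be derived from the two frame rates as in the sketch below. The ratio-based round-robin is a straightforward reading of the example above, offered as an illustration rather than the hardware's actual scheduling algorithm.

```python
from math import gcd

def interleave_pattern(fps0: int, fps1: int) -> list[str]:
    """Read order for one repeating cycle, proportional to each sensor's rate."""
    g = gcd(fps0, fps1)
    n0, n1 = fps0 // g, fps1 // g      # frames per cycle for each sensor
    order = []
    # Spread the reads as evenly as the ratio allows.
    for i in range(max(n0, n1)):
        if i < n0:
            order.append("Sensor0")
        if i < n1:
            order.append("Sensor1")
    return order

print(interleave_pattern(30, 30))  # ['Sensor0', 'Sensor1']: 1-to-1
print(interleave_pattern(30, 15))  # ['Sensor0', 'Sensor1', 'Sensor0']: 2-to-1
```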
With this in mind, Fig. 16 depicts a method 220 for processing frame data in the ISP front-end 80 having two sensors acquiring image data simultaneously. At step 222, both Sensor0 and Sensor1 begin acquiring image frames. As will be appreciated, Sensor0 and Sensor1 may acquire the image frames using different frame rates, resolutions, and so forth. At step 224, the acquired frames from Sensor0 and Sensor1 are written to the memory 108 (e.g., using the SIf0DMA and SIf1DMA destinations). Next, as indicated at step 226, the source ProcInDMA reads the frame data from the memory 108 in an alternating manner. As discussed, the frames may alternate between Sensor0 data and Sensor1 data depending on the frame rates at which the data is acquired. At step 228, the next frame from ProcInDMA is acquired. Thereafter, at step 230, the NextVld and NextBk fields of the go register corresponding to the source (here, ProcInDMA) are programmed, depending on whether the next frame is Sensor0 or Sensor1 data. Thereafter, decision logic 232 determines whether a source trigger event has occurred. As discussed above, data input from memory may be triggered by arming the go bit (e.g., the "trigger-on-go" mode). Thus, triggering may occur once the go bit of the go register is set to 1. Once triggering occurs, the next frame becomes the current frame and is sent to FEProc for processing at step 234. As discussed above, FEProc may be configured using data parameters based on the corresponding data register (ProcPipe) specified in the NextBk field of the ProcInDMAGo register. After processing of the current frame is completed at step 236, the method 220 may return to step 228 and continue.
A further operational event that the ISP front-end 80 is configured to handle is a configuration change during image processing. For instance, such an event may occur when the ISP front-end 80 transitions from a single sensor configuration to a dual sensor configuration, or vice versa. As discussed above, the NextVld fields for certain sources may differ depending on whether one or two image sensors are active. Thus, when the sensor configuration is changed, the ISP front-end control unit 164 may release all destination units before they are targeted by a new source. This may avoid invalid configurations (e.g., assigning multiple sources to one destination). In one embodiment, the release of the destination units may be accomplished by setting the NextVld fields of all the go registers to 0, thus disabling all destinations, and arming the go bit. After the destination units are released, the go registers may be reconfigured in accordance with the current sensor mode, and image processing may continue.
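The release-then-reconfigure sequence might look like the following Python sketch, built on the GoRegister class sketched earlier. The wait-for-idle loop and the `all_idle` callable are illustrative stand-ins for the hardware's idle signaling.

```python
import time

def switch_sensor_configuration(go_registers: dict, all_idle, new_config: dict) -> None:
    """Release every destination, wait for idle, then apply the new NextVld setup.

    `go_registers` maps source name -> GoRegister (sketched above); `all_idle`
    is a callable reporting whether every destination unit has gone idle.
    """
    # Step 1: clear every NextVld field and arm, so the next trigger
    # sends frames to no destination at all.
    for reg in go_registers.values():
        reg.arm(next_vld=0, next_bk=0)

    # Step 2: wait until all destination units finish their current frame.
    while not all_idle():
        time.sleep(0.001)  # placeholder for hardware idle polling

    # Step 3: reprogram each source for the new single/dual sensor mode.
    for source, (vld, bk) in new_config.items():
        go_registers[source].arm(next_vld=vld, next_bk=bk)
```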
Fig. 17 shows a method 240 for switching between single and dual sensor configurations in accordance with one embodiment. Beginning at step 242, a next frame of image data from a particular source of the ISP front-end 80 is identified. At step 244, the target destinations (NextVld) are programmed into the go register corresponding to the source. Next, at step 246, depending on the target destinations, NextBk is programmed to point to the correct data registers associated with the target destinations. Thereafter, decision logic 248 determines whether a source trigger event has occurred. Once triggering has occurred, the next frame is sent to the destination units specified by NextVld and processed by those destination units using the corresponding data registers specified by NextBk, as shown at step 250. Processing continues until step 252, at which processing of the current frame is completed.
Subsequently, decision logic 254 determines whether the target destinations of the source have changed. As discussed above, the NextVld settings of the go registers corresponding to Sens0 and Sens1 may vary depending on whether one or two sensors are active. For instance, referring to Table 2, if only Sensor0 is active, Sensor0 data is sent to SIf0DMA, StatsPipe0, and FEProc. However, referring to Table 3, if both Sensor0 and Sensor1 are active, then Sensor0 data is not sent directly to FEProc. Instead, as mentioned above, Sensor0 and Sensor1 data is written to the memory 108 and read out to FEProc in an alternating manner by the source ProcInDMA. Thus, if no target destination change is detected at decision logic 254, the control unit 164 deduces that the sensor configuration has not changed, and the method 240 returns to step 246, at which the NextBk field of the source go register is programmed to point to the correct data registers for the next frame, and processing continues.
If, however, decision logic 254 detects a destination change, then the control unit 164 determines that a sensor configuration change has occurred. For instance, this could represent switching from the single sensor mode to the dual sensor mode, or shutting off the sensors altogether. Accordingly, the method 240 continues to step 256, at which all bits of the NextVld fields of all go registers are set to 0, thus effectively disabling the sending of frames to any destination on the next trigger. Then, at decision logic 258, a determination is made as to whether all destination units have transitioned to an idle state. If not, the method 240 waits at decision logic 258 until all destination units have completed their current operations. Next, at decision logic 260, a determination is made as to whether image processing is to continue. For instance, if the destination change represents the deactivation of both Sensor0 and Sensor1, then image processing ends at step 262. However, if it is determined that image processing is to continue, then the method 240 returns to step 244, and the NextVld fields of the go registers are programmed in accordance with the current operation mode (e.g., single sensor or dual sensor). As shown here, the steps 254-262 for clearing the go registers and destination fields may collectively be referred to by reference number 264.
Next, Fig. 18 shows a further embodiment by way of the flow chart (method 265), which provides another dual sensor mode of operation. The method 265 depicts a condition in which one sensor (e.g., Sensor0) is actively acquiring image data and sending the image frames to FEProc 130 for processing, while also sending the image frames to StatsPipe0 and/or the memory 108 (Sif0DMA), while the other sensor (e.g., Sensor1) is inactive (e.g., turned off), as shown at step 266. Decision logic 268 then detects a condition in which Sensor1 will become active on the next frame to send image data to FEProc. If this condition is not met, the method 265 returns to step 266. However, if this condition is met, the method 265 proceeds by performing action 264 (collectively, the steps 254-262 of Fig. 17), whereby the destination fields of the sources are cleared and reconfigured at step 264. For instance, at step 264, the NextVld field of the go register associated with Sensor1 may be programmed to specify FEProc as a destination, along with StatsPipe1 and/or the memory (Sif1DMA), while the NextVld field of the go register associated with Sensor0 may be programmed to clear FEProc as a destination. In this embodiment, although the frames captured by Sensor0 are not sent to FEProc on the next frame, Sensor0 may remain active and continue to send its image frames to StatsPipe0, as shown at step 270, while Sensor1 captures and sends data to FEProc for processing at step 272. Thus, both sensors, Sensor0 and Sensor1, may continue to operate in this "dual sensor" mode, even though only the image frames from one sensor are sent to FEProc for processing. For the purposes of this example, a sensor sending frames to FEProc for processing may be referred to as an "active sensor," a sensor that is not sending frames to FEProc but is still sending data to the statistics processing units may be referred to as a "semi-active sensor," and a sensor that is not acquiring data at all may be referred to as an "inactive sensor."
One benefit of the foregoing technique is that, because statistics continue to be acquired for the semi-active sensor (Sensor0), the next time the semi-active sensor transitions to an active state and the current active sensor (Sensor1) transitions to a semi-active or inactive state, the semi-active sensor may begin acquiring data within one frame, since color balance and exposure parameters may already be available due to the continued collection of image statistics. This technique may be referred to as "hot switching" of the image sensors, and avoids the drawbacks associated with "cold starts" of the image sensors (e.g., starting with no statistics information available). Further, to save power, since each source is asynchronous (as mentioned above), the semi-active sensor may operate at a reduced clock and/or frame rate during the semi-active period.
Before continuing with a more detailed description of the statistics processing and pixel processing operations depicted in the ISP front-end logic 80 of Fig. 8, it is believed that a brief introduction regarding the definitions of the various ISP frame regions will help facilitate a better understanding of the present subject matter. With this in mind, Fig. 19 illustrates various frame regions that may be defined within an image source frame. The format of a source frame provided to the image processing circuitry 32 may use the tiled or linear addressing modes discussed above, and may utilize pixel formats of 8, 10, 12, or 14-bit precision. The image source frame 274, as shown in Fig. 19, may include a sensor frame region 276, a raw frame region 278, and an active region 280. The sensor frame 276 is generally the maximum frame size that the image sensor 90 can provide to the image processing circuitry 32. The raw frame region 278 may be defined as the region of the sensor frame 276 that is sent to the ISP front-end processing logic 80. The active region 280 may be defined as a portion of the source frame 274, typically within the raw frame region 278, on which processing is performed for a particular image processing operation. In accordance with embodiments of the present technique, the active region 280 may be the same or may be different for different image processing operations.
In accordance with aspects of the present technique, the ISP front-end logic 80 only receives the raw frame 278. Thus, for the purposes of the present discussion, the global frame size for the ISP front-end processing logic 80 may be assumed to be the raw frame size, as determined by the width 282 and the height 284. In some embodiments, the offset from the boundaries of the sensor frame 276 to the raw frame 278 may be determined and/or maintained by the control logic 84. For instance, the control logic 84 may include firmware that determines the raw frame region 278 based upon input parameters, such as the x-offset 286 and the y-offset 288, that are specified relative to the sensor frame 276. Further, in some cases, a processing unit within the ISP front-end logic 80 or the ISP pipe logic 82 may have a defined active region, such that pixels within the raw frame but outside the active region 280 will not be processed, i.e., will be left unchanged. For instance, an active region 280 having a width 290 and a height 292 may be defined for a particular processing unit based upon an x-offset 294 and a y-offset 296 relative to the raw frame 278. Further, where an active region is not specifically defined, one embodiment of the image processing circuitry 32 may assume that the active region 280 is the same as the raw frame 278 (e.g., the x-offset 294 and the y-offset 296 both equal 0). Thus, for the purposes of image processing operations performed on the image data, boundary conditions may be defined with respect to the boundaries of the raw frame 278 or the active region 280.
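The nested frame regions and their offsets can be summarized with a small data structure, as in the Python sketch below. The defaulting of an unspecified active region to the full raw frame follows the text, while the class itself and the pixel dimensions are only illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Region:
    x_offset: int   # offset relative to the enclosing region
    y_offset: int
    width: int
    height: int

def resolve_active_region(raw: Region, active: Optional[Region]) -> Region:
    """If no active region is defined, assume it equals the raw frame."""
    if active is None:
        return Region(0, 0, raw.width, raw.height)
    return active

# Raw frame carved out of the sensor frame by the (x, y) offsets 286/288.
raw_frame = Region(x_offset=8, y_offset=8, width=2592, height=1936)
print(resolve_active_region(raw_frame, None))  # defaults to the full raw frame
print(resolve_active_region(raw_frame, Region(16, 16, 2560, 1904)))
```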
With this in mind and referring to Fig. 20, a more detailed view of the ISP front-end pixel processing logic 130 (previously discussed in Fig. 8) is illustrated, in accordance with an embodiment of the present technique. As shown, the ISP front-end pixel processing logic 130 includes a temporal filter 298 and a binning compensation filter 300. The temporal filter 298 may receive one of the input image signals Sif0, Sif1, or FEProcIn, or a pre-processed image signal (e.g., 159, 161), and may operate on the raw pixel data before any additional processing is performed. For example, the temporal filter 298 may initially process the image data to reduce noise by averaging image frames in the temporal direction. The binning compensation filter 300, discussed in more detail below, may apply scaling and re-sampling to binned raw image data from an image sensor (e.g., 90a, 90b) in order to maintain an even spatial distribution of the image pixels.
The temporal filter 298 may be pixel-adaptive based upon motion and brightness characteristics. For instance, when pixel motion is high, the filtering strength may be reduced in order to avoid the appearance of "trailing" or "ghosting" artifacts in the resulting processed image, whereas when little or no motion is detected, the filtering strength may be increased. Additionally, the filtering strength may also be adjusted based upon brightness data (e.g., "luma"). For instance, as image brightness increases, filtering artifacts become more noticeable to the human eye. Thus, the filtering strength may be further reduced when a pixel has a high level of brightness.
In applying temporal filtering, the temporal filter 298 may receive reference pixel data (Rin) and motion history input data (Hin), which may be from a previous filtered or original frame. Using these parameters, the temporal filter 298 may provide motion history output data (Hout) and a filtered pixel output (Yout). The filtered pixel output Yout is then passed to the binning compensation filter 300, which may be configured to perform one or more scaling operations on the filtered pixel data output Yout to produce the output signal FEProcOut. The processed pixel data FEProcOut may then be forwarded to the ISP pipe processing logic 82, as discussed above.
Referring to Figure 21, a process diagram depicting a temporal filtering process 302 that may be performed by the temporal filter shown in Figure 20 is illustrated, in accordance with a first embodiment. The temporal filter 298 may include a 2-tap filter, wherein the filter coefficients are adjusted adaptively on a per-pixel basis based at least in part upon motion and brightness data. For instance, an input pixel x(t), with the variable "t" denoting a temporal value, may be compared to a reference pixel r(t-1) in a previous filtered or original frame, whereby a motion index lookup may be generated into a motion history table (M) 304 that may contain filter coefficients. Additionally, based upon the motion history input data h(t-1), a motion history output h(t) corresponding to the current input pixel x(t) may be determined.
The motion history output h(t) and the filter coefficient K may be determined based upon a motion delta d(j,i,t), wherein (j,i) represents the coordinates of the spatial location of the current pixel x(j,i,t). The motion delta d(j,i,t) may be computed by determining the maximum of three absolute deltas between original and reference pixels for three horizontally collocated pixels of the same color. For instance, referring briefly to Figure 22, the spatial locations of three collocated reference pixels 308, 309 and 310 that correspond to the original input pixels 312, 313 and 314 are illustrated. In one embodiment, the motion delta may be calculated based upon these original and reference pixels using the formula below:
d(j,i,t) = max3[abs(x(j,i-2,t) - r(j,i-2,t-1)),
                abs(x(j,i,t) - r(j,i,t-1)),
                abs(x(j,i+2,t) - r(j,i+2,t-1))]   (1a)
A flow chart depicting this technique for determining the motion delta value is illustrated and described further below in Figure 24. Further, it should be understood that the technique for calculating the motion delta value, as shown above in Equation 1a (and below in Figure 24), is only intended to provide one embodiment for determining a motion delta value.
In other embodiments, an array of same-colored pixels may be evaluated to determine a motion delta value. For instance, in addition to the three pixels referenced in Equation 1a, one embodiment for determining motion delta values may also include the absolute deltas between same-colored pixels located two rows above (e.g., j-2; assuming a Bayer pattern) the reference pixels 312, 313 and 314 and their corresponding collocated pixels, and the absolute deltas between same-colored pixels located two rows below (e.g., j+2; assuming a Bayer pattern) the reference pixels 312, 313 and 314 and their corresponding collocated pixels. For instance, in one embodiment, the motion delta value may be expressed as follows:
d(j,i,t) = max9[abs(x(j,i-2,t) - r(j,i-2,t-1)),
                abs(x(j,i,t) - r(j,i,t-1)),
                abs(x(j,i+2,t) - r(j,i+2,t-1)),
                abs(x(j-2,i-2,t) - r(j-2,i-2,t-1)),
                abs(x(j-2,i,t) - r(j-2,i,t-1)),
                abs(x(j-2,i+2,t) - r(j-2,i+2,t-1)),
                abs(x(j+2,i-2,t) - r(j+2,i-2,t-1)),
                abs(x(j+2,i,t) - r(j+2,i,t-1)),
                abs(x(j+2,i+2,t) - r(j+2,i+2,t-1))]   (1b)
Thus, in the embodiment depicted by Equation 1b, the motion delta value may be determined by comparing the absolute deltas between a 3x3 array of same-colored pixels, with the current pixel (313) located at the center of the 3x3 array (which, for a Bayer color pattern, is actually a 5x5 array if pixels of different colors are counted). It should be appreciated that any suitable two-dimensional array of same-colored pixels, with the current pixel (e.g., 313) located at the center of the array (including, for example, arrays having all pixels in the same row (e.g., Equation 1a) or arrays having all pixels in the same column), may be analyzed to determine a motion delta value. Further, while the motion delta value may be determined as the maximum of the absolute deltas (e.g., as shown in Equations 1a and 1b), in other embodiments the motion delta value may also be selected as the mean or median of the absolute deltas. Additionally, the foregoing techniques may also be applied to other types of color filter arrays (e.g., RGBW, CYGM, etc.), and are not intended to be exclusive to Bayer patterns.
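As a concrete illustration of Equations 1a and 1b, the following is a minimal sketch of the motion delta computation for Bayer data; it assumes the indices stay at least two pixels away from the frame edge and is not intended as the hardware implementation.

```python
# Sketch of the motion delta of Equations 1a/1b for Bayer data, where
# same-colored pixels sit 2 columns (and 2 rows) apart. x is the current
# frame, r the previous filtered or original frame, both lists of rows.

def motion_delta(x, r, j, i, use_3x3=False):
    cols = (i - 2, i, i + 2)
    rows = (j - 2, j, j + 2) if use_3x3 else (j,)   # eq. 1b vs. eq. 1a
    deltas = [abs(x[jj][ii] - r[jj][ii]) for jj in rows for ii in cols]
    return max(deltas)   # other embodiments may take the mean or median
```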
Referring back to Figure 21, once the motion delta value is determined, a motion index lookup that may be used to select the filter coefficient K from the motion table (M) 304 may be calculated by adding the motion delta d(t) for the current pixel (e.g., at spatial location (j,i)) to the motion history input h(t-1). For instance, the filter coefficient K may be determined by the following equation:
K = M[d(j,i,t) + h(j,i,t-1)]   (2a)
Additionally, the motion history output h(t) may be determined using the following formula:
h(j,i,t) = d(j,i,t) + (1 - K) × h(j,i,t-1)   (3a)
Next, the brightness of the current input pixel x(t) may be used to generate a luma index lookup into a luma table (L) 306. In one embodiment, the luma table may contain attenuation factors that may be between 0 and 1, and that may be selected based upon the luma index. A second filter coefficient K' may be calculated by multiplying the first filter coefficient K by the luma attenuation factor, as shown in the following equation:
K′ = K × L[x(j,i,t)]   (4a)
The determined value for K' may then be used as the filtering coefficient for the temporal filter 298. As discussed above, the temporal filter 298 may be a 2-tap filter. Additionally, the temporal filter 298 may be configured as an infinite impulse response (IIR) filter using the previous filtered frame, or as a finite impulse response (FIR) filter using the previous original frame. The temporal filter 298 may compute the filtered output pixel y(t) (Yout) using the current input pixel x(t), the reference pixel r(t-1) and the filter coefficient K' with the following formula:
y(j,i,t) = r(j,i,t-1) + K′(x(j,i,t) - r(j,i,t-1))   (5a)
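Taken together, Equations 2a through 5a describe one filtered output pixel. A minimal per-pixel sketch follows; the clamping of the lookup indices and the direct use of the pixel value as the luma index are assumptions made for illustration.

```python
# Per-pixel sketch of Equations 2a-5a. M is the motion table, L the luma
# table; index clamping via table_max is an assumption for illustration.

def temporal_filter_pixel(x_cur, r_prev, d, h_prev, M, L, table_max):
    """x_cur = x(t), r_prev = r(t-1), d from eq. 1a, h_prev = h(t-1)."""
    K = M[min(d + h_prev, table_max)]          # eq. 2a: first coefficient
    h_out = d + (1 - K) * h_prev               # eq. 3a: motion history output
    K2 = K * L[min(x_cur, table_max)]          # eq. 4a: luma attenuation
    y = r_prev + K2 * (x_cur - r_prev)         # eq. 5a: filtered output
    return y, h_out
```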
As discussed above, the temporal filtering process 302 shown in Figure 21 may be performed on a pixel-by-pixel basis. In one embodiment, the same motion table M and luma table L may be used for all color components (e.g., R, G and B). Additionally, some embodiments may provide a bypass mechanism, in which temporal filtering may be bypassed, for instance in response to a control signal from the control logic 84. Further, as discussed below with respect to Figures 26 and 27, one embodiment of the temporal filter 298 may utilize separate motion and luma tables for each color component of the image data.
The embodiment of the temporal filtering technique described with reference to Figures 21 and 22 may be better understood in view of Figure 23, which depicts a flow chart illustrating a method 315 in accordance with the above-described embodiment. The method 315 begins at step 316, at which a current pixel x(t), located at spatial location (j,i) of the current frame of the image data, is received by the temporal filtering system 302. At step 317, a motion delta value d(t) is determined for the current pixel x(t), based at least in part upon one or more collocated reference pixels (e.g., r(t-1)) from a previous frame of the image data (e.g., the image frame immediately preceding the current frame). A technique for determining the motion delta value d(t) at step 317 is further explained below with reference to Figure 24, and may be performed in accordance with Equation 1a, as shown above.
Once the motion delta value d(t) from step 317 is obtained, a motion table lookup index may be determined using the motion delta value d(t) and the motion history input value h(t-1) corresponding to the spatial location (j,i) from the previous frame, as shown at step 318. Additionally, though not shown, once the motion delta value d(t) is known, a motion history value h(t) corresponding to the current pixel x(t) may also be determined at step 318, for example, by using Equation 3a above. Thereafter, at step 319, a first filter coefficient K may be selected from the motion table 304 using the motion table lookup index from step 318. The determination of the motion table lookup index and the selection of the first filter coefficient K from the motion table may be performed in accordance with Equation 2a, as shown above.
Next, at step 320, an attenuation factor may be selected from the luma table 306. For instance, the luma table 306 may contain attenuation factors ranging from approximately 0 to 1, and the attenuation factor may be selected from the luma table 306 using the value of the current pixel x(t) as a lookup index. Once the attenuation factor is selected, a second filter coefficient K' may be determined at step 321 using the selected attenuation factor and the first filter coefficient K (from step 319), as shown in Equation 4a above. Then, at step 322, a temporally filtered output value y(t) corresponding to the current input pixel x(t) may be determined based upon the second filter coefficient K' (from step 321), the value of the collocated reference pixel r(t-1), and the value of the input pixel x(t). For instance, in one embodiment, the output value y(t) may be determined in accordance with Equation 5a, as shown above.
Referring to Figure 24, the step 317 for determining the motion delta value d(t) from the method 315 is illustrated in more detail, in accordance with one embodiment. In particular, the determination of the motion delta value d(t) generally corresponds to the operation described above in accordance with Equation 1a. As shown, step 317 may include the sub-steps 324-327. Beginning at sub-step 324, a set of three horizontally adjacent pixels having the same color value as the current input pixel x(t) is identified. By way of example, in accordance with the embodiment shown in Figure 22, the image data may include Bayer image data, and the three horizontally adjacent pixels may include the current input pixel x(t) (313), a second pixel 312 of the same color to the left of the current input pixel 313, and a third pixel 314 of the same color to the right of the current input pixel 313.
Next, at sub-step 325, three collocated reference pixels 308, 309 and 310 from the previous frame, corresponding to the selected set of three horizontally adjacent pixels 312, 313 and 314, are identified. Using the selected pixels 312, 313 and 314 and the three collocated reference pixels 308, 309 and 310, the absolute values of the differences between each of the three selected pixels 312, 313 and 314 and its corresponding collocated reference pixel 308, 309 and 310, respectively, are determined at sub-step 326. Subsequently, at sub-step 327, the maximum of the three differences from sub-step 326 is selected as the motion delta value d(t) for the current input pixel x(t). As discussed above, Figure 24, which illustrates the motion delta value calculation technique shown in Equation 1a, is only intended to provide one embodiment. Indeed, as discussed above, any suitable two-dimensional array of same-colored pixels with the current pixel centered in the array may be used to determine a motion delta value (e.g., Equation 1b).
A further embodiment of a technique for applying temporal filtering to image data is depicted in Figure 25. For instance, because signal-to-noise ratios may differ for the different color components of the image data, a gain may be applied to the current pixel, such that the current pixel is gained before the selection of motion and luma values from the motion table 304 and luma table 306. By applying a respective gain that is color-dependent, the signal-to-noise ratio may be more consistent among the different color components. By way of example only, in an implementation that uses raw Bayer image data, the red and blue color channels may generally be more sensitive compared to the green (Gr and Gb) color channels. Thus, by applying an appropriate color-dependent gain to each processed pixel, the signal-to-noise variation between each color component may generally be reduced, thereby reducing, among other things, ghosting artifacts, as well as improving consistency across different colors after auto-white balance gains.
With this in mind, Figure 25 provides a flow chart depicting a method 328 for applying temporal filtering to image data received by the front-end processing unit 130, in accordance with such an embodiment. Beginning at step 329, a current pixel x(t), located at spatial location (j,i) of the current frame of the image data, is received by the temporal filtering system 302. At step 330, a motion delta value d(t) is determined for the current pixel x(t), based at least in part upon one or more collocated reference pixels (e.g., r(t-1)) from a previous frame of the image data (e.g., the image frame immediately preceding the current frame). Step 330 is similar to step 317 of Figure 23, and may utilize the computation represented in Equation 1a above.
Next, at step 331, a motion table lookup index may be determined using the motion delta value d(t), the motion history input value h(t-1) corresponding to the spatial location (j,i) from the previous frame (e.g., corresponding to the collocated reference pixel r(t-1)), and a gain associated with the color of the current pixel. Thereafter, at step 332, a first filter coefficient K may be selected from the motion table 304 using the motion table lookup index determined at step 331. By way of example only, in one embodiment, the filter coefficient K and the motion table lookup index may be determined as follows:
K = M[gain[c] × (d(j,i,t) + h(j,i,t-1))]   (2b)
wherein M represents the motion table, and wherein gain[c] corresponds to a gain associated with the color of the current pixel. Additionally, though not shown in Figure 25, it should be understood that a motion history output value h(t) for the current pixel may also be determined, and may be used to apply temporal filtering to a collocated pixel of a subsequent image frame (e.g., the next frame). In the present embodiment, the motion history output h(t) for the current pixel x(t) may be determined using the following formula:
h(j,i,t) = d(j,i,t) + K[h(j,i,t-1) - d(j,i,t)]   (3b)
Next, at step 333, an attenuation factor may be selected from the luma table 306 using a luma table lookup index determined based upon the gain (gain[c]) associated with the color of the current pixel x(t). As discussed above, the attenuation factors stored in the luma table may have a range of approximately 0 to 1. Thereafter, at step 334, a second filter coefficient K' may be calculated based upon the attenuation factor (from step 333) and the first filter coefficient K (from step 332). By way of example only, in one embodiment, the second filter coefficient K' and the luma table lookup index may be determined as follows:
K′ = K × L[gain[c] × x(j,i,t)]   (4b)
Next, at step 335, a temporally filtered output value y(t) corresponding to the current input pixel x(t) is determined based upon the second filter coefficient K' (from step 334), the value of the collocated reference pixel r(t-1), and the value of the input pixel x(t). For instance, in one embodiment, the output value y(t) may be determined as follows:
y(j,i,t) = x(j,i,t) + K′(r(j,i,t-1) - x(j,i,t))   (5b)
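For comparison with the first embodiment, the following sketch folds the color-dependent gain of Equations 2b-5b into the same per-pixel computation; the integer truncation of the gained lookup indices and the clamping are assumptions made for illustration.

```python
# Per-pixel sketch of Equations 2b-5b with a color-dependent gain. c is the
# color of the current pixel ('R', 'Gr', 'Gb' or 'B'); truncation/clamping of
# the table indices are assumptions.

def temporal_filter_pixel_gained(x_cur, r_prev, d, h_prev, c, gain, M, L, table_max):
    K = M[min(int(gain[c] * (d + h_prev)), table_max)]    # eq. 2b
    h_out = d + K * (h_prev - d)                          # eq. 3b
    K2 = K * L[min(int(gain[c] * x_cur), table_max)]      # eq. 4b
    y = x_cur + K2 * (r_prev - x_cur)                     # eq. 5b
    return y, h_out
```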
Continuing to Figure 26, a further embodiment of the temporal filtering process 336 is depicted. Here, the temporal filtering process 336 may be accomplished in a manner similar to the embodiment discussed in Figure 25, except that, instead of applying a color-dependent gain (e.g., gain[c]) to each input pixel and using shared motion and luma tables, separate motion and luma tables are provided for each color component. For instance, as shown in Figure 26, the motion tables 304 may include a motion table 304a corresponding to a first color, a motion table 304b corresponding to a second color, and a motion table 304c corresponding to an n-th color, wherein n depends upon the number of colors present in the raw image data. Similarly, the luma tables 306 may include a luma table 306a corresponding to the first color, a luma table 306b corresponding to the second color, and a luma table 306c corresponding to the n-th color. Thus, in an embodiment in which the raw image data is Bayer image data, three motion and luma tables may be provided for each of the red, blue and green color components. As discussed above, the selection of the filtering coefficient K and the attenuation factor may depend upon the motion and luma table selected for the current color (e.g., the color of the current input pixel).
Figure 27 shows a method 338 illustrating a further embodiment for temporal filtering using color-dependent motion and luma tables. As will be appreciated, the various calculations and formulas that may be employed by the method 338 may be similar to the embodiment shown in Figure 23, but with a particular motion and luma table selected for each color, or similar to the embodiment shown in Figure 25, but replacing the use of the color-dependent gain[c] with the selection of color-dependent motion and luma tables.
Beginning at step 339, a current pixel x(t), located at spatial location (j,i) of the current frame of the image data, is received by the temporal filtering system 336 (Figure 26). At step 340, a motion delta value d(t) is determined for the current pixel x(t), based at least in part upon one or more collocated reference pixels (e.g., r(t-1)) from a previous frame of the image data (e.g., the image frame immediately preceding the current frame). Step 340 is similar to step 317 of Figure 23, and may utilize the computation shown in Equation 1a above.
Next, at step 341, a motion table lookup index may be determined using the motion delta value d(t) and the motion history input value h(t-1) corresponding to the spatial location (j,i) from the previous frame (e.g., corresponding to the collocated reference pixel r(t-1)). Thereafter, at step 342, a first filter coefficient K may be selected from one of the available motion tables (e.g., 304a, 304b, 304c) based upon the color of the current input pixel. For instance, once the appropriate motion table is identified, the first filter coefficient K may be selected using the motion table lookup index determined at step 341.
After selecting the first filter coefficient K, a luma table corresponding to the current color is selected, and an attenuation factor is selected from the selected luma table based upon the value of the current pixel x(t), as shown at step 343. Thereafter, at step 344, a second filter coefficient K' is determined based upon the attenuation factor (from step 343) and the first filter coefficient K (from step 342). Then, at step 345, a temporally filtered output value y(t) corresponding to the current input pixel x(t) is determined based upon the second filter coefficient K' (from step 344), the value of the collocated reference pixel r(t-1), and the value of the input pixel x(t). While the technique shown in Figure 27 may be more costly to implement (e.g., due to the memory needed for storing additional motion and luma tables), it may, in some instances, offer further improvements with regard to ghosting artifacts and consistency across different colors after auto-white balance gains.
In accordance with further embodiments, the temporal filtering process provided by the temporal filter 298 may utilize a combination of color-dependent gains and color-specific motion and/or luma tables for applying temporal filtering to the input pixels. For instance, in one such embodiment, a single motion table may be provided for all color components, and the motion table lookup index for selecting the first filtering coefficient (K) from the motion table may be determined based upon a color-dependent gain (e.g., as shown in Figure 25, steps 331-332), while the luma table lookup index may not have a color-dependent gain applied thereto, but may instead be used to select the luma attenuation factor from one of multiple luma tables depending upon the color of the current input pixel (e.g., as shown in Figure 27, step 343). Alternatively, in another embodiment, multiple motion tables may be provided, and a motion table lookup index (without a color-dependent gain applied) may be used to select the first filtering coefficient (K) from a motion table corresponding to the color of the current input pixel (e.g., as shown in Figure 27, step 342), while a single luma table may be provided for all color components, wherein the luma table lookup index for selecting the luma attenuation factor may be determined based upon a color-dependent gain (e.g., as shown in Figure 25, steps 333-334). Further, in one embodiment in which a Bayer color filter array is utilized, one motion table and/or luma table may be provided for each of the red (R) and blue (B) color components, while a common motion table and/or luma table may be provided for both green color components (Gr and Gb).
The output of the temporal filter 298 may subsequently be sent to the binning compensation filter (BCF) 300, which may be configured to process the image pixels to compensate for non-linear placement (e.g., uneven spatial distribution) of the color samples due to binning by the image sensor(s) 90a or 90b, such that subsequent image processing operations in the ISP pipe logic 82 that depend upon the linear placement of the color samples (e.g., demosaicing, etc.) can operate correctly. For instance, referring now to Figure 28, a full resolution sample 346 of Bayer image data is depicted. This may represent full resolution sample raw image data captured by the image sensor 90a (or 90b) coupled to the ISP front-end processing logic 80.
As will be appreciated, under certain image capture conditions, it may not be practical to send the full resolution image data captured by the image sensor 90a to the ISP circuitry 32 for processing. For instance, when capturing video data, a frame rate of at least approximately 30 frames per second may be desired in order to preserve the appearance of fluid motion to the human eye. However, if the amount of pixel data contained in each frame of a full resolution sample, when sampled at 30 frames per second, exceeds the processing capabilities of the ISP circuitry 32, then binning compensation filtering may be applied in conjunction with binning by the image sensor 90a to reduce the resolution of the image signal while also improving signal-to-noise ratio. For instance, as discussed above, various binning techniques, such as 2×2 binning, may be applied to produce a "binned" raw image pixel by averaging the values of surrounding pixels in the active region 280 of the raw frame 278.
Referring to Figure 29, an embodiment of the image sensor 90a that may be configured to bin the full resolution image data 346 of Figure 28 to produce corresponding binned raw image data 358, shown in Figure 30, is illustrated in accordance with one embodiment. As shown, the image sensor 90a may capture the full resolution raw image data 346. Binning logic 357 may be configured to apply binning to the full resolution raw image data 346 to produce the binned raw image data 358, which may be provided to the ISP front-end processing logic 80 using the sensor interface 94a which, as discussed above, may be an SMIA interface or any other suitable parallel or serial camera interface.
As illustrated in Figure 30, the binning logic 357 may apply 2×2 binning to the full resolution raw image data 346. For instance, with regard to the binned image data 358, the pixels 350, 352, 354 and 356 may form a Bayer pattern, and may be determined by averaging the values of pixels from the full resolution raw image data 346. For instance, referring to both Figures 28 and 30, the binned Gr pixel 350 may be determined as the average or mean of the full resolution Gr pixels 350a-350d. Similarly, the binned R pixel 352 may be determined as the average of the full resolution R pixels 352a-352d, the binned B pixel 354 may be determined as the average of the full resolution B pixels 354a-354d, and the binned Gb pixel 356 may be determined as the average of the full resolution Gb pixels 356a-356d. Thus, in the present embodiment, 2×2 binning may provide a set of four full resolution pixels, including an upper-left (e.g., 350a), an upper-right (e.g., 350b), a lower-left (e.g., 350c) and a lower-right (e.g., 350d) pixel, which are averaged to derive a binned pixel located at the center of the square formed by the set of four full resolution pixels. Accordingly, the binned Bayer block 348 shown in Figure 30 contains four "superpixels" that represent the 16 pixels contained in the Bayer blocks 348a-348d of Figure 28.
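The 2×2 averaging itself is straightforward; the following is a rough sketch (assuming frame dimensions divisible by four and integer averaging) in which each binned output pixel gathers the four same-colored pixels of its quad:

```python
# Sketch of 2x2 Bayer binning: the binned pixel at output (jo, io) averages
# the four same-colored full-resolution pixels of its quad, which sit two
# rows/columns apart. Assumes dimensions divisible by 4, integer averaging.

def bin_2x2_bayer(raw):
    h, w = len(raw), len(raw[0])
    out = [[0] * (w // 2) for _ in range(h // 2)]
    for jo in range(h // 2):
        for io in range(w // 2):
            j0 = 4 * (jo // 2) + (jo % 2)    # quad rows: j0 and j0 + 2
            i0 = 4 * (io // 2) + (io % 2)    # quad columns: i0 and i0 + 2
            out[jo][io] = (raw[j0][i0] + raw[j0][i0 + 2] +
                           raw[j0 + 2][i0] + raw[j0 + 2][i0 + 2]) // 4
    return out
```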
In addition to reducing spatial resolution, binning also offers the added advantage of reducing noise in the image signal. For instance, whenever an image sensor (e.g., 90a) is exposed to a light signal, there may be a certain amount of noise, such as photon noise, associated with the image. This noise may be random or systematic, and may come from multiple sources. Thus, the amount of information contained in an image captured by the image sensor may be expressed in terms of a signal-to-noise ratio. For instance, every time an image is captured by the image sensor 90a and transferred to a processing circuit, such as the ISP circuitry 32, there may be some degree of noise in the pixel values, because the process of reading and transferring the image data inherently introduces "read noise" into the image signal. This "read noise" may be random and is generally unavoidable. By using the average of four pixels, noise (e.g., photon noise) may generally be reduced irrespective of the source of the noise.
Thus, when considering the full resolution image data 346 of Figure 28, each Bayer pattern (2×2 block) 348a-348d contains four pixels, each of which contains a signal and a noise component. If each pixel in, for example, the Bayer block 348a is read separately, then four signal components and four noise components are present. However, by applying binning, as shown in Figures 28 and 30, such that four pixels (e.g., 350a, 350b, 350c, 350d) may be represented by a single pixel (e.g., 350) in the binned image data, the same area occupied by the four pixels in the full resolution image data 346 may be read as a single pixel having only one instance of a noise component, thus improving the signal-to-noise ratio.
Further, while the present embodiment depicts the binning logic 357 of Figure 29 as being configured to apply a 2×2 binning process, it should be appreciated that the binning logic 357 may be configured to apply any suitable type of binning process, such as 3×3 binning, vertical binning, horizontal binning, and so forth. In some embodiments, the image sensor 90a may be configured to select between different binning modes during the image capture process. Additionally, in further embodiments, the image sensor 90a may also be configured to apply a technique that may be referred to as "skipping", wherein, instead of averaging pixel samples, the logic 357 selects only certain pixels from the full resolution data 346 (e.g., every other pixel, every third pixel, etc.) to output to the ISP front-end 80 for processing. Further, while only the image sensor 90a is shown in Figure 29, it should be appreciated that the image sensor 90b may be implemented in a similar manner.
As also depicted in Figure 30, one effect of the binning process is that the spatial sampling of the binned pixels may not be equally spaced. In some systems, this spatial distortion results in aliasing (e.g., jagged edges), which is generally not desirable. Further, because certain image processing steps in the ISP pipe logic 82 may depend upon the linear placement of the color samples in order to operate correctly, the binning compensation filter (BCF) 300 may be applied to perform re-sampling and re-positioning of the binned pixels such that the binned pixels are spatially evenly distributed. That is, the BCF 300 essentially compensates for the uneven spatial distribution (e.g., as shown in Figure 30) by re-sampling the positions of the samples (e.g., pixels). For instance, Figure 31 illustrates a re-sampled portion of the binned image data 360 after being processed by the BCF 300, wherein a Bayer block 361 containing the evenly distributed re-sampled pixels 362, 363, 364 and 365 corresponds to the binned pixels 350, 352, 354 and 356, respectively, of the binned image data 358 of Figure 30. Additionally, in an embodiment that utilizes skipping (e.g., instead of binning), as discussed above, the spatial distortion shown in Figure 30 may not be present. In this case, the BCF 300 may function as a low pass filter to reduce artifacts (e.g., aliasing) that may result when skipping is employed by the image sensor 90a.
Figure 32 shows a block diagram of the binning compensation filter 300 in accordance with one embodiment. The BCF 300 may include binning compensation logic 366 that may process the binned pixels 358 to apply horizontal and vertical scaling using horizontal scaling logic 368 and vertical scaling logic 370, respectively, in order to re-sample and re-position the binned pixels 358 so that they are arranged in a spatially even distribution, as shown in Figure 31. In one embodiment, the scaling operation(s) performed by the BCF 300 may be performed using horizontal and vertical multi-tap polyphase filtering. For instance, the filtering process may include selecting the appropriate pixels from the input source image data (e.g., the binned image data 358 provided by the image sensor 90a), multiplying each of the selected pixels by a filtering coefficient, and summing up the resulting values to form an output pixel at a desired destination.
The selection of the pixels used in the scaling operations, which may include a center pixel of the same color and surrounding neighbor pixels, may be determined using separate differential analyzers 372, one for vertical scaling and one for horizontal scaling. In the depicted embodiment, the differential analyzers 372 may be digital differential analyzers (DDAs), and may be configured to control the current output pixel position during the scaling operations in the vertical and horizontal directions. In the present embodiment, a first DDA (referred to as 372a) is used for all color components during horizontal scaling, and a second DDA (referred to as 372b) is used for all color components during vertical scaling. By way of example only, the DDA 372 may be provided as a 32-bit data register that contains a two's-complement fixed-point number having 16 bits in the integer portion and 16 bits in the fraction. The 16-bit integer portion may be used to determine the current position for an output pixel. The fractional portion of the DDA 372 may be used to determine a current index or phase, which may be based upon the between-pixel fractional position of the current DDA position (e.g., corresponding to the spatial location of the output pixel). The index or phase may be used to select an appropriate set of coefficients from a set of filter coefficient tables 374. Additionally, the filtering may be done per color component using same-colored pixels. Thus, the filtering coefficients may be selected based not only on the phase of the current DDA position, but also on the color of the current pixel. In one embodiment, 8 phases may exist between each input pixel and, thus, the vertical and horizontal scaling components may utilize 8-deep coefficient tables, such that the high-order 3 bits of the 16-bit fraction portion are used to express the current phase or index. Thus, it should be understood that the term "raw image" data or the like, as used herein, refers to multi-color image data that is acquired by a single sensor overlaid with a color filter array pattern (e.g., Bayer), which provides multiple color components in one plane. In another embodiment, separate DDAs may be used for each color component. For instance, in such embodiments, the BCF 300 may extract the R, B, Gr and Gb components from the raw image data, and process each component as a separate plane.
In operation, horizontal and vertical scaling may include initializing the DDA 372 and performing the multi-tap polyphase filtering using the integer and fractional portions of the DDA 372. While performed separately and with separate DDAs, the horizontal and vertical scaling operations are carried out in a similar manner. A step value or step size (DDAStepX for horizontal scaling and DDAStepY for vertical scaling) determines how much the DDA value (currDDA) is incremented after each output pixel is determined, and the multi-tap polyphase filtering is repeated using the next currDDA value. For instance, if the step value is less than 1, then the image is up-scaled, and if the step value is greater than 1, the image is downscaled. If the step value is equal to 1, then no scaling occurs. Further, it should be noted that same or different step sizes may be used for horizontal and vertical scaling.
Output pixels are generated by the BCF 300 in the same order as the input pixels (e.g., using the Bayer pattern). In the present embodiment, the input pixels may be classified as being even or odd based upon their ordering. For instance, referring to Figure 33, a graphical depiction of input pixel locations (row 375) and corresponding output pixel locations based upon various DDAStep values (rows 376-380) is illustrated. In this example, the depicted row represents a row of red (R) and green (Gr) pixels in the raw Bayer image data. For horizontal filtering purposes, the red pixel at position 0.0 in the row 375 may be considered an even pixel, the green pixel at position 1.0 in the row 375 may be considered an odd pixel, and so forth. For the output pixel locations, even and odd pixels may be determined based upon the least significant bit in the fraction portion (lower 16 bits) of the DDA 372. For instance, assuming a DDAStep of 1.25, as shown in row 377, the least significant bit corresponds to bit 14 of the DDA, as this bit gives a resolution of 0.25. Thus, the red output pixel at the DDA position (currDDA) 0.0 may be considered an even pixel (the least significant bit, bit 14, is 0), the green output pixel at currDDA 1.0 (bit 14 is 1) may be considered an odd pixel, and so forth. Further, while Figure 33 is discussed with respect to filtering in the horizontal direction (using DDAStepX), it should be understood that the determination of even and odd input and output pixels may be applied in the same manner with respect to vertical filtering (using DDAStepY). In other embodiments, the DDA 372 may also be used to track locations of the input pixels (e.g., rather than tracking the desired output pixel locations). Further, it should be appreciated that DDAStepX and DDAStepY may be set to the same or different values. Further, assuming a Bayer pattern is used, it should be noted that the starting pixel used by the BCF 300 could be any one of a Gr, Gb, R or B pixel depending, for instance, on which pixel is located at a corner of the active region 280.
With this in mind, the even/odd input pixels are used to generate the even/odd output pixels, respectively. Given an output pixel location alternating between even and odd positions, a center source input pixel location (referred to herein as "currPixel") for filtering purposes is determined by rounding the DDA to the closest even or odd input pixel location for even or odd output pixel locations (based on DDAStepX), respectively. In an embodiment in which the DDA 372a is configured to use 16 bits to represent the integer and 16 bits to represent the fraction, currPixel may be determined for even and odd currDDA positions using Equations 6a and 6b below:
Even output pixel locations may be determined based on bits [31:16] of the value:
(currDDA + 1.0) & 0xFFFE.0000   (6a)
Odd output pixel locations may be determined based on bits [31:16] of the value:
(currDDA) | 0x0001.0000   (6b)
Essentially, the above equations present a rounding operation, whereby the even and odd output pixel positions, as determined by currDDA, are rounded to the nearest even and odd input pixel positions, respectively, for the selection of currPixel.
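In 16.16 fixed-point form, the two rounding equations reduce to an AND mask and an OR. The sketch below (treating currDDA as a raw 32-bit integer, with the even/odd decision supplied by the caller) illustrates Equations 6a and 6b:

```python
# Sketch of Equations 6a/6b: currDDA is a 32-bit 16.16 fixed-point value and
# currPixel is read from bits [31:16] after rounding to the nearest even or
# odd input pixel location.

def curr_pixel(currDDA, is_even_output):
    ONE = 1 << 16                             # 1.0 in 16.16 fixed point
    if is_even_output:
        v = (currDDA + ONE) & 0xFFFE0000      # eq. 6a (mask 0xFFFE.0000)
    else:
        v = currDDA | 0x00010000              # eq. 6b (OR with 0x0001.0000)
    return (v >> 16) & 0xFFFF                 # bits [31:16]
```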
Additionally, a current index or phase (currIndex) may also be determined at each currDDA position. As discussed above, the index or phase values represent the fractional between-pixel position of the output pixel position relative to the input pixel positions. For instance, in one embodiment, 8 phases may be defined between each input pixel position. For instance, referring again to Figure 33, 8 index values 0-7 are provided between the first red input pixel at position 0.0 and the next red input pixel at position 2.0. Similarly, 8 index values 0-7 are provided between the first green input pixel at position 1.0 and the next green input pixel at position 3.0. In one embodiment, the currIndex values may be determined in accordance with Equations 7a and 7b below for even and odd output pixel locations, respectively:
Even output pixel locations may be determined based on bits [16:14] of the value:
(currDDA + 0.125)   (7a)
Odd output pixel locations may be determined based on bits [16:14] of the value:
(currDDA + 1.125)   (7b)
For the odd positions, the additional 1 pixel shift is equivalent to adding an offset of four to the coefficient index for odd output pixel locations, to account for the index offset between different color components with respect to the DDA 372.
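Likewise, Equations 7a and 7b amount to adding 0.125 or 1.125 in fixed point and reading bits [16:14]; a minimal sketch:

```python
# Sketch of Equations 7a/7b: the 3-bit phase index is bits [16:14] of
# (currDDA + 0.125) for even outputs, or (currDDA + 1.125) for odd outputs.

def curr_index(currDDA, is_even_output):
    EIGHTH = 1 << 13                                            # 0.125 in 16.16
    offset = EIGHTH if is_even_output else (1 << 16) + EIGHTH   # else 1.125
    return ((currDDA + offset) >> 14) & 0x7                     # bits [16:14]
```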
Once currPixel and currIndex have been determined at a particular currDDA location, the filtering process may select one or more neighboring same-colored pixels based on currPixel (the selected center input pixel). By way of example, in an embodiment in which the horizontal scaling logic 368 includes a 5-tap polyphase filter and the vertical scaling logic 370 includes a 3-tap polyphase filter, two same-colored pixels on each side of currPixel in the horizontal direction may be selected for horizontal filtering (e.g., -2, -1, 0, +1, +2), and one same-colored pixel on each side of currPixel in the vertical direction may be selected for vertical filtering (e.g., -1, 0, +1). Further, currIndex may be used as a selection index to select the appropriate filtering coefficients from the filter coefficients table 374 to apply to the selected pixels. For instance, using the 5-tap horizontal/3-tap vertical filtering embodiment, five 8-deep tables may be provided for horizontal filtering, and three 8-deep tables may be provided for vertical filtering. Though illustrated as part of the BCF 300, it should be appreciated that, in certain embodiments, the filter coefficient tables 374 may be stored in a memory that is physically separate from the BCF 300, such as the memory 108.
Before discussing the horizontal and vertical scaling operations in further detail, Table 4 below shows examples of how currPixel and currIndex values may be determined based upon various DDA positions using different DDAStep values (e.g., as could apply to DDAStepX or DDAStepY).
Table 4: Binning Compensation Filter - DDA Examples for Calculating currPixel and currIndex
To provide an example, let us assume that a DDA step size (DDAStep) of 1.5 is selected (row 378 of Figure 33), with the current DDA position (currDDA) beginning at 0, which indicates an even output pixel position. To determine currPixel, Equation 6a may be applied, as shown below:
currDDA = 0.0 (even)
        0000000000000001.0000000000000000   (currDDA + 1.0)
(AND)   1111111111111110.0000000000000000   (0xFFFE.0000)
=       0000000000000000.0000000000000000
currPixel (determined from bits [31:16] of the result) = 0
Thus, at the currDDA position 0.0 (row 378), the source input center pixel for filtering corresponds to the red input pixel at position 0.0 of the row 375.
To determine currIndex at the even currDDA 0.0, Equation 7a may be applied, as shown below:
currDDA = 0.0 (even)
        0000000000000000.0000000000000000   (currDDA)
+       0000000000000000.0010000000000000   (0.125)
=       0000000000000000.0010000000000000
currIndex (determined from bits [16:14] of the result) = [000] = 0
Thus, at the currDDA position 0.0 (row 378), a currIndex value of 0 may be used to select filtering coefficients from the filter coefficients table 374.
Accordingly, filtering (which may be vertical or horizontal depending on whether DDAStep is in the X (horizontal) or the Y (vertical) direction) may be applied based upon the determined currPixel and currIndex values at currDDA 0.0, and the DDA 372 is then incremented by DDAStep (1.5), and the next currPixel and currIndex values are determined. For instance, at the next currDDA position 1.5 (an odd position), currPixel may be determined using Equation 6b as follows:
currDDA = 1.5 (odd)
        0000000000000001.1000000000000000   (currDDA)
(OR)    0000000000000001.0000000000000000   (0x0001.0000)
=       0000000000000001.1000000000000000
currPixel (determined from bits [31:16] of the result) = 1
Thus, at the currDDA position 1.5 (row 378), the source input center pixel for filtering corresponds to the green input pixel at position 1.0 of the row 375.
Further, currIndex at the odd currDDA 1.5 may be determined using Equation 7b, as shown below:
currDDA = 1.5 (odd)
        0000000000000001.1000000000000000   (currDDA)
+       0000000000000001.0010000000000000   (1.125)
=       0000000000000010.1010000000000000
currIndex (determined from bits [16:14] of the result) = [010] = 2
Thus, at the currDDA position 1.5 (row 378), a currIndex value of 2 may be used to select the appropriate filtering coefficients from the filter coefficients table 374. Filtering (which may be vertical or horizontal depending on whether DDAStep is in the X (horizontal) or the Y (vertical) direction) may thus be applied using these currPixel and currIndex values.
Next, the DDA 372 is incremented again by DDAStep (1.5), resulting in a currDDA value of 3.0. The currPixel corresponding to currDDA 3.0 may be determined using Equation 6a, as shown below:
currDDA = 3.0 (even)
        0000000000000100.0000000000000000   (currDDA + 1.0)
(AND)   1111111111111110.0000000000000000   (0xFFFE.0000)
=       0000000000000100.0000000000000000
currPixel (determined from bits [31:16] of the result) = 4
Thus, at the currDDA position 3.0 (row 378), the source input center pixel for filtering corresponds to the red input pixel at position 4.0 of the row 375.
Next, currIndex at the even currDDA 3.0 may be determined using Equation 7a, as shown below:
currDDA = 3.0 (even)
        0000000000000011.0000000000000000   (currDDA)
+       0000000000000000.0010000000000000   (0.125)
=       0000000000000011.0010000000000000
currIndex (determined from bits [16:14] of the result) = [100] = 4
Thus, at the currDDA position 3.0 (row 378), a currIndex value of 4 may be used to select the appropriate filtering coefficients from the filter coefficients table 374. As will be appreciated, the DDA 372 may continue to be incremented by DDAStep for each output pixel, and filtering (which may be vertical or horizontal depending on whether DDAStep is in the X (horizontal) or the Y (vertical) direction) may be applied using the currPixel and currIndex determined for each currDDA value.
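The walkthrough above can be reproduced with the curr_pixel and curr_index helpers sketched earlier; the even/odd test below assumes a DDA resolution of 0.5 (bit 15), which matches the 1.5 step of this example:

```python
# Stepping the DDA by 1.5 from 0.0 reproduces the walkthrough above.
step = 3 << 15                              # DDAStep = 1.5 in 16.16 fixed point
currDDA = 0
for _ in range(3):
    even = ((currDDA >> 15) & 1) == 0       # fraction lsb at 0.5 resolution
    print(curr_pixel(currDDA, even), curr_index(currDDA, even))
    currDDA += step
# prints: "0 0", "1 2", "4 4" for currDDA = 0.0, 1.5 and 3.0
```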
As discussed above, currIndex may be used as a selection index to select the appropriate filtering coefficients from the filter coefficients table 374 to apply to the selected pixels. The filtering process may include obtaining the source pixel values around the center pixel (currPixel), multiplying each of the selected pixels by the appropriate filtering coefficients selected from the filter coefficients table 374 based upon currIndex, and summing the results to obtain a value of the output pixel at the location corresponding to currDDA. Further, because the present embodiment utilizes 8 phases between same-colored pixels, using the 5-tap horizontal/3-tap vertical filtering embodiment, five 8-deep tables may be provided for horizontal filtering, and three 8-deep tables may be provided for vertical filtering. In one embodiment, each of the coefficient table entries may include a 16-bit two's-complement fixed-point number with 3 integer bits and 13 fraction bits.
Further, assuming a Bayer image pattern, in one embodiment, the vertical scaling component may include four separate 3-tap polyphase filters, one for each color component: Gr, R, B and Gb. Each of the 3-tap filters may use the DDA 372 to control the stepping of the current center pixel and the index for the coefficients, as described above. Similarly, the horizontal scaling component may include four separate 5-tap polyphase filters, one for each color component: Gr, R, B and Gb. Each of the 5-tap filters may use the DDA 372 to control the stepping (e.g., via DDAStep) of the current center pixel and the index for the coefficients. It should be understood, however, that fewer or more taps could be utilized by the horizontal and vertical scalers in other embodiments.
For boundary cases, the pixels used in the horizontal and vertical filtering processes may depend upon the relationship of the current DDA position (currDDA) relative to a frame border (e.g., the border defined by the active region 280 in Figure 19). For instance, in horizontal filtering, if the currDDA position, when compared to the position of the center input pixel (SrcX) and the width (SrcWidth) of the frame (e.g., the width 290 of the active region 280 of Figure 19), indicates that the DDA 372 is close to the border such that there are not enough pixels to perform the 5-tap filtering, then the same-colored input border pixels may be repeated. For instance, if the selected center input pixel is at the left edge of the frame, then the center pixel may be replicated twice for horizontal filtering. If the center input pixel is near the left edge of the frame such that only one pixel is available between the center input pixel and the left edge, then, for horizontal filtering purposes, the one available pixel is replicated in order to provide two pixel values to the left of the center input pixel. Further, the horizontal scaling logic 368 may be configured such that the number of input pixels (including original and replicated pixels) cannot exceed the input width. This may be expressed as follows:
StartX=(((DDAInitX+0x0001.0000)&0xFFFE.0000)>>16)
EndX=(((DDAInitX+DDAStepX*(BCFOutWidth-1))|0x0001.0000)>>16)
EndX-StartX<=SrcWidth-1
wherein DDAInitX represents the initial position of the DDA 372, DDAStepX represents the DDA step value in the horizontal direction, and BCFOutWidth represents the width of the frame output by the BCF 300.
For vertical filtering, if the currDDA position, when compared to the position of the center input pixel (SrcY) and the height (SrcHeight) of the frame (e.g., the height 292 of the active region 280 of Figure 19), indicates that the DDA 372 is close to the border such that there are not enough pixels to perform the 3-tap filtering, then the input border pixels may be repeated. Further, the vertical scaling logic 370 may be configured such that the number of input pixels (including original and replicated pixels) cannot exceed the input height. This may be expressed as follows:
StartY=(((DDAInitY+0x0001.0000)&0xFFFE.0000)>>16)
EndY=(((DDAInitY+DDAStepY*(BCFOutHeight-1))|0x0001.0000)>>16)
EndY-StartY<=SrcHeight-1
wherein DDAInitY represents the initial position of the DDA 372, DDAStepY represents the DDA step value in the vertical direction, and BCFOutHeight represents the height of the frame output by the BCF 300.
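These boundary constraints translate directly into fixed-point arithmetic. A hedged sketch of the horizontal check follows (the vertical check is identical with the Y-direction parameters):

```python
# Sketch of the horizontal boundary constraint: the first and last center
# input pixels implied by the DDA program must fit within the source width.
# All DDA quantities are 32-bit 16.16 fixed-point integers.

def hscale_fits(DDAInitX, DDAStepX, BCFOutWidth, SrcWidth):
    StartX = ((DDAInitX + 0x00010000) & 0xFFFE0000) >> 16
    EndX = ((DDAInitX + DDAStepX * (BCFOutWidth - 1)) | 0x00010000) >> 16
    return EndX - StartX <= SrcWidth - 1
```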
Referring now to Figure 34, a flow chart depicting a method 382 for applying binning compensation filtering to image data received by the front-end pixel processing unit 130 is illustrated, in accordance with an embodiment. It will be appreciated that the method 382 illustrated in Figure 34 may apply to both vertical and horizontal scaling. Beginning at step 383, the DDA 372 is initialized and a DDA step value (which may correspond to DDAStepX for horizontal scaling and DDAStepY for vertical scaling) is determined. Next, at step 384, a current DDA position (currDDA), based on DDAStep, is determined. As discussed above, currDDA may correspond to an output pixel location. Using currDDA, the method 382 may determine a center pixel (currPixel) from the input pixel data that may be used for binning compensation filtering to determine a corresponding output value at currDDA, as indicated at step 385. Subsequently, at step 386, an index corresponding to currDDA (currIndex) may be determined based upon the fractional between-pixel position of currDDA relative to the input pixels (e.g., the row 375 of Figure 33). By way of example, in an embodiment in which the DDA includes 16 integer bits and 16 fraction bits, currPixel may be determined in accordance with Equations 6a and 6b, and currIndex may be determined in accordance with Equations 7a and 7b, as shown above. While the 16-bit integer/16-bit fraction configuration is described herein as one example, it should be appreciated that other configurations of the DDA 372 may be utilized in accordance with the present technique. By way of example, other embodiments of the DDA 372 may be configured to include a 12-bit integer portion and a 20-bit fraction portion, a 14-bit integer portion and an 18-bit fraction portion, and so forth.
Once currPixel and currIndex are determined, same-colored source pixels around currPixel may be selected for multi-tap filtering, as indicated by step 387. For instance, as discussed above, one embodiment may utilize 5-tap polyphase filtering in the horizontal direction (e.g., selecting 2 same-colored pixels on each side of currPixel) and 3-tap polyphase filtering in the vertical direction (e.g., selecting 1 same-colored pixel on each side of currPixel). Next, at step 388, once the source pixels are selected, filtering coefficients may be selected from the filter coefficients table 374 of the BCF 300 based upon currIndex.
Thereafter, at step 389, filtering may be applied to the source pixels to determine the value of an output pixel corresponding to the position represented by currDDA. For instance, in one embodiment, the source pixels may be multiplied by their respective filtering coefficients, and the results summed to obtain the output pixel value. The direction in which filtering is applied at step 389 may be vertical or horizontal depending on whether DDAStep is in the X (horizontal) or the Y (vertical) direction. Finally, at step 390, the DDA 372 is incremented by DDAStep, and the method 382 returns to step 384, whereby the next output pixel value is determined using the binning compensation filtering techniques discussed herein.
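One horizontal pass of the method 382 over a single Bayer row might look like the following sketch, reusing the curr_pixel and curr_index helpers from above. The tap spacing of two pixels (same color within a Bayer row), the edge replication, the coefficient table layout coeffs[index][tap], and the 0.5 DDA resolution for the even/odd test are all assumptions made for illustration.

```python
# Sketch of one horizontal pass of method 382 on a single raw Bayer row with
# a 5-tap, 8-phase polyphase filter; same-colored taps sit 2 pixels apart.

def bcf_hscale_row(src, coeffs, dda_init, dda_step, out_width):
    out = []
    currDDA = dda_init
    for _ in range(out_width):
        even = ((currDDA >> 15) & 1) == 0          # assumes 0.5 DDA resolution
        center = curr_pixel(currDDA, even)         # step 385
        index = curr_index(currDDA, even)          # step 386
        acc = 0.0
        for tap in range(-2, 3):                   # steps 387-389
            p = min(max(center + 2 * tap, 0), len(src) - 1)  # edge replication
            acc += coeffs[index][tap + 2] * src[p]
        out.append(acc)
        currDDA += dda_step                        # step 390
    return out
```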
Referring to Figure 35, the step 385 for determining currPixel from the method 382 is illustrated in more detail, in accordance with one embodiment. For instance, step 385 may include the sub-step 392 of determining whether the output pixel location corresponding to currDDA (from step 384) is even or odd. As discussed above, an even or odd output pixel may be determined based upon the least significant bit of currDDA based on DDAStep. For instance, given a DDAStep of 1.25, a currDDA value of 1.25 may be determined as odd, since the least significant bit (corresponding to bit 14 of the fraction portion of the DDA 372) has a value of 1. For a currDDA value of 2.5, bit 14 is 0, thus indicating an even output pixel location.
At decision logic 393, a determination is made as to whether the output pixel location corresponding to currDDA is even or odd. If the output pixel is even, decision logic 393 continues to sub-step 394, wherein currPixel is determined by incrementing the currDDA value by 1 and rounding the result to the nearest even input pixel location, as represented by Equation 6a above. If the output pixel is odd, then decision logic 393 continues to sub-step 395, wherein currPixel is determined by rounding the currDDA value to the nearest odd input pixel location, as represented by Equation 6b above. The currPixel value may then be applied to step 387 of the method 382 to select source pixels for filtering, as discussed above.
Also referring to Figure 36, the step 386 for determining currIndex from the method 382 is illustrated in more detail, in accordance with one embodiment. For instance, step 386 may include the sub-step 396 of determining whether the output pixel location corresponding to currDDA (from step 384) is even or odd. This determination may be performed in a similar manner as step 392 of Figure 35. At decision logic 397, a determination is made as to whether the output pixel location corresponding to currDDA is even or odd. If the output pixel is even, decision logic 397 continues to sub-step 398, wherein currIndex is determined by incrementing the currDDA value by one index step, and determining currIndex based upon the lowest order integer bit and the two highest order fraction bits of the DDA 372. For instance, in an embodiment in which 8 phases are provided between each same-colored pixel, and wherein the DDA includes 16 integer bits and 16 fraction bits, one index step may correspond to 0.125, and currIndex may be determined based upon bits [16:14] of the currDDA value incremented by 0.125 (e.g., Equation 7a). If the output pixel is odd, decision logic 397 continues to sub-step 399, wherein currIndex is determined by incrementing the currDDA value by one index step and one pixel shift, and determining currIndex based upon the lowest order integer bit and the two highest order fraction bits of the DDA 372. Thus, in an embodiment in which 8 phases are provided between each same-colored pixel, and wherein the DDA includes 16 integer bits and 16 fraction bits, one index step may correspond to 0.125, one pixel shift may correspond to 1.0 (a shift of 8 index steps to the next same-colored pixel), and currIndex may be determined based upon bits [16:14] of the currDDA value incremented by 1.125 (e.g., Equation 7b).
While the presently illustrated embodiment provides the BCF 300 as a component of the front-end pixel processing unit 130, other embodiments may incorporate the BCF 300 into the raw image data processing pipeline of the ISP pipe 82 which, as discussed further below, may include defective pixel detection/correction logic, gain/offset/compensation blocks, noise reduction logic, lens shading correction logic and demosaicing logic. Further, in embodiments in which the aforementioned defective pixel detection/correction logic, gain/offset/compensation blocks, noise reduction logic and lens shading correction logic do not rely upon the linear placement of the pixels, the BCF 300 may be incorporated with the demosaicing logic to perform binning compensation filtering and re-positioning of the pixels prior to demosaicing, as demosaicing generally does rely upon an even spatial positioning of the pixels. For instance, in one embodiment, the BCF 300 may be incorporated anywhere between the sensor input and the demosaicing logic, with temporal filtering and/or defective pixel detection/correction being applied to the raw image data prior to binning compensation.
As discussed above, the output of the BCF 300, which may be the output FEProcOut (109) having spatially evenly distributed image data (e.g., the sample 360 of Figure 31), may be forwarded to the ISP pipe processing logic 82 for additional processing. However, before shifting the focus of this discussion to the ISP pipe processing logic 82, a more detailed description of the various functionalities that may be provided by the statistics processing units (e.g., 120 and 122) that may be implemented in the ISP front-end logic 80 will first be provided.
Referring back to the general description of the statistics processing units 120 and 122, these units may be configured to collect various statistics about the image sensors that capture and provide the raw image signals (Sif0 and Sif1), such as statistics relating to auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation and lens shading correction, and so forth. In doing so, the statistics processing units 120 and 122 may first apply one or more image processing operations to their respective input signals Sif0 (from Sensor0) and Sif1 (from Sensor1).
For example, referring to Figure 37, a more detailed block diagram view of the statistics processing unit 120 associated with Sensor0 (90a) is illustrated in accordance with one embodiment. As shown, the statistics processing unit 120 may include the following functional blocks: defect pixel detection and correction logic 460, black level compensation (BLC) logic 462, lens shading correction logic 464, inverse BLC logic 466, and statistics collection logic 468. Each of these functional blocks will be discussed below. Further, it should be understood that the statistics processing unit 122 associated with Sensor1 (90b) may be implemented in a similar manner.
Initially, the output of the selection logic 124 (e.g., Sif0 or SifIn0) is received by the front-end defect pixel correction logic 460. As will be appreciated, a "defect pixel" may be understood to refer to an imaging pixel within the image sensor 90 that fails to sense light levels accurately. Defect pixels may be attributable to a number of factors, and may include "hot" (or leaky) pixels, "stuck" pixels, and "dead" pixels. A "hot" pixel generally appears brighter than a non-defective pixel given the same amount of light at the same spatial location. Hot pixels may result from reset failures and/or high leakage. For instance, a hot pixel may exhibit a charge leakage that is higher than the normal charge leakage of non-defective pixels, and thus may appear brighter than non-defective pixels. Additionally, "dead" and "stuck" pixels may be the result of impurities, such as dust or other trace materials, contaminating the image sensor during the fabrication and/or assembly process, which may cause certain defective pixels to be darker or brighter than a non-defective pixel, or may cause a defective pixel to be fixed at a particular value regardless of the amount of light to which it is actually exposed. Additionally, dead and stuck pixels may also result from circuit failures that occur during operation of the image sensor. By way of example, a stuck pixel may appear as always being on (e.g., fully charged) and thus appears brighter, whereas a dead pixel appears as always being off.
The defect pixel detection and correction (DPDC) logic 460 in the ISP front-end logic 80 may correct defect pixels (e.g., replace defective pixel values) before they are considered in statistics collection (e.g., 468). In one embodiment, defect pixel correction is performed independently for each color component (e.g., R, B, Gr, and Gb for a Bayer pattern). Generally, the front-end DPDC logic 460 may provide for dynamic defect correction, wherein the locations of defective pixels are determined automatically based upon directional gradients computed using neighboring pixels of the same color. As will be understood, a defect may be "dynamic" in the sense that the characterization of a pixel as being defective at a given time may depend upon the image data in the neighboring pixels. By way of example, a stuck pixel that is always on maximum brightness may not be regarded as a defective pixel if the location of the stuck pixel is in an area of the current image dominated by brighter or white colors. Conversely, if the stuck pixel is in a region of the current image dominated by black or darker colors, then the stuck pixel may be identified as a defective pixel during processing by the DPDC logic 460 and corrected accordingly.
The DPDC logic 460 may utilize one or more horizontal neighboring pixels of the same color on each side of a current pixel to determine whether the current pixel is defective, using pixel-to-pixel directional gradients. If the current pixel is identified as defective, the value of the defective pixel may be replaced with the value of a horizontal neighboring pixel. For instance, in one embodiment, five horizontal neighboring pixels of the same color that are inside the boundary of the raw frame 278 (Figure 19) are used, wherein the five horizontal neighboring pixels include the current pixel and two neighboring pixels on either side. Thus, as illustrated in Figure 38, for a given color component c and for the current pixel P, the horizontal neighbor pixels P0, P1, P2, and P3 may be considered by the DPDC logic 460. It should be noted, however, that depending upon the location of the current pixel P, pixels outside the raw frame 278 are not considered when calculating the pixel-to-pixel gradients.
For instance, as shown in Figure 38, in a "left edge" case 470, the current pixel P is at the leftmost edge of the raw frame 278, and thus the neighboring pixels P0 and P1 outside of the raw frame 278 are not considered, leaving only the pixels P, P2, and P3 (N=3). In a "left edge + 1" case 472, the current pixel P is one pixel away from the leftmost edge of the raw frame 278, and thus the pixel P0 is not considered. This leaves only the pixels P1, P, P2, and P3 (N=4). Further, in a "centered" case 474, the pixels P0 and P1 on the left side of the current pixel P, as well as the pixels P2 and P3 on the right side of the current pixel P, are within the raw frame 278 boundary, and therefore all of the neighboring pixels P0, P1, P2, and P3 (N=5) are considered in calculating the pixel-to-pixel gradients. Additionally, similar cases 476 and 478 may be encountered as the rightmost edge of the raw frame 278 is approached. For instance, in the "right edge - 1" case 476, the current pixel P is one pixel away from the rightmost edge of the raw frame 278, and thus the pixel P3 is not considered (N=4). Similarly, in the "right edge" case 478, the current pixel P is at the rightmost edge of the raw frame 278, and thus both of the neighboring pixels P2 and P3 are not considered (N=3).
In the illustrated embodiment, for each neighboring pixel (k = 0 to 3) within the picture boundary (e.g., the raw frame 278), the pixel-to-pixel gradients may be calculated as follows:
G_k = abs(P − P_k), for 0 ≤ k ≤ 3 (only for k within the raw frame)   (8)
Once the pixel-to-pixel gradients have been determined, defect pixel detection may be performed by the DPDC logic 460 as follows. First, it is assumed that a pixel is defective if a certain number of its gradients G_k are at or below a particular threshold, denoted by the variable dprTh. Thus, for each pixel, a count (C) of the number of gradients for neighboring pixels inside the picture boundary that are at or below the threshold dprTh is accumulated. By way of example, for each neighboring pixel within the raw frame 278, the accumulated count C of the gradients G_k that are at or below the threshold dprTh may be computed as follows:
C = Σ_k (G_k ≤ dprTh),   (9)

for 0 ≤ k ≤ 3 (only for k within the raw frame)
As will be appreciated, depending upon the color components, the threshold value dprTh may vary. Next, if the accumulated count C is determined to be less than or equal to a maximum count, denoted by the variable dprMaxC, then the pixel may be considered defective. This logic is expressed below:
if (C ≤ dprMaxC), then the pixel is defective.   (10)
Defective pixels are replaced using a number of replacement conventions. For instance, in one embodiment, a defective pixel may be replaced with the pixel to its immediate left, P1. Under a boundary condition (e.g., P1 is outside of the raw frame 278), the defective pixel may be replaced with the pixel to its immediate right, P2. Further, it should be understood that replacement values may be retained or propagated for successive defect pixel detection operations. For instance, referring to the set of horizontal pixels shown in Figure 38, if P0 or P1 was previously identified by the DPDC logic 460 as being a defective pixel, their corresponding replacement values may be used for the defect pixel detection and replacement of the current pixel P.
To summarize the above-discussed defect pixel detection and correction techniques, a flow chart depicting this process is provided in Figure 39 and referred to by reference number 480. As shown, process 480 begins at step 482, at which a current pixel (P) is received and a set of neighbor pixels is identified. In accordance with the embodiment described above, the neighbor pixels may include two horizontal pixels of the same color component from opposite sides of the current pixel (e.g., P0, P1, P2, and P3). Next, at step 484, horizontal pixel-to-pixel gradients are calculated with respect to each neighboring pixel within the raw frame 278, as described in Equation 8 above. Thereafter, at step 486, a count C of the number of gradients that are less than or equal to a particular threshold dprTh is determined. As shown at decision logic 488, if C is less than or equal to dprMaxC, then the process 480 continues to step 490, and the current pixel is identified as being defective. The defective pixel is then corrected at step 492 using a replacement value. Additionally, referring back to decision logic 488, if C is greater than dprMaxC, then the process continues to step 494, and the current pixel is identified as not being defective, and its value is not changed. A sketch of this detection/correction pass is given below.
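The following is a minimal sketch of one pass of the dynamic defect detection/correction described above (Equations 8-10 and process 480) over a single row of same-colored 8-bit samples; the threshold values, function name, and simplified detection-on-original-values behavior are illustrative assumptions rather than the programmed register values.

```python
def dpdc_row(row, dpr_th=16, dpr_max_c=0):
    """Detect and replace defective pixels along one same-color row."""
    out = list(row)
    for i, p in enumerate(row):
        # Same-color horizontal neighbors P0, P1 (left) and P2, P3 (right),
        # skipping any that fall outside the frame boundary.
        neighbors = [row[i + d] for d in (-2, -1, 1, 2) if 0 <= i + d < len(row)]
        grads = [abs(p - pk) for pk in neighbors]            # Equation 8
        c = sum(g <= dpr_th for g in grads)                  # Equation 9
        if c <= dpr_max_c:                                   # Equation 10
            # Replace with the left neighbor P1, or with P2 at the left edge.
            out[i] = row[i - 1] if i > 0 else row[i + 1]
    return out

print(dpdc_row([10, 12, 250, 11, 10]))  # the spike at index 2 is replaced -> 12
```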
It should be noted that the defect pixel detection/correction techniques applied during the ISP front-end statistics processing may be less robust than the defect pixel detection/correction performed in the ISP pipe logic 82. For instance, as will be discussed in further detail below, the defect pixel detection/correction performed in the ISP pipe logic 82 may, in addition to dynamic defect correction, also provide for fixed defect correction, wherein the locations of defective pixels are known a priori and loaded into one or more defect tables. Further, the dynamic defect correction in the ISP pipe logic 82 may also consider pixel gradients in both horizontal and vertical directions, and may provide for the detection/correction of speckle, as will be discussed below.
Returning to Figure 37, the output of the DPDC logic 460 is then passed to the black level compensation (BLC) logic 462. The BLC logic 462 may provide for digital gain, offset, and clipping independently for each color component "c" (e.g., R, B, Gr, and Gb for Bayer) on the pixels used for statistics collection. For instance, as expressed by the following operation, the input value for the current pixel is first offset by a signed value and then multiplied by a gain.
Y = (X + O[c]) × G[c],   (11)
wherein X represents the input pixel value for a given color component c (e.g., R, B, Gr, or Gb), O[c] represents a signed 16-bit offset for the current color component c, and G[c] represents a gain value for the color component c. In one embodiment, the gain G[c] may be a 16-bit unsigned number with 2 integer bits and 14 fraction bits (e.g., 2.14 floating-point representation), and the gain G[c] may be applied with rounding. By way of example only, the gain G[c] may have a range of between 0 to 4X (e.g., 4 times the input pixel value).
Next, as shown by Equation 12 below, the computed value Y, which is signed, may be clipped to a minimum and maximum range:
Y = (Y < min[c]) ? min[c] : ((Y > max[c]) ? max[c] : Y)   (12)
The variables min[c] and max[c] represent signed 16-bit clipping values for the minimum and maximum output values, respectively. In one embodiment, the BLC logic 462 may also be configured to maintain, per color component, a count of the number of pixels that were clipped above the maximum and below the minimum, respectively.
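A minimal sketch of the black level compensation computation of Equations 11 and 12 is given below, assuming per-channel parameters for a single Bayer component c; the parameter values and counter structure are illustrative.

```python
def blc(x, offset, gain, lo, hi, counts):
    """Offset, gain, and clip one pixel; tally clipped pixels (Eq. 11-12)."""
    y = round((x + offset) * gain)     # Equation 11 (gain applied with rounding)
    if y < lo:
        counts["low"] += 1             # pixels clipped below the minimum
        return lo
    if y > hi:
        counts["high"] += 1            # pixels clipped above the maximum
        return hi
    return y

counts = {"low": 0, "high": 0}
print(blc(100, offset=-16, gain=1.25, lo=0, hi=255, counts=counts))  # -> 105
```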
Subsequently, the output of the BLC logic 462 is forwarded to the lens shading correction (LSC) logic 464. The LSC logic 464 may be configured to apply an appropriate gain on a per-pixel basis to compensate for drop-offs in intensity, which are generally roughly proportional to the distance from the optical center of the lens 88 of the imaging device 30. As can be appreciated, such drop-offs may be the result of the geometric optics of the lens. By way of example, a lens having ideal optical properties may be modeled as the fourth power of the cosine of the incident angle, cos⁴(θ), referred to as the cos⁴ law. However, because lens manufacturing is not perfect, various irregularities in the lens may cause the optical properties to deviate from the assumed cos⁴ model. For instance, the thinner edges of the lens usually exhibit the most irregularities. Additionally, irregularities in lens shading patterns may also be the result of a microlens array within an image sensor not being perfectly aligned with the color filter array. Further, the infrared (IR) filter in some lenses may cause the drop-off to be illuminant-dependent and, thus, the lens shading gains may be adapted depending upon the light source detected.
Referring to Figure 40, a three-dimensional profile 496 depicting light intensity versus pixel position for a typical lens is illustrated. As shown, the light intensity near the center 498 of the lens gradually drops off towards the corners or edges 500 of the lens. The lens shading irregularities depicted in Figure 40 may be better illustrated by Figure 41, which shows a colored drawing of an image 502 that exhibits drop-offs in light intensity towards the corners and edges. Particularly, it should be noted that the light intensity at the approximate center of the image appears brighter than the light intensity at the corners and/or edges of the image.
In accordance with embodiments of the present techniques, lens shading correction gains may be specified as a two-dimensional grid of gains per color channel (e.g., Gr, R, B, Gb for a Bayer filter). The gain grid points may be distributed at fixed horizontal and vertical intervals within the raw frame 278 (Figure 19). As discussed above with respect to Figure 19, the raw frame 278 may include an active region 280 that defines an area on which processing is performed for a particular image processing operation. With regard to the lens shading correction operation, an active processing region, which may be referred to as the LSC region, is defined within the raw frame region 278. As will be discussed below, the LSC region must be completely inside or at the gain grid boundaries, otherwise the results may be undefined.
For instance, referring to Figure 42, an LSC region 504 and a gain grid 506 that may be defined within the raw frame 278 are shown. The LSC region 504 may have a width 508 and a height 510, and may be defined by an x-offset 512 and a y-offset 514 with respect to the boundary of the raw frame 278. Grid offsets (e.g., a grid x-offset 516 and a grid y-offset 518) from the base 520 of the gain grid 506 to the first pixel 522 in the LSC region 504 may also be provided. These offsets may be within the first grid interval for a given color component. Horizontal (x-direction) and vertical (y-direction) grid point intervals 524 and 526, respectively, may be specified independently for each color channel.
As discussed above, assuming the use of a Bayer color filter array, 4 color channels of grid gains (R, B, Gr, and Gb) may be defined. In one embodiment, a total of 4K (4096) grid points may be available, and for each color channel a base address for the starting location of its grid gains may be provided, such as by using a pointer. Further, the horizontal (524) and vertical (526) grid point intervals may be defined in terms of pixels at the resolution of one color plane and, in certain embodiments, may provide for grid point intervals separated by a power of 2, such as 8, 16, 32, 64, or 128 pixels, in the horizontal and vertical directions. As can be appreciated, by utilizing a power of 2, efficient implementation of gain interpolation using shift (e.g., division) and add operations may be achieved. Using these parameters, the same gain values can be used even as the image sensor cropping region is changing. For instance, only a few parameters need to be updated to align the grid points to the cropped region (e.g., updating the grid offsets 516 and 518) instead of updating all grid gain values. By way of example only, this may be useful when cropping is used during digital zooming operations. Further, while the gain grid 506 shown in the embodiment of Figure 42 is depicted as having generally equally spaced grid points, it should be understood that in other embodiments the grid points may not necessarily be equally spaced. For instance, in some embodiments, the grid points may be distributed unevenly (e.g., logarithmically), such that the grid points are less concentrated in the center of the LSC region 504, but more concentrated towards the corners of the LSC region 504, where lens shading distortion is typically more noticeable.
In accordance with the presently disclosed lens shading correction techniques, when a current pixel location is located outside of the LSC region 504, no gain is applied (e.g., the pixel passes unchanged). When the current pixel location is at a gain grid location, the gain value at that particular grid point may be used. However, when the current pixel location is between grid points, the gain may be calculated using bilinear interpolation. An example of interpolating the gain for the pixel location "G" of Figure 43 is provided below.
As shown in Figure 43, the pixel G is between the grid points G0, G1, G2, and G3, which may correspond, respectively, to the top-left, top-right, bottom-left, and bottom-right gains relative to the current pixel location G. The horizontal and vertical sizes of the grid interval are represented by X and Y, respectively. Additionally, ii and jj represent the horizontal and vertical pixel offsets, respectively, relative to the position of the top-left gain G0. Based upon these factors, the gain corresponding to the position G may be interpolated as follows:
G = [G0(Y − jj)(X − ii) + G1(Y − jj)(ii) + G2(jj)(X − ii) + G3(ii)(jj)] / (XY)   (13a)
The terms in Equation 13a above may then be combined to obtain the following expression:
G = {G0[XY − X(jj) − Y(ii) + (ii)(jj)] + G1[Y(ii) − (ii)(jj)] + G2[X(jj) − (ii)(jj)] + G3[(ii)(jj)]} / (XY)   (13b)
In one embodiment, this interpolation may be performed incrementally, instead of using a multiplier at each pixel, thus reducing computational complexity. For instance, the term (ii)(jj) may be realized using an adder that may be initialized to 0 at location (0, 0) of the gain grid 506 and incremented by the current row number each time the current column number increases by a pixel. As discussed above, since the values of X and Y may be selected as powers of two, the gain interpolation may be accomplished using simple shift operations. Thus, the multiplier is needed only at the grid point G0 (rather than at every pixel), and only addition operations are needed to determine the interpolated gain for the remaining pixels.
In certain embodiments, the interpolation of gains between the grid points may use 14-bit precision, and the grid gains may be unsigned 10-bit values with 2 integer bits and 8 fraction bits (e.g., 2.8 floating-point representation). Using this convention, the gain may have a range of between 0 and 4X, and the gain resolution between grid points may be 1/256.
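A minimal sketch of the bilinear interpolation of Equation 13a follows, assuming power-of-two grid intervals X and Y so that the division reduces to a shift in hardware; the gain values and offsets are illustrative.

```python
def interp_gain(g0, g1, g2, g3, ii, jj, x=32, y=32):
    """Bilinearly interpolate the LSC gain at offsets (ii, jj) from G0."""
    num = (g0 * (y - jj) * (x - ii) +
           g1 * (y - jj) * ii +
           g2 * jj * (x - ii) +
           g3 * jj * ii)
    # x*y = 1024 = 2**10, so this division is a right shift in hardware.
    return num / (x * y)

# Pixel a quarter of the way across and halfway down a 32x32 grid cell:
print(interp_gain(1.0, 1.2, 1.1, 1.4, ii=8, jj=16))  # -> 1.1125
```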
The lens shading correction techniques may be further illustrated by the process 528 shown in Figure 44. As shown, process 528 begins at step 530, at which the position of a current pixel is determined relative to the boundary of the LSC region 504 of Figure 42. Next, decision logic 532 determines whether the current pixel position is within the LSC region 504. If the current pixel position is outside of the LSC region 504, the process 528 continues to step 534, and no gain is applied to the current pixel (e.g., the pixel passes unchanged).
If the current pixel position is within the LSC region 504, the process 528 continues to decision logic 536, at which it is further determined whether the current pixel position corresponds to a grid point within the gain grid 506. If the current pixel position corresponds to a grid point, then the gain value at that grid point is selected and applied to the current pixel, as shown at step 538. If the current pixel position does not correspond to a grid point, then the process 528 continues to step 540, and a gain is interpolated based upon the bordering grid points (e.g., G0, G1, G2, and G3 of Figure 43). For instance, the interpolated gain may be computed in accordance with Equations 13a and 13b, as discussed above. Thereafter, the process 528 ends at step 542, at which the interpolated gain from step 540 is applied to the current pixel.
As will be appreciated, the process 528 may be repeated for each pixel of the image data. For instance, as shown in Figure 45, a three-dimensional profile depicting the gains that may be applied to each pixel position within an LSC region (e.g., 504) is illustrated. As shown, the gain applied at the corners 544 of the image may be generally greater than the gain applied to the center 546 of the image, due to the greater drop-off in light intensity at the corners, as shown in Figures 40 and 41. Using the presently described lens shading correction techniques, the appearance of light intensity drop-offs in the image may be reduced or substantially eliminated. For instance, Figure 46 provides an example of how the colored drawing of the image 502 from Figure 41 may appear after lens shading correction is applied. As shown, compared to the original image from Figure 41, the overall light intensity is generally more uniform across the image. Particularly, the light intensity at the approximate center of the image may be substantially equal to the light intensity values at the corners and/or edges of the image. Additionally, as discussed above, in some embodiments the interpolated gain calculation (Equations 13a and 13b) may be replaced with an additive "delta" between grid points by taking advantage of the sequential column and row incrementing structure. As will be appreciated, this reduces computational complexity.
In further embodiments, in addition to using grid gains, a global gain per color component that is scaled as a function of the distance from the image center may be used. The center of the image may be provided as an input parameter, and may be estimated by analyzing the light intensity amplitude of each image pixel in a uniformly illuminated image. The radial distance between the identified center pixel and the current pixel may then be used to obtain a linearly scaled radial gain, G_r, as shown below:
G_r = G_p[c] × R,   (14)

wherein G_p[c] represents a global gain parameter for each color component c (e.g., the R, B, Gr, and Gb components of a Bayer pattern), and wherein R represents the radial distance between the center pixel and the current pixel.
Referring to Figure 47, which shows the LSC region 504 discussed above, the distance R may be calculated or estimated using several techniques. As shown, the pixel C corresponding to the image center may have the coordinates (x0, y0), and the current pixel G may have the coordinates (xG, yG). In one embodiment, the LSC logic 464 may calculate the distance R using the following equation:
R = √((xG − x0)² + (yG − y0)²)   (15)
In another embodiment, a simpler estimation formula, shown below, may be utilized to obtain an estimated value for R:
R = α × max(abs(xG − x0), abs(yG − y0)) + β × min(abs(xG − x0), abs(yG − y0))   (16)
In Equation 16, the estimation coefficients α and β may be scaled to 8-bit values. By way of example only, in one embodiment, α may be approximately equal to 123/128 and β may be approximately equal to 51/128 to provide an estimated value for R. Using these coefficient values, the maximum error may be approximately 4%, with a median error of approximately 1.3%. Thus, even though the estimation technique may be somewhat less accurate in determining R than the calculation technique (Equation 15), the margin of error is low enough that the estimated values of R are suitable for determining the radial gain components of the present lens shading correction techniques.
The radial gain G_r may then be multiplied by the interpolated grid gain value G (Equations 13a and 13b) for the current pixel to determine a total gain that may be applied to the current pixel. As shown below, the output pixel Y is obtained by multiplying the input pixel value X by the total gain:
Y = (G × G_r × X)   (17)
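A minimal sketch of the radial gain path of Equations 14-17 is given below, using the alpha-max-plus-beta-min distance estimate of Equation 16 with the example coefficients 123/128 and 51/128; the gain values and coordinates are illustrative.

```python
def radial_distance(xg, yg, x0, y0, alpha=123 / 128, beta=51 / 128):
    """Estimate R without a square root (Equation 16)."""
    dx, dy = abs(xg - x0), abs(yg - y0)
    return alpha * max(dx, dy) + beta * min(dx, dy)

def shaded_pixel(x_in, grid_gain, gp_c, xg, yg, x0, y0):
    """Apply total gain = grid gain x radial gain (Equations 14 and 17)."""
    g_r = gp_c * radial_distance(xg, yg, x0, y0)   # Equation 14
    return grid_gain * g_r * x_in                  # Equation 17

# Pixel at (400, 300), image center (320, 240); true distance is 100,
# the estimate is ~100.78 (about 0.8% high here):
print(shaded_pixel(100, grid_gain=1.1, gp_c=0.01,
                   xg=400, yg=300, x0=320, y0=240))  # ~110.86
```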
Thus, in accordance with the present techniques, lens shading correction may be performed using only the interpolated gain, or using both the interpolated gain and a radial gain component. Alternatively, lens shading correction may also be accomplished using only a radial gain in conjunction with a radial grid table that compensates for radial approximation errors. For example, instead of a rectangular gain grid 506, as shown in Figure 42, a radial gain grid having a plurality of grid points defining gains in the radial and angular directions may be provided. Thus, when determining the gain to apply to a pixel that does not align with one of the radial grid points within the LSC region 504, interpolation may be applied using the four grid points that enclose the pixel to determine an appropriate interpolated lens shading gain.
Referring to Figure 48, the use of interpolated and radial gain components in lens shading correction is illustrated by the process 548. It should be noted that the process 548 may include steps that are similar to those of the process 528, described above in Figure 44. Accordingly, such steps have been numbered with like reference numbers. Beginning at step 530, the current pixel is received and its location relative to the LSC region 504 is determined. Next, decision logic 532 determines whether the current pixel position is within the LSC region 504. If the current pixel position is outside of the LSC region 504, the process 548 continues to step 534, and no gain is applied to the current pixel (e.g., the pixel passes unchanged). If the current pixel position is within the LSC region 504, then the process 548 may continue simultaneously to step 550 and decision logic 536. Referring first to step 550, data identifying the center of the image is retrieved. As discussed above, determining the center of the image may include analyzing the light intensity amplitudes of the pixels under uniform illumination. This may occur during calibration, for instance. Thus, it should be understood that step 550 does not necessarily encompass repeatedly calculating the center of the image for the processing of each pixel, but may refer to retrieving the data (e.g., coordinates) of a previously determined image center. Once the center of the image is identified, the process 548 may continue to step 552, wherein the distance (R) between the image center and the current pixel location is determined. As discussed above, the value of R may be calculated (Equation 15) or estimated (Equation 16). Then, at step 554, a radial gain component G_r may be computed using the distance R and the global gain parameter corresponding to the color component of the current pixel (Equation 14). The radial gain component G_r may be used to determine the total gain, as will be discussed in step 558 below.
Referring back to decision logic 536, it is determined whether the current pixel position corresponds to a grid point within the gain grid 506. If the current pixel position corresponds to a grid point, then the gain value at that grid point is determined, as shown at step 556. If the current pixel position does not correspond to a grid point, then the process 548 continues to step 540, and an interpolated gain is computed based upon the bordering grid points (e.g., G0, G1, G2, and G3 of Figure 43). For instance, the interpolated gain may be computed in accordance with Equations 13a and 13b, as discussed above. Next, at step 558, a total gain is determined based upon the radial gain determined at step 554, as well as one of the grid gains (step 556) or the interpolated gain (step 540). As can be appreciated, this may depend upon which branch decision logic 536 takes during the process 548. The total gain is then applied to the current pixel, as shown at step 560. Again, it should be noted that, like the process 528, the process 548 may also be repeated for each pixel of the image data.
The use of the radial gain in conjunction with the grid gains may offer several advantages. For instance, using a radial gain allows for the use of a single common gain grid for all color components. This may greatly reduce the total storage space required for storing separate gain grids for each color component. For instance, in a Bayer image sensor, the use of a single gain grid for each of the R, B, Gr, and Gb components may reduce the gain grid data by approximately 75%. As will be appreciated, this reduction in grid gain data may decrease implementation costs, as grid gain data tables may account for a significant portion of memory or chip area in image processing hardware. Further, depending upon the hardware implementation, the use of a single set of gain grid values may offer additional advantages, such as reducing overall chip area (e.g., when the gain grid values are stored in an on-chip memory) and reducing memory bandwidth requirements (e.g., when the gain grid values are stored in an off-chip external memory).
Having thoroughly described the functionalities of the lens shading correction logic 464 shown in Figure 37, the output of the LSC logic 464 is subsequently forwarded to the inverse black level compensation (IBLC) logic 466. The IBLC logic 466 provides gain, offset, and clipping independently for each color component (e.g., R, B, Gr, and Gb), and generally performs the inverse function of the BLC logic 462. For instance, as shown by the following operation, the value of the input pixel is first multiplied by a gain and then offset by a signed value.
Y = (X × G[c]) + O[c],   (18)
wherein X represents the input pixel value for a given color component c (e.g., R, B, Gr, or Gb), O[c] represents a signed 16-bit offset for the current color component c, and G[c] represents a gain value for the color component c. In one embodiment, the gain G[c] may have a range of between approximately 0 to 4X (4 times the input pixel value X). It should be noted that these variables may be the same variables discussed above in Equation 11. The computed value Y may be clipped to a minimum and maximum range using, for example, Equation 12. In one embodiment, the IBLC logic 466 may be configured to maintain, per color component, a count of the number of pixels that were clipped above the maximum and below the minimum, respectively.
Thereafter, the output of the IBLC logic 466 is received by the statistics collection block 468, which may provide for the collection of various statistical data points about the image sensor(s) 90, such as those relating to auto-exposure (AE), auto-white-balance (AWB), auto-focus (AF), flicker detection, and so forth. With this in mind, a description of certain embodiments of the statistics collection block 468, and various aspects related thereto, is provided below.
As will be appreciated, AWB, AE, and AF statistics may be used in the acquisition of images in digital still cameras as well as video cameras. For simplicity, AWB, AE, and AF statistics may be collectively referred to herein as "3A statistics." In the embodiment of the ISP front-end logic illustrated in Figure 37, the architecture of the statistics collection logic 468 ("3A statistics logic") may be implemented in hardware, software, or a combination thereof. Further, control software or firmware may be utilized to analyze the statistics data collected by the 3A statistics logic 468 and to control various parameters of the lens (e.g., focal length), of the sensor (e.g., analog gains, integration times), and of the ISP pipeline 82 (e.g., digital gains, color correction matrix coefficients). In certain embodiments, the image processing circuitry 32 may be configured to provide flexibility in statistics collection to enable control software or firmware to implement various AWB, AE, and AF algorithms.
With regard to white balancing (AWB), the image sensor response at each pixel may depend upon the illumination source, since the light source is reflected from objects in the image scene. Thus, each pixel value recorded in the image scene is related to the color temperature of the light source. For instance, Figure 49 shows a graph 570 illustrating the color range of white areas under low and high color temperatures for the YCbCr color space. As shown, the x-axis of the graph 570 represents the blue-difference chroma (Cb) of the YCbCr color space, and the y-axis represents the red-difference chroma (Cr). The graph 570 also shows a low color temperature axis 572 and a high color temperature axis 574. The region 576 in which the axes 572 and 574 are positioned represents the color range of white areas under low and high color temperatures in the YCbCr color space. It should be understood, however, that the YCbCr color space is merely one example of a color space that may be used in conjunction with the auto-white-balance processing of the present embodiment. Other embodiments may utilize any suitable color space. For instance, in certain embodiments, other suitable color spaces may include a Lab (CIELab) color space (e.g., based on CIE 1976), a red/blue normalized color space (e.g., an R/(R+2G+B) and B/(R+2G+B) color space; an R/G and B/G color space; a Cb/Y and Cr/Y color space; etc.). Accordingly, for the purposes of this disclosure, the axes of the color space used by the 3A statistics logic 468 may be referred to as C1 and C2 (as is the case in Figure 49).
When a white object is illuminated under a low color temperature, it may appear reddish in the captured image. Conversely, a white object illuminated under a high color temperature may appear bluish in the captured image. The goal of white balancing is, therefore, to adjust the RGB values such that the image appears to the human eye as if it were taken under canonical light. Thus, in the context of imaging statistics relating to white balance, color information about white objects is collected to determine the color temperature of the light source. In general, white balance algorithms may include two main steps. First, the color temperature of the light source is estimated. Second, the estimated color temperature is used to adjust color gain values and/or to determine/adjust the coefficients of a color correction matrix. Such gains may be a combination of analog and digital image sensor gains, as well as ISP digital gains.
For instance, in some embodiments, the imaging device 30 may be calibrated using multiple different reference illuminants. Accordingly, the white point of the current scene may be determined by selecting the color correction coefficients corresponding to the reference illuminant that most closely matches the illuminant of the current scene. By way of example only, one embodiment may calibrate the imaging device 30 using five reference illuminants: a low color temperature illuminant, a middle-low color temperature illuminant, a middle color temperature illuminant, a middle-high color temperature illuminant, and a high color temperature illuminant. As shown in Figure 50, one embodiment may define white balance gains using the following color correction profiles: Horizon (H) (simulating a color temperature of approximately 2300 degrees), Incandescent (A or IncA) (simulating a color temperature of approximately 2856 degrees), D50 (simulating a color temperature of approximately 5000 degrees), D65 (simulating a color temperature of approximately 6500 degrees), and D75 (simulating a color temperature of approximately 7500 degrees).
Depending upon the illuminant of the current scene, white balance gains may be determined using the gains corresponding to the reference illuminant that most closely matches the current illuminant. For instance, if the statistics logic 468 (described in more detail below with respect to Figure 51) determines that the current illuminant approximately matches the reference middle color temperature illuminant, D50, then white balance gains of approximately 1.37 and 1.23 may be applied to the red and blue color channels, respectively, while approximately no gain (1.0) is applied to the green channels (G0 and G1 for Bayer data). In some embodiments, if the current illuminant color temperature is between two reference illuminants, white balance gains may be determined by interpolating the white balance gains between the two reference illuminants. Further, while the present example shows an imaging device calibrated using H, A, D50, D65, and D75 illuminants, it should be understood that any suitable type of illuminant may be used for camera calibration, such as TL84 or CWF (fluorescent reference illuminants), and so forth.
As will be discussed further below, several statistics may be provided for AWB, including a two-dimensional (2D) color histogram and RGB or YCC sums, thereby providing multiple programmable color ranges. For instance, in one embodiment, the statistics logic 468 may provide a set of multiple pixel filters, of which a subset may be selected for AWB processing. In one embodiment, eight sets of filters, each with different configurable parameters, may be provided, and three sets of color range filters may be selected from the set for gathering tile statistics, as well as for gathering statistics for each floating window. By way of example, a first selected filter may be configured to cover the current color temperature to obtain an accurate color estimate, a second selected filter may be configured to cover low color temperature areas, and a third selected filter may be configured to cover high color temperature areas. This particular configuration may enable the AWB algorithm to adjust the current color temperature area as the light source changes. Further, the 2D color histogram may be utilized to determine the global and local illuminants and to determine various pixel filter thresholds for accumulating RGB values. Again, it should be understood that the selection of three pixel filters is meant to illustrate just one embodiment. In other embodiments, fewer or more pixel filters may be selected for AWB statistics.
Further, in addition to the three selected pixel filters, one additional pixel filter may also be used for auto-exposure (AE), which generally refers to a process of adjusting pixel integration time and gains to control the luminance of the captured image. For instance, auto-exposure may control the amount of light from the scene that is captured by the image sensor(s) by setting the integration time. In certain embodiments, tiles and floating windows of luminance statistics may be collected via the 3A statistics logic 468 and processed to determine integration and gain control parameters.
Further, auto-focus may refer to the determination of the optimal focal length of the lens in order to substantially optimize the focus of the image. In certain embodiments, floating windows of high frequency statistics may be collected, and the focal length of the lens may be adjusted to bring the image into focus. As discussed further below, in one embodiment, auto-focus adjustments may utilize coarse and fine adjustments based upon one or more metrics, referred to as auto-focus scores (AF scores), to bring the image into focus. Further, in some embodiments, AF statistics/scores may be determined for different colors, and the relativity between the AF statistics/scores for each color channel may be used to determine the direction of focus.
Thus, these various types of statistics, among others, may be determined and collected via the statistics collection block 468. As shown, the output STATS0 of the statistics collection block 468 of the Sensor0 statistics processing unit 120 may be sent to the memory 108 and then routed to the control logic 84, or, alternatively, may be sent directly to the control logic 84. Further, it should be understood that the Sensor1 statistics processing unit 122 may also include a similarly configured 3A statistics collection block that provides the statistics STATS1, as shown in Figure 8.
As discussed above, the control logic 84, which may be a dedicated processor in the ISP subsystem 32 of the device 10, may process the collected statistical data to determine one or more control parameters for controlling the imaging device 30 and/or the image processing circuitry 32. For instance, such control parameters may include parameters for operating the lens of the image sensor 90 (e.g., focal length adjustment parameters), image sensor parameters (e.g., analog and/or digital gains, integration times), as well as ISP pipe processing parameters (e.g., digital gain values, color correction matrix (CCM) coefficients). Additionally, as mentioned above, in certain embodiments, statistics processing may occur at a precision of 8 bits and, thus, raw pixel data having a higher bit-depth may be down-scaled to an 8-bit format for statistics purposes. As discussed above, down-scaling to 8 bits (or any other lower bit resolution) may reduce hardware size (e.g., area) and also reduce processing complexity, as well as allow the statistics data to be more robust to noise (e.g., using spatial averaging of the image data).
With the foregoing in mind, Figure 51 is a block diagram depicting logic for implementing one embodiment of the 3A statistics logic 468. As shown, the 3A statistics logic 468 may receive a signal 582 representing Bayer RGB data which, as shown in Figure 37, may correspond to the output of the inverse BLC logic 466. The 3A statistics logic 468 may process the Bayer RGB data 582 to obtain various statistics 584, which may represent the output STATS0 of the 3A statistics logic 468, as shown in Figure 37, or, alternatively, the output STATS1 of the statistics logic associated with the Sensor1 statistics processing unit 122.
In the illustrated embodiment, to make the statistics more robust to noise, the incoming Bayer RGB pixels 582 are first averaged by the logic 586. For instance, the averaging may be performed in a window size of 4×4 sensor pixels consisting of four 2×2 Bayer quads (e.g., 2×2 blocks of pixels representing the Bayer pattern), and the averaged red (R), green (G), and blue (B) values in the 4×4 window may be computed and converted to 8 bits, as mentioned above. This process is illustrated in more detail with respect to Figure 52, which shows a 4×4 window 588 of pixels formed as four 2×2 Bayer quads 590. Using this arrangement, each color channel includes a 2×2 block of corresponding pixels within the window 588, and the same-colored pixels may be summed and averaged to produce an average color value for each color channel within the window 588. For instance, the red pixels 594 within the sample 588 may be averaged to obtain an average red value (R_AV) 604, and the blue pixels 596 may be averaged to obtain an average blue value (B_AV) 606. With regard to the averaging of the green pixels, several techniques may be utilized, since the Bayer pattern has twice as many green samples as red or blue samples. In one embodiment, the average green value (G_AV) 602 may be obtained by averaging just the Gr pixels 592, just the Gb pixels 598, or all of the Gr and Gb pixels 592 and 598 together. In another embodiment, the Gr and Gb pixels 592 and 598 in each Bayer quad 590 may be averaged, and the averages of the green values for each Bayer quad 590 may be further averaged together to obtain G_AV 602. As will be appreciated, the averaging of the pixel values across pixel blocks may provide for the reduction of noise. Further, it should be understood that the use of a 4×4 block as a window sample is merely intended to provide one example. Indeed, in other embodiments, any suitable block size may be utilized (e.g., 8×8, 16×16, 32×32, etc.). A sketch of this down-sampling step is given below.
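The following is a minimal sketch of the 4×4 Bayer averaging described above, assuming a quad layout with Gr in the top-left position and the variant that averages all Gr and Gb pixels together; the use of numpy is illustrative of the arithmetic, not of the hardware.

```python
import numpy as np

def bayer_4x4_average(win: np.ndarray):
    """Return (R_AV, G_AV, B_AV) for one 4x4 window of four Bayer quads."""
    assert win.shape == (4, 4)
    gr = win[0::2, 0::2]  # Gr samples of each 2x2 quad
    r = win[0::2, 1::2]   # R samples
    b = win[1::2, 0::2]   # B samples
    gb = win[1::2, 1::2]  # Gb samples
    g = np.concatenate([gr.ravel(), gb.ravel()])  # all Gr and Gb together
    return r.mean(), g.mean(), b.mean()

window = np.arange(16, dtype=np.float64).reshape(4, 4)
print(bayer_4x4_average(window))  # -> (6.0, 7.5, 9.0)
```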
Subsequently, the down-scaled Bayer RGB values 610 are input to the color space conversion logic units 612 and 614. Because some of the 3A statistics may rely upon pixels after a color space conversion has been applied, the color space conversion (CSC) logic 612 and the CSC logic 614 may be configured to convert the down-sampled Bayer RGB values 610 into one or more other color spaces. In one embodiment, the CSC logic 612 may provide for a non-linear space conversion and the CSC logic 614 may provide for a linear space conversion. Thus, the CSC logic units 612 and 614 may convert the raw image data from sensor Bayer RGB to another color space (e.g., sRGB_linear, sRGB, YCbCr, etc.) that may be more ideal or suitable for performing the white point estimation used for white balance.
In the present embodiment, the non-linear CSC logic 612 may be configured to perform a 3×3 matrix multiply, followed by a non-linear mapping implemented as a lookup table, and further followed by another 3×3 matrix multiply with an added offset. This allows the 3A statistics color space conversion to replicate, for a given color temperature, the color processing of the RGB processing in the ISP pipeline 82 (e.g., applying white balance gain, applying a color correction matrix, applying RGB gamma adjustments, and performing color space conversion). It may also provide for the conversion of the Bayer RGB values to a more color-consistent color space, such as CIELab, or any of the other color spaces discussed above (e.g., YCbCr, a red/blue normalized color space, etc.). Under some conditions, a Lab color space may be more suitable for white balance operations because the chromaticity is more linear with respect to brightness.
As shown in Figure 51, the output pixels from the Bayer RGB down-sampled signal 610 are processed with a first 3×3 color correction matrix (3A_CCM), referred to herein by reference number 616. In the present embodiment, the 3A_CCM 616 may be configured to convert from a camera RGB color space (camRGB) to a linear sRGB calibrated space (sRGB_linear). A programmable color space conversion that may be used in one embodiment is provided below by Equations 19-21:
sR_linear = max(0, min(255, (3A_CCM_00 × R + 3A_CCM_01 × G + 3A_CCM_02 × B)));   (19)
sG_linear = max(0, min(255, (3A_CCM_10 × R + 3A_CCM_11 × G + 3A_CCM_12 × B)));   (20)
sB_linear = max(0, min(255, (3A_CCM_20 × R + 3A_CCM_21 × G + 3A_CCM_22 × B)));   (21)
wherein 3A_CCM_00 to 3A_CCM_22 represent the signed coefficients of the matrix 616. Thus, each of the sR_linear, sG_linear, and sB_linear components of the sRGB_linear color space may be determined by first determining the sum of the red, blue, and green down-sampled Bayer RGB values with the corresponding 3A_CCM coefficients applied, and then clipping this value to either 0 or 255 (the minimum and maximum pixel values for 8-bit pixel data) if the value exceeds 255 or is less than 0. The resulting sRGB_linear values are represented in Figure 51 by reference number 618 as the output of the 3A_CCM 616. Additionally, the 3A statistics logic 468 may maintain a count of the number of clipped pixels for each of the sR_linear, sG_linear, and sB_linear components, as expressed below:
3A_CCM_R_clipcount_low: number of sR_linear pixels < 0 clipped
3A_CCM_R_clipcount_high: number of sR_linear pixels > 255 clipped
3A_CCM_G_clipcount_low: number of sG_linear pixels < 0 clipped
3A_CCM_G_clipcount_high: number of sG_linear pixels > 255 clipped
3A_CCM_B_clipcount_low: number of sB_linear pixels < 0 clipped
3A_CCM_B_clipcount_high: number of sB_linear pixels > 255 clipped
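A minimal sketch of the 3A_CCM conversion of Equations 19-21 with the clip counters listed above follows; the matrix values are illustrative placeholders, not calibrated coefficients.

```python
CCM = [[1.50, -0.30, -0.20],   # 3A_CCM_00..02
       [-0.25, 1.40, -0.15],   # 3A_CCM_10..12
       [-0.10, -0.40, 1.50]]   # 3A_CCM_20..22

clipcount = {f"3A_CCM_{ch}_clipcount_{side}": 0
             for ch in "RGB" for side in ("low", "high")}

def ccm_apply(r, g, b):
    """camRGB -> sRGB_linear with clipping to [0, 255] and clip counting."""
    out = []
    for ch, row in zip("RGB", CCM):
        v = row[0] * r + row[1] * g + row[2] * b
        if v < 0:
            clipcount[f"3A_CCM_{ch}_clipcount_low"] += 1
            v = 0.0
        elif v > 255:
            clipcount[f"3A_CCM_{ch}_clipcount_high"] += 1
            v = 255.0
        out.append(v)
    return tuple(out)

print(ccm_apply(200, 180, 40))  # ~(238, 196, 0); the B channel clips low
```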
Next, the sRGB_linear pixels 618 may be processed using a non-linear lookup table 620 to produce sRGB pixels 622. The lookup table 620 may contain entries of 8-bit values, with each table entry value representing an output level. In one embodiment, the lookup table 620 may include 65 evenly distributed input entries, wherein a table index represents input values in steps of 4. When an input value falls between intervals, the output values are linearly interpolated.
As will be appreciated, the sRGB color space may represent, for a given white point, the color space of the final image produced by the imaging device 30 (Fig. 7), as the white balance statistics collection is performed in the color space of the final image produced by the imaging device. In one embodiment, a white point may be determined by matching the characteristics of the image scene to one or more reference illuminants based, for example, upon red-to-green and/or blue-to-green ratios. For instance, one reference illuminant may be D65, a CIE standard illuminant for simulating daylight conditions. In addition to D65, calibration of the imaging device 30 may also be performed for other different reference illuminants, and the white balance determination process may include determining a current illuminant so that processing (e.g., color balancing) may be adjusted for the current illuminant based upon corresponding calibration points. By way of example, in one embodiment, the imaging device 30 and the 3A statistics logic 468 may be calibrated using, in addition to D65, a cool white fluorescent (CWF) reference illuminant, the TL84 reference illuminant (another fluorescent source), and the IncA (or A) reference illuminant, which simulates incandescent lighting. Additionally, as discussed above, various other illuminants corresponding to different color temperatures (e.g., H, IncA, D50, D65, D75, etc.) may also be used in camera calibration for white balance processing. Thus, a white point may be determined by analyzing the image scene and determining which reference illuminant most closely matches the current illuminant source.
Still referring to the non-linear CSC logic 612, the sRGB pixel output 622 of the lookup table 620 may be further processed with a second 3×3 color correction matrix 624, referred to herein as 3A_CSC. In the depicted embodiment, the 3A_CSC matrix 624 is shown as being configured to convert from the sRGB color space to the YCbCr color space, although it may also be configured to convert the sRGB values into other color spaces. By way of example, the following programmable color space conversion (Equations 22-27) may be used:
Y = 3A_CSC_00 × sR + 3A_CSC_01 × sG + 3A_CSC_02 × sB + 3A_OffsetY;   (22)
Y = max(3A_CSC_MIN_Y, min(3A_CSC_MAX_Y, Y));   (23)
C1 = 3A_CSC_10 × sR + 3A_CSC_11 × sG + 3A_CSC_12 × sB + 3A_OffsetC1;   (24)
C1 = max(3A_CSC_MIN_C1, min(3A_CSC_MAX_C1, C1));   (25)
C2 = 3A_CSC_20 × sR + 3A_CSC_21 × sG + 3A_CSC_22 × sB + 3A_OffsetC2;   (26)
C2 = max(3A_CSC_MIN_C2, min(3A_CSC_MAX_C2, C2));   (27)
wherein 3A_CSC_00 to 3A_CSC_22 represent the signed coefficients of the matrix 624; 3A_OffsetY, 3A_OffsetC1, and 3A_OffsetC2 represent signed offsets; and C1 and C2 represent different colors, here the blue-difference chroma (Cb) and red-difference chroma (Cr), respectively. It should be understood, however, that C1 and C2 may represent any suitable difference chroma colors, and need not necessarily be the Cb and Cr colors.
As shown in Equations 22-27, in determining each component of YCbCr, appropriate coefficients from the matrix 624 are applied to the sRGB values 622, and the result is summed with a corresponding offset (e.g., Equations 22, 24, and 26). Essentially, this step is a 3×1 matrix multiplication step. The result from the matrix multiplication is then clipped between a maximum and minimum value (e.g., Equations 23, 25, and 27). The associated minimum and maximum clipping values may be programmable and may depend, for instance, upon the particular imaging or video standard (e.g., BT.601 or BT.709) being utilized.
The 3A statistics logic 468 may also maintain a count of the number of clipped pixels for each of the Y, C1, and C2 components, as expressed below:
3A_CSC_Y_clipcount_low: number of Y pixels < 3A_CSC_MIN_Y clipped
3A_CSC_Y_clipcount_high: number of Y pixels > 3A_CSC_MAX_Y clipped
3A_CSC_C1_clipcount_low: number of C1 pixels < 3A_CSC_MIN_C1 clipped
3A_CSC_C1_clipcount_high: number of C1 pixels > 3A_CSC_MAX_C1 clipped
3A_CSC_C2_clipcount_low: number of C2 pixels < 3A_CSC_MIN_C2 clipped
3A_CSC_C2_clipcount_high: number of C2 pixels > 3A_CSC_MAX_C2 clipped
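A minimal sketch of the sRGB-to-YC1C2 step of Equations 22-27 is given below; the BT.601-style coefficients, offsets, and clip limits are illustrative assumptions, not the programmed register values.

```python
CSC = [[0.299, 0.587, 0.114],      # Y row (3A_CSC_00..02)
       [-0.169, -0.331, 0.500],    # C1 (Cb) row
       [0.500, -0.419, -0.081]]    # C2 (Cr) row
OFFSET = [0, 128, 128]             # 3A_OffsetY, 3A_OffsetC1, 3A_OffsetC2
LIMITS = [(16, 235), (16, 240), (16, 240)]  # (MIN, MAX) per component

def csc_apply(sr, sg, sb):
    """Apply the 3x1 matrix multiply, add offsets, and clip (Eq. 22-27)."""
    out = []
    for row, off, (lo, hi) in zip(CSC, OFFSET, LIMITS):
        v = row[0] * sr + row[1] * sg + row[2] * sb + off
        out.append(max(lo, min(hi, v)))  # clip count bookkeeping omitted
    return tuple(out)

print(csc_apply(128, 128, 128))  # mid-gray -> approximately (128, 128, 128)
```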
The output pixels from the Bayer RGB down-sampled signal 610 may also be provided to the linear color space conversion logic 614, which may be configured to implement a camera color space conversion. For instance, the output pixels 610 from the Bayer RGB down-sample logic 586 may be processed via another 3×3 color conversion matrix (3A_CSC2) 630 of the CSC logic 614 to convert from sensor RGB (camRGB) to a linearly white-balanced color space (camYC1C2), wherein C1 and C2 may correspond to Cb and Cr, respectively. In one embodiment, the chroma pixels may be scaled by luma, which may be beneficial in implementing a color filter that has improved color consistency and is robust to color shifts due to luma changes. An example of how the camera color space conversion may be performed using the 3×3 matrix 630 is provided below in Equations 28-31:
camY = 3A_CSC2_00 × R + 3A_CSC2_01 × G + 3A_CSC2_02 × B + 3A_Offset2Y;   (28)
camY = max(3A_CSC2_MIN_Y, min(3A_CSC2_MAX_Y, camY));   (29)
camC1 = (3A_CSC2_10 × R + 3A_CSC2_11 × G + 3A_CSC2_12 × B);   (30)
camC2 = (3A_CSC2_20 × R + 3A_CSC2_21 × G + 3A_CSC2_22 × B);   (31)
wherein 3A_CSC2_00 to 3A_CSC2_22 represent the signed coefficients of the matrix 630; 3A_Offset2Y represents a signed offset for camY; and C1 and C2 represent different colors, here the blue-difference chroma (Cb) and red-difference chroma (Cr), respectively. As shown in Equation 28, to determine camY, corresponding coefficients from the matrix 630 are applied to the Bayer RGB values 610, and the result is summed with 3A_Offset2Y. This result is then clipped between a maximum and minimum value, as shown in Equation 29. As discussed above, the clipping limits may be programmable.
At this point, the camC1 and camC2 pixels of the output 632 are signed. As discussed above, in some embodiments, the chroma pixels may be scaled. For example, one technique for implementing chroma scaling is shown below:
camC1 = camC1 × ChromaScale × 255 / (camY ? camY : 1);   (32)
camC2 = camC2 × ChromaScale × 255 / (camY ? camY : 1);   (33)
wherein ChromaScale represents a floating-point scaling factor between 0 and 8. In Equations 32 and 33, the expression (camY ? camY : 1) is meant to prevent a divide-by-zero condition. That is, if camY is equal to 0, the value of camY is set to 1. Further, in one embodiment, ChromaScale may be set to one of two possible values depending upon the sign of camC1. For instance, as shown below in Equation 34, ChromaScale may be set to a first value (ChromaScale0) if camC1 is negative, or else may be set to a second value (ChromaScale1):
ChromaScale = ChromaScale0 if (camC1 < 0)   (34)
              ChromaScale1 otherwise
Thereafter, chroma offsets are added, and the camC1 and camC2 chroma pixels are clipped, as shown below in Equations 35 and 36, to generate corresponding unsigned pixel values:
camC1 = max(3A_CSC2_MIN_C1, min(3A_CSC2_MAX_C1, (camC1 + 3A_Offset2C1)))   (35)
camC2 = max(3A_CSC2_MIN_C2, min(3A_CSC2_MAX_C2, (camC2 + 3A_Offset2C2)))   (36)
wherein 3A_CSC2_00 to 3A_CSC2_22 are the signed coefficients of the matrix 630, and 3A_Offset2C1 and 3A_Offset2C2 are signed offsets. Further, counts of the number of pixels clipped for camY, camC1, and camC2 are maintained, as shown below:
3A_CSC2_Y_clipcount_low: number of camY pixels < 3A_CSC2_MIN_Y clipped
3A_CSC2_Y_clipcount_high: number of camY pixels > 3A_CSC2_MAX_Y clipped
3A_CSC2_C1_clipcount_low: number of camC1 pixels < 3A_CSC2_MIN_C1 clipped
3A_CSC2_C1_clipcount_high: number of camC1 pixels > 3A_CSC2_MAX_C1 clipped
3A_CSC2_C2_clipcount_low: number of camC2 pixels < 3A_CSC2_MIN_C2 clipped
3A_CSC2_C2_clipcount_high: number of camC2 pixels > 3A_CSC2_MAX_C2 clipped
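The following is a minimal sketch of the luma-scaled chroma path of Equations 32-36, assuming camY, camC1, and camC2 have already been produced by the 3A_CSC2 matrix (Equations 28-31); the scale factors, offset, and clip limits are illustrative.

```python
def chroma_scale(cam_y, cam_c1, cam_c2,
                 scale0=1.5, scale1=1.0, offset=128, lo=0, hi=255):
    """Scale signed chroma by luma, add offsets, and clip to unsigned range."""
    denom = cam_y if cam_y else 1                    # avoid divide-by-zero
    scale = scale0 if cam_c1 < 0 else scale1         # Equation 34
    c1 = cam_c1 * scale * 255 / denom                # Equation 32
    c2 = cam_c2 * scale * 255 / denom                # Equation 33
    c1 = max(lo, min(hi, c1 + offset))               # Equation 35
    c2 = max(lo, min(hi, c2 + offset))               # Equation 36
    return c1, c2

print(chroma_scale(cam_y=200, cam_c1=-40, cam_c2=25))  # -> (51.5, 175.8125)
```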
Accordingly, in the present embodiment, the non-linear and linear color space conversion logic 612 and 614 may provide pixel data in a variety of color spaces: sRGB_linear (signal 618), sRGB (signal 622), YC1C2 (signal 626), and camYC1C2 (signal 632). It should be understood that the coefficients for each conversion matrix 616 (3A_CCM), 624 (3A_CSC), and 630 (3A_CSC2), as well as the values in the lookup table 620, may be independently set and programmed.
Still referring to Figure 51, the chroma output pixels from either the non-linear color space conversion (YCbCr 626) or the camera color space conversion (camYCbCr 632) may be used to generate a two-dimensional (2D) color histogram 636. As shown, selection logic 638 and 640, which may be implemented as multiplexers or by any other suitable logic, may be configured to select between the luma and chroma pixels of either the non-linear or the camera color space conversion. The selection logic 638 and 640 may operate in response to respective control signals which, in one embodiment, may be supplied by the main control logic 84 of the image processing circuitry 32 (Fig. 7) and may be set via software.
For the present example, it may be assumed that the selection logic 638 and 640 selects the YC1C2 color space conversion (626), where the first component is luma and where C1 and C2 are the first and second colors (e.g., Cb, Cr). The 2D histogram 636 in the C1-C2 color space is generated for one window. For instance, the window may be specified with a column start and width, and a row start and height. In one embodiment, the window position and size may be set as a multiple of 4 pixels, and 32×32 bins may be used, for a total of 1024 bins. The bin boundaries may be at a fixed interval and, to allow for zooming and panning of the histogram collection in specific areas of the color space, a pixel scaling and offset may be defined.
The upper 5 bits of C1 and C2 (representing a total of 32 values) after the offset and scaling may be used to determine the bin. The bin indices for C1 and C2, referred to herein as C1_index and C2_index, may be determined as follows:
C1_index = ((C1 - C1_offset) >> (3 - C1_scale))    (37)
C2_index = ((C2 - C2_offset) >> (3 - C2_scale))    (38)
Once the indices are determined, the color histogram bins are incremented by a Count value (which, in one embodiment, may have a value between 0 and 3) if the bin indices are in the range [0, 31], as shown below in equation 39. Effectively, this allows the color counts to be weighted based upon luma values (e.g., brighter pixels are weighted more heavily, instead of weighting every pixel equally (e.g., by 1)).
if (C1_index >= 0 && C1_index <= 31 && C2_index >= 0 && C2_index <= 31)    (39)
    StatsCbCrHist[C2_index & 31][C1_index & 31] += Count;
Here, Count is determined based upon the selected luma value, Y in this example. As will be appreciated, the steps represented by equations 37, 38, and 39 may be implemented by the bin update logic block 644. Further, in one embodiment, multiple luma thresholds may be set to define luma intervals. By way of example, four luma thresholds (Ythd0-Ythd3) may define five luma intervals, with Count values Count0-Count4 defined for each interval. For instance, Count0-Count4 may be selected based upon the luma thresholds as follows (e.g., by pixel condition logic 642):
if (Y <= Ythd0)         (40)
    Count = Count0
else if (Y <= Ythd1)
    Count = Count1
else if (Y <= Ythd2)
    Count = Count2
else if (Y <= Ythd3)
    Count = Count3
else
    Count = Count4
With the foregoing in mind, Figure 53 illustrates the color histogram with the scaling and offsets for both C1 and C2 set to 0. The divisions within the CbCr space represent each of the 32×32 bins (1024 bins in total). Figure 54 provides an example of zooming and panning within the 2D color histogram for additional precision, in which the rectangular area 646 in which the small rectangles reside specifies the location of the 32×32 bins.
At the start of a frame of image data, the bin values are initialized to 0. For each pixel going into the 2D color histogram 636, the bin corresponding to the matching C1C2 value is incremented by the determined Count value (Count0-Count4) which, as discussed above, may be based upon the luma value. For each bin within the 2D histogram 636, the total pixel count is reported as part of the collected statistics data (e.g., STATS0). In one embodiment, the total pixel count for each bin may have a resolution of 22 bits, whereby an allocation of internal memory equal to 1024×22 bits is provided.
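To make the bin update concrete, the following is a minimal C sketch combining equations 37-40, assuming 32×32 bins and the five luma-weighted Count values described above; the array and parameter names are illustrative:

    #include <string.h>

    static unsigned int StatsCbCrHist[32][32];

    void hist2d_reset(void)
    {
        memset(StatsCbCrHist, 0, sizeof(StatsCbCrHist)); /* bins start at 0 each frame */
    }

    void hist2d_update(int Y, int C1, int C2,
                       int C1_offset, int C1_scale, int C2_offset, int C2_scale,
                       const int Ythd[4], const unsigned int CountVal[5])
    {
        /* Equations 37 and 38: offset, then shift to select the bin. */
        int C1_index = (C1 - C1_offset) >> (3 - C1_scale);
        int C2_index = (C2 - C2_offset) >> (3 - C2_scale);

        /* Equation 40: pick the Count weight from the luma interval. */
        unsigned int Count;
        if      (Y <= Ythd[0]) Count = CountVal[0];
        else if (Y <= Ythd[1]) Count = CountVal[1];
        else if (Y <= Ythd[2]) Count = CountVal[2];
        else if (Y <= Ythd[3]) Count = CountVal[3];
        else                   Count = CountVal[4];

        /* Equation 39: only bins inside [0, 31] are incremented. */
        if (C1_index >= 0 && C1_index <= 31 && C2_index >= 0 && C2_index <= 31)
            StatsCbCrHist[C2_index & 31][C1_index & 31] += Count;
    }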
Referring back to Figure 51, the Bayer RGB pixels (signal 610), sRGBlinear pixels (signal 618), sRGB pixels (signal 622), and YC1C2 (e.g., YCbCr) pixels (signal 626) are provided to a set of pixel filters 650a-c, whereby RGB, sRGBlinear, sRGB, YC1C2, or camYC1C2 sums may be conditionally accumulated based upon either camYC1C2 or YC1C2 pixel conditions, as defined by each pixel filter 650. That is, the Y, C1, and C2 values from either the output of the non-linear color space conversion (YC1C2) or the output of the camera color space conversion (camYC1C2) may be used to conditionally select the RGB, sRGBlinear, sRGB, or YC1C2 values to accumulate. While the present embodiment depicts the 3A statistics logic 468 as having 8 pixel filters (PF0-PF7), it should be understood that any number of pixel filters may be provided.
Figure 55 shows a functional logic diagram depicting an embodiment of the pixel filters, specifically PF0 (650a) and PF1 (650b) from Figure 51. As shown, each pixel filter 650 includes selection logic that receives the Bayer RGB pixels, the sRGBlinear pixels, the sRGB pixels, and one of the YC1C2 or camYC1C2 pixels, as selected by further selection logic 654. By way of example, the selection logic 652 and 654 may be implemented using multiplexers or any other suitable logic. The selection logic 654 may select either YC1C2 or camYC1C2. The selection may be made in response to a control signal, which may be supplied by the main control logic 84 of the image processing circuitry 32 (Fig. 7) and/or set by software. Next, the pixel filter 650 may use logic 656 to evaluate the YC1C2 pixels (e.g., non-linear or camera) selected by the selection logic 654 against a pixel condition. Each pixel filter 650 may use the selection circuit 652 to select one of the following: the Bayer RGB pixels, the sRGBlinear pixels, the sRGB pixels, or either YC1C2 or camYC1C2, depending upon the output of the selection circuit 654.
Using the results of this evaluation, the pixels selected by the selection logic 652 may be accumulated. In one embodiment, the pixel condition may be defined using thresholds C1_min, C1_max, C2_min, and C2_max, as shown in graph 570 of Figure 49. A pixel is included in the statistics if it satisfies the following conditions:
1. C1_min <= C1 <= C1_max
2. C2_min <= C2 <= C2_max
3. abs((C2_delta * C1) - (C1_delta * C2) + Offset) < distance_max
4. Ymin <= Y <= Ymax
Referring to Figure 56, in one embodiment, the point 662 represents the values (C2, C1) corresponding to the current YC1C2 pixel data selected by logic 654. C1_delta may be determined as the difference between C1_1 and C1_0, and C2_delta may be determined as the difference between C2_1 and C2_0. As shown in Figure 56, the points (C1_0, C2_0) and (C1_1, C2_1) may define the minimum and maximum boundaries for C1 and C2. The Offset may be determined by multiplying C1_delta by the value C2_intercept, the value at which the line 664 intercepts the C2 axis. Thus, assuming Y, C1, and C2 satisfy the minimum and maximum boundary conditions, the selected pixels (Bayer RGB, sRGBlinear, sRGB, and YC1C2/camYC1C2) are included in the accumulated sum if the distance 670 of the pixel from line 664 is less than distance_max 672, which may be the distance 670, in pixels, from the line multiplied by a normalization factor:
distance_max = distance * sqrt(C1_delta^2 + C2_delta^2)
In the present embodiment, distance, C1_delta, and C2_delta may have a range of -255 to 255. Thus, distance_max 672 may be represented by 17 bits. The points (C1_0, C2_0) and (C1_1, C2_1), as well as the parameters for determining distance_max (e.g., the normalization factor(s)), may be provided as part of the pixel condition logic 656 within each pixel filter 650. As will be appreciated, the pixel conditions 656 may be configurable/programmable.
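As an illustration of the evaluation performed by the pixel condition logic 656, below is a minimal C sketch of conditions 1-4 above, assuming distance_max has been precomputed by configuring software as distance * sqrt(C1_delta^2 + C2_delta^2); the structure and function names are illustrative:

    #include <stdlib.h>

    typedef struct {
        int C1_min, C1_max, C2_min, C2_max;   /* box bounds (conditions 1 and 2)  */
        int Ymin, Ymax;                       /* luma bounds (condition 4)        */
        int C1_delta, C2_delta;               /* from (C1_0,C2_0) and (C1_1,C2_1) */
        int Offset;                           /* C1_delta * C2_intercept          */
        int distance_max;                     /* scaled line-distance threshold   */
    } PixelCondition;

    int pixel_qualifies(const PixelCondition *pc, int Y, int C1, int C2)
    {
        if (C1 < pc->C1_min || C1 > pc->C1_max) return 0;      /* condition 1 */
        if (C2 < pc->C2_min || C2 > pc->C2_max) return 0;      /* condition 2 */
        if (Y  < pc->Ymin   || Y  > pc->Ymax)   return 0;      /* condition 4 */
        /* Condition 3: normalized distance of (C1, C2) from line 664. */
        return abs((pc->C2_delta * C1) - (pc->C1_delta * C2) + pc->Offset)
               < pc->distance_max;
    }

A pixel for which such an evaluation returns true would then contribute to the accumulated sums and the pixel count discussed below.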
While the example shown in Figure 56 depicts a pixel condition based upon two sets of points, (C1_0, C2_0) and (C1_1, C2_1), in further embodiments certain pixel filters may define more complex shapes and regions upon which the pixel condition is determined. For instance, Figure 57 shows an embodiment in which a pixel filter 650 may define a five-sided polygon 673 using the points (C1_0, C2_0), (C1_1, C2_1), (C1_2, C2_2), (C1_3, C2_3), and (C1_4, C2_4). Each side 674a-674e may define a line condition. However, unlike the case shown in Figure 56 (in which the pixel may be on either side of line 664 as long as distance_max is satisfied), the condition may be that the pixel (C1, C2) must be located on the side of each line 674a-674e such that it is enclosed by the polygon 673. Thus, the pixel (C1, C2) is counted when the intersection of the multiple line conditions is met. For instance, in Figure 57, pixel 675a satisfies such an intersection. However, pixel 675b fails the line condition of line 674d and, therefore, would not be counted in the statistics when processed by a pixel filter configured in this manner.
In a further embodiment, shown in Figure 58, a pixel condition may be determined based upon overlapping shapes. For instance, Figure 58 shows how a pixel filter 650 may have a pixel condition defined using two overlapping shapes, here a rectangle 676a defined by the points (C1_0, C2_0), (C1_1, C2_1), (C1_2, C2_2), and (C1_3, C2_3), and a rectangle 676b defined by the points (C1_4, C2_4), (C1_5, C2_5), (C1_6, C2_6), and (C1_7, C2_7). In this example, a pixel (C1, C2) may satisfy the line conditions defined by such a pixel filter by being enclosed within the region collectively bounded by the shapes 676a and 676b (e.g., by satisfying the line conditions of every line defining both shapes). For instance, in Figure 58, pixel 678a satisfies these conditions. However, pixel 678b fails these conditions (specifically, with respect to line 679a of rectangle 676a and line 679b of rectangle 676b) and, therefore, would not be counted in the statistics when processed by a pixel filter configured in this manner.
For each pixel filter 650, qualifying pixels are identified based upon the pixel condition defined by logic 656 and, for the qualifying pixel values, the following statistics may be collected by the 3A statistics engine 468: a 32-bit sum, (Rsum, Gsum, Bsum) or (sRlinear_sum, sGlinear_sum, sBlinear_sum), or (sRsum, sGsum, sBsum) or (Ysum, C1sum, C2sum); and a 24-bit pixel count, Count, which represents the number of pixels that were included in the statistic sums. In one embodiment, software may use the sums to generate an average within a tile or window.
When the camYC1C2 pixels are selected by the logic 652 of a pixel filter 650, color thresholds may be performed on scaled chroma values. For instance, since the chroma intensity at the white points increases with the luma value, the use of chroma scaled with the luma value in the pixel filter 650 may, in some instances, provide results with improved consistency. For example, minimum and maximum luma conditions may allow the filter to ignore dark and/or bright areas. If the pixel satisfies the YC1C2 pixel condition, the RGB, sRGBlinear, sRGB, or YC1C2 values are accumulated. The selection of the pixel values by the selection logic 652 may depend upon the type of information needed. For instance, for white balance, RGB or sRGBlinear pixels are typically selected. For detecting specific conditions, such as sky, grass, skin tones, and so forth, a set of YCC or sRGB pixels may be more suitable.
In the present embodiment, 8 sets of pixel conditions may be defined, one associated with each of the pixel filters PF0-PF7 650. Some pixel conditions may be defined to carve out an area in the C1-C2 color space (Figure 49) where the white point is likely to be. This may be determined or estimated based upon the current illuminant. Then, the accumulated RGB sums may be used to determine the current white point based upon the R/G and/or B/G ratios for white balance adjustment. Further, some pixel conditions may be defined or adapted to perform scene analysis and classification. For example, some pixel filters 650 and windows/tiles may be utilized to detect conditions such as blue sky in the top portion of an image frame, or green grass in the bottom portion of an image frame. This information can also be used to adjust white balance. Additionally, some pixel conditions may be defined or adapted to detect skin tones. For such filters, tiles may be used to detect areas of the image frame that have skin tones. By identifying these areas, the quality of skin tones may be improved by, for example, reducing the amount of noise filtering in the skin tone areas and/or decreasing the quantization in the video compression in those areas.
The 3A statistics logic 468 may also provide for the collection of luma data. For instance, the luma value camY from the camera color space conversion (camYC1C2) may be used for accumulating luma sum statistics. In one embodiment, the following luma information may be collected by the 3A statistics logic 468:
Ysum:       sum of camY
cond(Ysum): sum of camY satisfying the condition Ymin <= camY < Ymax
Ycount1:    count of pixels where camY < Ymin
Ycount2:    count of pixels where camY >= Ymax
Here, Ycount1 may represent the number of underexposed pixels and Ycount2 may represent the number of overexposed pixels. This may be used to determine whether the image is overexposed or underexposed. For instance, if the pixels do not saturate, the sum of camY (Ysum) may indicate the average luma in the scene, which may be used to achieve a target AE exposure. For example, in one embodiment, the average luma may be determined by dividing Ysum by the number of pixels. Further, by knowing the luma/AE statistics for the tile statistics and window positions, AE metering may be performed. For instance, depending upon the image scene, it may be desirable to weight the AE statistics at a center window more heavily than those at the edges of the image, such as may be the case for a portrait.
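By way of illustration only, the following C sketch shows how firmware might combine these luma statistics to gauge exposure; the threshold scheme, limits, and names are assumptions and do not represent a specific AE algorithm:

    // Hypothetical exposure evaluation from Ysum, Ycount1, and Ycount2.
    typedef enum { EXPOSURE_UNDER, EXPOSURE_OK, EXPOSURE_OVER } Exposure;

    Exposure evaluate_ae(long long Ysum, long long pixel_count,
                         long long Ycount1, long long Ycount2,
                         long long clip_limit, int target_luma, int tol)
    {
        long long avg = Ysum / pixel_count;   /* average luma in the scene */
        if (Ycount2 > clip_limit || avg > target_luma + tol) return EXPOSURE_OVER;
        if (Ycount1 > clip_limit || avg < target_luma - tol) return EXPOSURE_UNDER;
        return EXPOSURE_OK;
    }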
In the presently illustrated embodiment, the 3A statistics collection logic may be configured to collect statistics in tiles and windows. In the illustrated configuration, one window may be defined for the tile statistics 674. The window may be specified with a column start and width, and a row start and height. In one embodiment, the window position and size may be selected as a multiple of 4 pixels and, within this window, statistics are gathered in tiles of arbitrary sizes. By way of example, all tiles in the window may be selected such that they have the same size. The tile size may be set independently for the horizontal and vertical directions and, in one embodiment, a maximum limit on the number of horizontal tiles may be set (e.g., a limit of 128 horizontal tiles). Further, in one embodiment, the minimum tile size may be set to, for example, 8 pixels wide by 4 pixels high. Below are some examples of tile configurations, based upon different video/imaging modes and standards, to obtain a window of 16×16 tiles:
VGA 640×480:    tile interval 40×30 pixels
HD 1280×720:    tile interval 80×45 pixels
HD 1920×1080:   tile interval 120×68 pixels
5MP 2592×1944:  tile interval 162×122 pixels
8MP 3280×2464:  tile interval 205×154 pixels
With regard to the present embodiment, 4 of the 8 available pixel filters 650 (PF0-PF7) may be selected for the tile statistics 674. For each tile, the following statistics may be collected:
(Rsum0, Gsum0, Bsum0) or (sRlinear_sum0, sGlinear_sum0, sBlinear_sum0), or (sRsum0, sGsum0, sBsum0) or (Ysum0, C1sum0, C2sum0), Count0
(Rsum1, Gsum1, Bsum1) or (sRlinear_sum1, sGlinear_sum1, sBlinear_sum1), or (sRsum1, sGsum1, sBsum1) or (Ysum1, C1sum1, C2sum1), Count1
(Rsum2, Gsum2, Bsum2) or (sRlinear_sum2, sGlinear_sum2, sBlinear_sum2), or (sRsum2, sGsum2, sBsum2) or (Ysum2, C1sum2, C2sum2), Count2
(Rsum3, Gsum3, Bsum3) or (sRlinear_sum3, sGlinear_sum3, sBlinear_sum3), or (sRsum3, sGsum3, sBsum3) or (Ysum3, C1sum3, C2sum3), Count3
Ysum, cond(Ysum), Ycount1, Ycount2 (from camY)
In the statistics listed above, Count0-3 represent the counts of pixels satisfying the pixel conditions corresponding to the four selected pixel filters. For example, if the pixel filters PF0, PF1, PF5, and PF6 are selected as the four pixel filters for a particular tile or window, then the expressions provided above may correspond to the Count values and sums of the pixel data (e.g., Bayer RGB, sRGBlinear, sRGB, YC1C2, camYC1C2) selected (e.g., by selection logic 652) for those four filters. Additionally, the Count values may be used to normalize the statistics (e.g., by dividing the color sums by the corresponding Count values). As shown, depending at least partially upon the type of statistics needed, the selected pixel filters 650 may be configured to select between either Bayer RGB, sRGBlinear, or sRGB pixel data, or YC1C2 (non-linear or camera color space converted, depending upon the selection by logic 654) pixel data, and determine color sum statistics for the selected pixel data. Additionally, as discussed above, the luma value camY from the camera color space conversion (camYC1C2) is also collected for luma sum information for auto-exposure (AE) statistics.
Additionally, the 3A statistics logic 468 may also be configured to collect statistics 676 for multiple windows. For instance, in one embodiment, up to eight floating windows may be used, with any rectangular region having a multiple of 4 pixels in each dimension (e.g., height × width), up to a maximum size corresponding to the size of the image frame. However, the positions of the windows are not necessarily restricted to multiples of 4 pixels. For instance, the windows may overlap one another.
In the present embodiment, four pixel filters 650 may be selected from the eight available pixel filters (PF0-PF7) for each window. Statistics for each window may be collected in the same manner as for tiles, discussed above. Thus, for each window, the following statistics 676 may be collected:
(Rsum0, Gsum0, Bsum0) or (sRlinear_sum0, sGlinear_sum0, sBlinear_sum0), or (sRsum0, sGsum0, sBsum0) or (Ysum0, C1sum0, C2sum0), Count0
(Rsum1, Gsum1, Bsum1) or (sRlinear_sum1, sGlinear_sum1, sBlinear_sum1), or (sRsum1, sGsum1, sBsum1) or (Ysum1, C1sum1, C2sum1), Count1
(Rsum2, Gsum2, Bsum2) or (sRlinear_sum2, sGlinear_sum2, sBlinear_sum2), or (sRsum2, sGsum2, sBsum2) or (Ysum2, C1sum2, C2sum2), Count2
(Rsum3, Gsum3, Bsum3) or (sRlinear_sum3, sGlinear_sum3, sBlinear_sum3), or (sRsum3, sGsum3, sBsum3) or (Ysum3, C1sum3, C2sum3), Count3
Ysum, cond(Ysum), Ycount1, Ycount2 (from camY)
In the statistics listed above, Count0-3 represent the counts of pixels satisfying the pixel conditions corresponding to the four pixel filters selected for a particular window. The four active pixel filters may be selected independently for each window from the eight available pixel filters PF0-PF7. Additionally, one of the sets of statistics may be collected using the pixel filters or the camY luma statistics. In one embodiment, the window statistics collected for AWB and AE may be mapped to one or more registers.
Still referring to Figure 51, the 3A statistics logic 468 may also be configured to acquire luma row sum statistics 678 for one window using the luma value camY from the camera color space conversion. This information may be used to detect and compensate for flicker. Flicker is produced by a periodic variation in some fluorescent and incandescent light sources, typically caused by the AC power signal. For example, referring to Figure 59, a graph illustrating how variation in a light source may cause flicker is shown. Flicker detection may thus be used to detect the frequency of the AC power supply used for the light source (e.g., 50 Hz or 60 Hz). Once the frequency is known, flicker may be avoided by setting the image sensor's integration time to an integer multiple of the flicker period.
To detect flicker, the camera luma camY is accumulated over each row. Due to the down-sampling of the incoming Bayer data, each camY value may correspond to 4 rows of the original raw image data. Control logic and/or firmware may then perform a frequency analysis of the row averages or, more reliably, of the row average differences over consecutive frames to determine the frequency of the AC power supply associated with a particular light source. For instance, with regard to Figure 59, the integration times for the image sensor may be based upon times t1, t2, t3, and t4 (e.g., such that integration occurs at times corresponding to when a varying light source is generally at the same brightness level).
In one embodiment, a luma row sum window may be specified, and the statistics 678 are reported for pixels within that window. By way of example, for 1080p HD video capture, assuming a window of 1024 pixels high, 256 luma row sums are generated (e.g., one sum for every 4 rows due to the down-scaling by logic 586), and each accumulated value may be expressed with 18 bits (e.g., 8-bit camY values for up to 1024 samples per row).
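A minimal C sketch of this row-sum collection is shown below; the frequency analysis itself is not shown, and the buffer handling is illustrative:

    // Accumulate camY per (down-sampled) row; the frame-to-frame differences
    // of these sums would feed a frequency analysis to find 50/60 Hz flicker.
    void collect_luma_row_sums(const int *camY, int W, int rows,
                               unsigned int *row_sum)
    {
        for (int j = 0; j < rows; j++) {
            unsigned int s = 0;
            for (int i = 0; i < W; i++)
                s += camY[j * W + i];      /* each camY row covers ~4 raw rows */
            row_sum[j] = s;                /* e.g., 256 sums for a 1024-high window */
        }
    }

    void row_sum_diffs(const unsigned int *cur, const unsigned int *prev,
                       int rows, int *diff)
    {
        for (int j = 0; j < rows; j++)
            diff[j] = (int)cur[j] - (int)prev[j];   /* input to frequency analysis */
    }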
The 3A statistics collection logic 468 of Figure 51 may also provide for the collection of auto-focus (AF) statistics 682 by way of the auto-focus statistics logic 680. A functional block diagram showing an embodiment of the AF statistics logic 680 in more detail is provided in Figure 60. As shown, the AF statistics logic 680 may include a horizontal filter 684 and an edge detector 686, which are applied to the original Bayer RGB (not down-sampled), two 3×3 filters 688 on Y from Bayer, and two 3×3 filters 690 on camY. In general, the horizontal filter 684 provides fine-resolution statistics per color component, the 3×3 filters 688 may provide fine-resolution statistics on BayerY (Bayer RGB with a 3×1 transform (logic 687) applied), and the 3×3 filters 690 may provide coarser two-dimensional statistics on camY (since camY is obtained using down-scaled Bayer RGB data, i.e., logic 630). Further, the logic 680 may include logic 704 for decimating the Bayer RGB data (e.g., 2×2 averaging, 4×4 averaging, etc.), and the decimated Bayer RGB data 705 may be filtered using 3×3 filters 706 to produce a filtered output 708 of the decimated Bayer RGB data. The present embodiment provides for 16 windows of statistics. At the raw frame boundaries, edge pixels are replicated for the filters of the AF statistics logic 680. The various components of the AF statistics logic 680 are described in further detail below.
First, the horizontal edge detection process includes applying the horizontal filter 684 to each color component (R, Gr, Gb, B), followed by an optional edge detector 686 on each color component. Thus, depending upon imaging conditions, this configuration allows the AF statistics logic 680 to be set up as a high-pass filter with no edge detection (e.g., edge detector disabled) or, alternatively, as a low-pass filter followed by an edge detector (e.g., edge detector enabled). For instance, in low light conditions, the horizontal filter 684 may be more susceptible to noise and, therefore, the logic 680 may configure the horizontal filter as a low-pass filter followed by the enabled edge detector 686. As shown, a control signal 694 may enable or disable the edge detector 686. The statistics from the different color channels are used to determine the direction of focus to improve sharpness, since different colors may focus at different depths. In particular, the AF statistics logic 680 may provide for techniques to enable auto-focus control using a combination of coarse and fine adjustments (e.g., to the focal length of the lens). Embodiments of such techniques are described further below.
In one embodiment, the horizontal filter may be a 7-tap filter, and may be defined as follows in equations 41 and 42:
out(i) = (af_horzfilt_coeff[0] * (in(i-3) + in(i+3)) +
          af_horzfilt_coeff[1] * (in(i-2) + in(i+2)) +    (41)
          af_horzfilt_coeff[2] * (in(i-1) + in(i+1)) +
          af_horzfilt_coeff[3] * in(i))
out(i) = max(-255, min(255, out(i)))                      (42)
Here, each coefficient af_horzfilt_coeff[0:3] may be in the range [-2, 2], and i represents the input pixel index for R, Gr, Gb, or B. The filtered output out(i) may be clipped between a minimum value of -255 and a maximum value of 255 (equation 42). The filter coefficients may be defined independently per color component.
The optional edge detector 686 may follow the output of the horizontal filter 684. In one embodiment, the edge detector 686 may be defined as:
edge(i) = abs(-2*out(i-1) + 2*out(i+1)) + abs(-out(i-2) + out(i+2))    (43)
edge(i) = max(0, min(255, edge(i)))                                    (44)
Thus, when enabled, the edge detector 686 may output a value based upon the two pixels on each side of the current input pixel i, as shown in equation 43. The result may be clipped to an 8-bit value between 0 and 255, as shown in equation 44.
Depending on whether an edge is detected, the final output of the pixel filter (e.g., filter 684 and detector 686) may be selected as either the output of the horizontal filter 684 or the output of the edge detector 686. For instance, as shown in equation 45, the output of the edge detector 686 may be edge(i) if an edge is detected, or may be the absolute value of the horizontal filter output out(i) if no edge is detected:
edge(i) = (af_horzfilt_edge_detected) ? edge(i) : abs(out(i))    (45)
For each window, the accumulated values edge_sum[R, Gr, Gb, B] may be selected to be either (1) the sum of edge(j, i) for each pixel over the window, or (2) the maximum value max(edge) of edge(i) across each line in the window, summed over the lines in the window. Assuming a raw frame size of 4096×4096 pixels, the number of bits required to store the maximum values of edge_sum[R, Gr, Gb, B] is 30 (e.g., 8 bits per pixel, plus 22 bits for a window covering the entire raw image frame).
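The per-pixel computation of equations 41-45 may be sketched in C as follows. This sketch assumes the interpretation that equation 45 selects the edge detector output when the detector is enabled; border replication is omitted, and the function names are illustrative:

    #include <stdlib.h>

    static int clamp(int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); }

    /* Equations 41 and 42: symmetric 7-tap filter, clipped to [-255, 255]. */
    static int horz_filter(const int *in, int i, const int coeff[4])
    {
        int out = coeff[0] * (in[i-3] + in[i+3]) +
                  coeff[1] * (in[i-2] + in[i+2]) +
                  coeff[2] * (in[i-1] + in[i+1]) +
                  coeff[3] *  in[i];
        return clamp(out, -255, 255);
    }

    /* Equations 43-45; the caller must keep i at least 5 pixels inside the row. */
    int af_horz_edge(const int *in, int i, const int coeff[4], int edge_detect_enabled)
    {
        int out[5];                              /* out(i-2) .. out(i+2) */
        for (int k = -2; k <= 2; k++)
            out[k + 2] = horz_filter(in, i + k, coeff);

        if (!edge_detect_enabled)
            return abs(out[2]);                  /* equation 45: abs(out(i)) */

        int edge = abs(-2 * out[1] + 2 * out[3]) +      /* equation 43 */
                   abs(-out[0] + out[4]);
        return clamp(edge, 0, 255);                     /* equation 44 */
    }

The returned values would then be accumulated per window as edge_sum[R, Gr, Gb, B], in either the sum or max-per-line mode described above.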
As discussed above, the 3×3 filters 690 for camY luma may include two programmable 3×3 filters, referred to as F0 and F1, which are both applied to camY. The result of the filters 690 goes to either a squared function or an absolute value function. The result is accumulated over a given AF window for both 3×3 filters F0 and F1 to generate a luma edge value. In one embodiment, the luma edge values at each camY pixel may be defined as follows:
edgecamY_FX(j,i) = FX * camY                                                           (46)
                 = FX(0,0)*camY(j-1,i-1) + FX(0,1)*camY(j-1,i) + FX(0,2)*camY(j-1,i+1) +
                   FX(1,0)*camY(j,i-1)   + FX(1,1)*camY(j,i)   + FX(1,2)*camY(j,i+1)   +
                   FX(2,0)*camY(j+1,i-1) + FX(2,1)*camY(j+1,i) + FX(2,2)*camY(j+1,i+1)
edgecamY_FX(j,i) = f(max(-255, min(255, edgecamY_FX(j,i))))                            (47)
f(a) = a^2 or abs(a)
Here, FX represents the 3×3 programmable filters F0 and F1, with signed coefficients in the range [-4, 4]. The indices j and i represent pixel locations in the camY image. As discussed above, the filters on camY may provide coarse-resolution statistics, since camY is derived using down-scaled (e.g., 4×4 to 1) Bayer RGB data. For instance, in one embodiment, the filters F0 and F1 may be set using a Scharr operator, which offers improved rotational symmetry over a Sobel operator, an example of which is shown below:
F0 = [ -3    0    3 ]        F1 = [ -3   -10   -3 ]
     [ -10   0   10 ]             [  0     0    0 ]
     [ -3    0    3 ]             [  3    10    3 ]
For each window, the accumulated values 700, edgecamY_FX_sum (where FX = F0 and F1), may be selected to be either (1) the sum of edgecamY_FX(j, i) for each pixel over the window, or (2) the maximum value of edgecamY_FX(j) across each line in the window, summed over the lines in the window. When f(a) is set to a^2 to provide "peakier" statistics with a finer resolution, edgecamY_FX_sum may saturate to a 32-bit value. To avoid saturation, a maximum window size X*Y in raw frame pixels may be set such that it does not exceed a total of 1024×1024 pixels (e.g., X*Y <= 1048576 pixels). As noted above, f(a) may also be set to an absolute value to provide more linear statistics.
The AF 3×3 filters 688 on Bayer Y may be defined in a similar manner as the 3×3 filters on camY, but they are applied to luma values Y generated from a Bayer quad (2×2 pixels). First, 8-bit Bayer RGB values are converted to Y with programmable coefficients in the range [0, 4] to generate a white-balanced Y value, as shown below in equation 48:
bayerY = max(0, min(255, bayerY_Coeff[0]*R + bayerY_Coeff[1]*(Gr+Gb)/2 +    (48)
                         bayerY_Coeff[2]*B))
Like the filters 690 on camY, the 3×3 filters 688 for Bayer Y luma may include two programmable 3×3 filters, referred to as F0 and F1, which are both applied to Bayer Y. The result of the filters 688 goes to either a squared function or an absolute value function. The result is accumulated over a given AF window for both 3×3 filters F0 and F1 to generate a luma edge value. In one embodiment, the luma edge values at each Bayer Y pixel may be defined as follows:
edgebayerY_FX(j,i) = FX * bayerY                                                             (49)
                   = FX(0,0)*bayerY(j-1,i-1) + FX(0,1)*bayerY(j-1,i) + FX(0,2)*bayerY(j-1,i+1) +
                     FX(1,0)*bayerY(j,i-1)   + FX(1,1)*bayerY(j,i)   + FX(1,2)*bayerY(j,i+1)   +
                     FX(2,0)*bayerY(j+1,i-1) + FX(2,1)*bayerY(j+1,i) + FX(2,2)*bayerY(j+1,i+1)
edgebayerY_FX(j,i) = f(max(-255, min(255, edgebayerY_FX(j,i))))                              (50)
f(a) = a^2 or abs(a)
Here, FX represents the 3×3 programmable filters F0 and F1, with signed coefficients in the range [-4, 4]. The indices j and i represent pixel locations in the Bayer Y image. As discussed above, the filters on Bayer Y may provide fine-resolution statistics, since the Bayer RGB signal received by the AF logic 680 is not decimated. By way of example only, the filters F0 and F1 of the filter logic 688 may be set using one of the following filter configurations:
[ -1   -1   -1 ]        [ -6   10    6 ]        [  0   -1    0 ]
[ -1    8   -1 ]        [ 10    0  -10 ]        [ -1    2    0 ]
[ -1   -1   -1 ]        [  6  -10   -6 ]        [  0    0    0 ]
For each window, the accumulated values 702, edgebayerY_FX_sum (where FX = F0 and F1), may be selected to be either (1) the sum of edgebayerY_FX(j, i) for each pixel over the window, or (2) the maximum value of edgebayerY_FX(j) across each line in the window, summed over the lines in the window. Here, edgebayerY_FX_sum may saturate to 32 bits when f(a) is set to a^2. Thus, to avoid saturation, the maximum window size X*Y in raw frame pixels should be set such that it does not exceed a total of 512×512 pixels (e.g., X*Y <= 262144). As noted above, setting f(a) to a^2 may provide peakier statistics, while setting f(a) to abs(a) may provide more linear statistics.
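Since the camY and BayerY paths share the same 3×3 edge-filter structure (equations 46/47 and 49/50), a single C sketch can illustrate both. This is a minimal sketch in which the window must stay at least one pixel inside the luma plane, and the function name is illustrative:

    #include <stdlib.h>

    // Accumulates edgecamY_FX_sum or edgebayerY_FX_sum (sum mode) for one window.
    long long edge_fx_sum(const int *luma, int W,
                          const int FX[3][3], int squared,
                          int x0, int y0, int win_w, int win_h)
    {
        long long sum = 0;
        for (int j = y0; j < y0 + win_h; j++) {
            for (int i = x0; i < x0 + win_w; i++) {
                int acc = 0;                     /* equations 46/49: 3x3 convolution */
                for (int dj = -1; dj <= 1; dj++)
                    for (int di = -1; di <= 1; di++)
                        acc += FX[dj + 1][di + 1] * luma[(j + dj) * W + (i + di)];
                if (acc < -255) acc = -255;      /* equations 47/50: clip, then f(a) */
                if (acc >  255) acc =  255;
                sum += squared ? (long long)acc * acc : llabs(acc);
            }
        }
        return sum;
    }

Passing squared = 1 models f(a) = a^2 (the peakier statistic), while squared = 0 models f(a) = abs(a) (the more linear statistic).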
As discussed above, the AF statistics 682 are collected for 16 windows. The windows may be any rectangular area with each dimension being a multiple of 4 pixels. Because each filtering logic 688 and 690 includes two filters, in some instances one filter may be used for normalization over 4 pixels, and may be configured to filter in both the vertical and horizontal directions. Further, in some embodiments, the AF logic 680 may normalize the AF statistics by brightness. This may be accomplished by setting one or more of the filters of the logic blocks 688 and 690 as bypass filters. In certain embodiments, the window positions may be defined as multiples of 4 pixels, and the windows are permitted to overlap. For instance, one window may be used to acquire normalization values, while another window may be used for additional statistics, such as variance, as discussed below. In one embodiment, the AF filters (e.g., 684, 688, 690) may not implement pixel replication at the edges of the image frame and, therefore, in order for the AF filters to use all valid pixels, the AF windows may each be set such that they are at least 4 pixels from the top edge of the frame, at least 8 pixels from the bottom edge of the frame, and at least 12 pixels from the left/right edges of the frame. In the illustrated embodiment, the following statistics may be collected and reported for each window:
32-bit edgeGr_sum for Gr
32-bit edgeR_sum for R
32-bit edgeB_sum for B
32-bit edgeGb_sum for Gb
32-bit edgebayerY_F0_sum for Y from Bayer, filter 0 (F0)
32-bit edgebayerY_F1_sum for Y from Bayer, filter 1 (F1)
32-bit edgecamY_F0_sum for camY, filter 0 (F0)
32-bit edgecamY_F1_sum for camY, filter 1 (F1)
In such an embodiment, the memory required for storing the AF statistics 682 may be 16 (windows) multiplied by 8 (Gr, R, B, Gb, bayerY_F0, bayerY_F1, camY_F0, camY_F1) multiplied by 32 bits.
Thus, in one embodiment, the accumulated value per window may be selected from among: the filter output (which may be configured as the default setting), the input pixel, or the input pixel squared. The selection may be made for each of the 16 AF windows, and may apply to all 8 of the AF statistics (listed above) in a given window. This may be used to normalize the AF score between two overlapping windows, one of which is configured to collect the filter output and one of which is configured to collect the input pixel sum. Additionally, to calculate the pixel variance in the case of two overlapping windows, one window may be configured to collect the input pixel sum and the other to collect the input pixel squared sum, thus providing a variance that may be calculated as:
Variance = avg(pixel^2) - (avg(pixel))^2
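By way of illustration, a minimal C sketch of this derivation, given one window's pixel sum and the overlapping window's pixel squared sum, might be:

    // Variance from the two overlapping windows' accumulations.
    double window_variance(long long pixel_sum, long long pixel_sq_sum, long long n)
    {
        double avg_pixel    = (double)pixel_sum / (double)n;
        double avg_pixel_sq = (double)pixel_sq_sum / (double)n;
        return avg_pixel_sq - avg_pixel * avg_pixel; /* avg(pixel^2) - (avg(pixel))^2 */
    }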
Using the AF statistics, the ISP control logic 84 (Fig. 7) may be configured to adjust the focal length of the lens of an imaging device (e.g., 30) using a series of focal length adjustments based upon coarse and fine auto-focus "scores" to bring the image into focus. As discussed above, the 3×3 filters 690 on camY may provide coarse statistics, while the horizontal filter 684 and edge detector 686 may provide comparatively finer statistics per color component, and the 3×3 filters 688 on BayerY may provide fine statistics on BayerY. Further, the 3×3 filters 706 on the decimated Bayer RGB signal 705 may provide coarse statistics for each color channel. As discussed further below, AF scores may be calculated based upon filter output values for a particular input signal (e.g., the sum of the filter outputs F0 and F1 for camY, BayerY, or decimated Bayer RGB, or based upon horizontal/edge detector outputs, etc.).
Figure 61 shows a graph 710 depicting curves 712 and 714 that represent coarse and fine AF scores, respectively. As shown, the coarse AF scores based upon the coarse statistics may have a more linear response across the focal distance of the lens. Thus, at any focal position, a lens movement may generate a change in the auto-focus score, which may be used to detect whether the image is becoming more in focus or more out of focus. For instance, an increase in the coarse AF score after a lens adjustment may indicate that the focal length is being adjusted in the correct direction (e.g., toward the optimal focal position).
However, as the optimal focal position is approached, the change in the coarse AF score for smaller lens adjustment steps may decrease, making it difficult to discern the correct direction of the focus adjustment. For example, as shown in graph 710, the change in the coarse AF score between coarse positions (CP) CP1 and CP2 is represented by ΔC12, which shows a coarse increase from CP1 to CP2. However, as shown, the change ΔC34 in the coarse AF score from CP3 to CP4, which passes through the optimal focal position (OFP), though still increasing, is relatively smaller. It should be understood that the positions CP1-CP6 along the focal length L are not meant to necessarily correspond to the step sizes taken by the auto-focus logic along the focal length. That is, additional steps not shown may be taken between each coarse position. The illustrated positions CP1-CP6 are merely intended to show how the change in the coarse AF score may gradually decrease as the focal position approaches the OFP.
Once the approximate position of the OFP is determined (e.g., based upon the coarse AF scores shown in Figure 61, the approximate position may be between CP3 and CP5), the fine AF score values, represented by curve 714, may be evaluated to refine the focal position. For instance, the fine AF scores may be flatter when the image is out of focus, so that a large lens positional move does not cause a large change in the fine AF score. However, as the focal position approaches the optimal focal position (OFP), the fine AF score may change sharply with small positional adjustments. Thus, by locating a peak or apex 715 on the fine AF score curve 714, the OFP may be determined for the current image scene. Accordingly, to summarize, the coarse AF scores may be used to determine the general vicinity of the optimal focal position, while the fine AF scores may be used to pinpoint a more exact position within that vicinity.
In one embodiment, the auto-focus process may begin by acquiring coarse AF scores along the entire available focal length, beginning at position 0 and ending at position L (shown on graph 710), and determining the coarse AF scores at various step positions (e.g., CP1-CP6). In one embodiment, once the focal position of the lens reaches position L, the position may be reset to 0 before evaluating the AF scores at the various focal positions. For instance, this may be due to the coil settling time of the mechanical elements controlling the focal position. In this embodiment, after resetting to position 0, the focal position may be adjusted toward position L to the first position exhibiting a negative change in the coarse AF score, here position CP5, which exhibits a negative change ΔC45 relative to position CP4. From position CP5, the focal position may be adjusted back in the direction toward position 0 in increments that are smaller relative to the increments used in the coarse AF score adjustments (e.g., positions FP1, FP2, FP3, etc.), while searching for the peak 715 in the fine AF score curve 714. As discussed above, the focal position OFP corresponding to the peak 715 in the fine AF score curve 714 may be the optimal focal position for the current image scene.
As will be appreciated, the techniques described above for locating the optimal area and position of focus, insofar as they analyze the changes in the AF score curves 712 and 714, may be referred to as "hill climbing." Further, while the analysis of the coarse AF scores (curve 712) and the fine AF scores (curve 714) is depicted as using same-sized steps for the coarse score analysis (e.g., the distance between CP1 and CP2) and same-sized steps for the fine score analysis (e.g., the distance between FP1 and FP2), in some embodiments the step sizes may vary depending upon the change in score from one position to the next. For instance, in one embodiment, the step size between CP3 and CP4 may be reduced relative to the step size between CP1 and CP2, since the overall delta in the coarse AF score (ΔC34) is smaller than the delta from CP1 to CP2 (ΔC12).
A method 720 depicting this process is illustrated in Figure 62. Beginning at block 722, a coarse AF score is determined for the image data at various steps along the focal length, from position 0 to position L (Figure 61). Thereafter, at block 724, the coarse AF scores are analyzed, and the first coarse position exhibiting a negative change in the coarse AF score is identified as the starting point for fine AF score analysis. Subsequently, at block 726, the focal position is stepped back toward the initial position 0 in smaller steps, with the fine AF score at each step being analyzed until a peak in the AF score curve (e.g., curve 714 of Figure 61) is located. At block 728, the focal position corresponding to the peak is set as the optimal focal position for the current image scene.
As mentioned above, due to mechanical coil settling times, the embodiment of the technique shown in Figure 62 may be adapted to acquire coarse AF scores along the entire focal length first, rather than analyzing each coarse position one by one and searching for the optimal focus area. Other embodiments, however, in which coil settling time is less of a concern, may analyze the coarse AF scores one step at a time instead of searching the entire focal length.
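Purely as an illustration of the hill-climbing flow of Figure 62, the following C sketch performs the coarse sweep and fine backstep. The lens-control and scoring functions are hypothetical stand-ins for the actual hardware/firmware interfaces, fixed step sizes are assumed (though, as noted above, they may adapt to the score deltas), and the coil-settling reset to position 0 is omitted:

    extern void   move_lens_to(int pos);        /* hypothetical actuator control */
    extern double coarse_af_score(void);        /* e.g., from camY 3x3 filters   */
    extern double fine_af_score(void);          /* e.g., from BayerY/horz filter */

    int autofocus_hill_climb(int L, int coarse_step, int fine_step)
    {
        /* Blocks 722/724: sweep 0..L, stopping at the first negative change. */
        double prev = -1.0;
        int start = L;
        for (int pos = 0; pos <= L; pos += coarse_step) {
            move_lens_to(pos);
            double s = coarse_af_score();
            if (prev >= 0.0 && s < prev) { start = pos; break; }
            prev = s;
        }

        /* Block 726: step back toward 0 in fine steps until the score peaks. */
        move_lens_to(start);
        double best = fine_af_score();
        int best_pos = start;
        for (int pos = start - fine_step; pos >= 0; pos -= fine_step) {
            move_lens_to(pos);
            double s = fine_af_score();
            if (s < best) break;            /* past the peak 715 */
            best = s; best_pos = pos;
        }

        move_lens_to(best_pos);             /* block 728: optimal focal position */
        return best_pos;
    }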
In certain embodiments, AF scores may be determined using white-balanced luma values derived from the Bayer RGB data. For instance, the luma value Y may be derived by decimating a 2×2 Bayer quad by a factor of 2, as shown in Figure 63, or by decimating a 4×4 pixel block consisting of four 2×2 Bayer quads by a factor of 4, as shown in Figure 64. In one embodiment, gradients may be used to determine the AF scores. In another embodiment, the AF scores may be determined by applying a 3×3 transform using a Scharr operator, which offers rotational symmetry while minimizing the weighted mean squared angular error in the Fourier domain. By way of example, the calculation of a coarse AF score on camY using the common Scharr operator (discussed above) is shown below:
AFScore_coarse = f(F0 × in) + f(F1 × in)

where F0 = [-3 0 3; -10 0 10; -3 0 3] and F1 = [-3 -10 -3; 0 0 0; 3 10 3] are the Scharr operators shown above,
and where in represents the decimated luma Y value. In other embodiments, other 3×3 transforms may be used to calculate AF scores for both coarse and fine statistics.
Auto-focus adjustments may also be performed differently depending upon the color component, since different wavelengths of light may be affected differently by the lens, which is one reason the horizontal filter 684 is applied to each color component independently. Thus, auto-focus may still be performed even in the presence of chromatic aberration in the lens. For instance, because red and blue typically focus at a different position or distance relative to green when chromatic aberration is present, the relative AF scores for each color may be used to determine the direction of focus. This is better illustrated in Figure 65, which shows the optimal focal positions for the blue, red, and green channels of a lens 740. As shown, the optimal focal positions for red, green, and blue are depicted by the reference letters R, G, and B, respectively, each corresponding to an AF score, with the current focal position at 742. Generally, in such a configuration, it may be desirable to select the optimal focal position as the position corresponding to the optimal focal position for green (e.g., since Bayer RGB has twice as many green components as red or blue components), here position G. Thus, for the optimal focal position, the green channel should be expected to exhibit the highest auto-focus score. Accordingly, based upon the positions of the optimal focal positions for each color (with those closer to the lens having higher AF scores), the AF logic 680 and associated control logic 84 may determine the direction of focus based upon the relative AF scores for blue, green, and red. For instance, if the blue channel has a higher AF score than the green channel (as shown in Figure 65), then the focal position is adjusted in the negative direction (toward the image sensor) from the current position 742, without having to first analyze in the positive direction. In some embodiments, illuminant detection or analysis using color correlated temperatures (CCT) may be performed.
Further, as mentioned above, variance scores may also be used. For instance, pixel sums and pixel squared sum values may be accumulated for block sizes (e.g., 8×8 to 32×32 pixels), and these may be used to derive variance scores (e.g., avg(pixel^2) - (avg(pixel))^2). The variances may be summed to obtain a total variance score for each window. Smaller block sizes may be used to obtain fine variance scores, and larger block sizes may be used to obtain coarser variance scores.
Referring to the 3A statistics logic 468 of Figure 51, the logic 468 may also be configured to collect component histograms 750 and 752. As will be appreciated, histograms may be used to analyze the pixel level distribution in an image. This may be useful for implementing certain functions, such as histogram equalization, in which the histogram data is used to determine a histogram specification (histogram matching). By way of example, luma histograms may be used for AE (e.g., for adjusting/setting sensor integration times), and color histograms may be used for AWB. In the present embodiment, histograms may be 256, 128, 64, or 32 bins for each color component (in which the top 8, 7, 6, or 5 bits, respectively, of the pixel are used to determine the bin), as specified by a bin size (BinSize). For instance, when the pixel data is 14 bits, an additional scale factor between 0 and 6 and an offset may be specified to determine what range of the pixel data (e.g., which 8 bits) is collected for statistics purposes. The bin number may be obtained as follows:
idx = ((pixel - hist_offset) >> (6 - hist_scale))
In one embodiment, the color histogram bins are incremented only when the bin index is in the range [0, 2^(8-BinSize)):
if (idx >= 0 && idx < 2^(8-BinSize))
    StatsHist[idx] += Count;
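A minimal C sketch of this binning, assuming 14-bit pixel data and a simple bin-count parameter in place of the 2^(8-BinSize) expression, might look like:

    static unsigned int StatsHist[256];

    void hist_update(int pixel, int hist_offset, int hist_scale,
                     int bins /* 256, 128, 64, or 32 */, unsigned int Count)
    {
        int idx = (pixel - hist_offset) >> (6 - hist_scale);
        if (idx >= 0 && idx < bins)
            StatsHist[idx] += Count;
    }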
In the present embodiment, the statistics processing unit 120 may include two histogram units. The first histogram 750 (Hist0) may be configured to collect pixel data after the 4×4 decimation, as part of the statistics collection. For Hist0, the components may be selected to be RGB, sRGBlinear, sRGB, or YC1C2 using the selection circuit 756. As shown in more detail in Figure 66, the second histogram 752 (Hist1) may be configured to collect pixel data before the statistics pipeline (before the defect pixel correction logic 460). For instance, as will be discussed further, the raw Bayer RGB data (output from 124) may be decimated using logic 760 by skipping pixels (to generate signal 754). For the green channel, the color may be selected between Gr, Gb, or both Gr and Gb (both the Gr and Gb counts are accumulated in the green bins).
In order to keep the histogram bin width the same between the two histograms, Hist1 may be configured to collect pixel data every 4 pixels (every other Bayer quad). The start of the histogram window determines the first Bayer quad location where the histogram begins accumulating. Starting at this location, every other Bayer quad is skipped horizontally and vertically for Hist1. The window start location can be any pixel position for Hist1 and, therefore, the pixels skipped by the histogram calculation can be selected by changing the start window location. Hist1 can be used to collect data, represented by 1112 in Figure 66, close to the black level, to assist in the dynamic black level compensation at block 462. Thus, while the histogram 752 is shown in Figure 66 as being separate from the 3A statistics logic 468 for illustrative purposes, it should be understood that the histogram 752 may actually be part of the statistics written to memory, and may actually be physically located within the statistics processing unit 120.
In the present embodiment, the red (R) and blue (B) bins may be 20 bits, with the green (G) bin being 21 bits (green is larger to accommodate the Gr and Gb accumulation in Hist1). This allows for a maximum picture size of 4160×3120 pixels (12 MP). The internal memory size required is 3×256×20(21) bits (3 color components, 256 bins).
With regard to memory format, the statistics for the AWB/AE windows, the AF windows, the 2D color histogram, and the component histograms may be mapped to registers to allow early access by firmware. In one embodiment, two memory pointers may be used to write the statistics to memory: one for the tile statistics 674 and one for the luma row sums 678, followed by all the other collected statistics. All statistics are written to external memory, which may be DMA memory. The memory address registers may be double-buffered so that a new location in memory can be specified on every frame.
Before proceeding with a detailed discussion of the ISP pipe logic 82 downstream from the ISP front-end logic 80, it should be understood that the arrangement of the various functional logic blocks in the statistics processing units 120 and 122 (e.g., logic blocks 460, 462, 464, 466, and 468), as well as the arrangement of the various functional logic blocks in the ISP front-end pixel processing unit 130 (e.g., logic blocks 298 and 300), is intended to illustrate merely one embodiment of the present technique. Indeed, in other embodiments, the logic blocks illustrated herein may be arranged in a different order, or may include additional logic blocks that perform additional image processing functions not specifically described herein. Further, it should be understood that the image processing operations performed in the statistics processing units (e.g., 120 and 122), such as lens shading correction, defective pixel detection/correction, and black level compensation, are performed within the statistics processing units for the purpose of collecting statistical data. Thus, the processing operations performed upon the image data received by the statistics processing units are not actually reflected in the image signal 109 (FEProcOut) that is output from the ISP front-end pixel processing logic 130 and forwarded to the ISP pipe processing logic 82.
Before continuing, it should also be noted that, given sufficient processing time and the similarity between many of the processing requirements of the various operations described herein, it is possible to reconfigure the functional blocks shown here to perform image processing sequentially, rather than in a pipelined nature. As will be understood, this may further reduce overall hardware implementation costs, but may also increase the bandwidth to external memory (e.g., to cache/store intermediate results/data).
ISP Pipeline ("Pipe") Processing Logic
Having described the ISP front-end logic 80 in detail above, the present discussion will now shift its focus to the ISP pipe processing logic 82. Generally, the function of the ISP pipe logic 82 is to receive raw image data, which may be provided from the ISP front-end logic 80 or retrieved from memory 108, and to perform additional image processing operations on it, i.e., prior to outputting the image data to the display device 28.
Figure 67 depicts a block diagram showing an embodiment of the ISP pipe logic 82. As illustrated, the ISP pipe logic 82 may include raw processing logic 900, RGB processing logic 902, and YCbCr processing logic 904. The raw processing logic 900 may perform various image processing operations, such as defective pixel detection and correction, lens shading correction, demosaicing, as well as applying gains for auto-white balance and/or setting a black level, as discussed further below. As shown in the present embodiment, the input signal 908 to the raw processing logic 900 may be either the raw pixel output 109 (signal FEProcOut) from the ISP front-end logic 80 or the raw pixel data 112 from memory 108, depending upon the present configuration of the selection logic 906.
As a result of the demosaicing operations performed within the raw processing logic 900, the image signal output 910 may be in the RGB domain, and may be subsequently forwarded to the RGB processing logic 902. For instance, as shown in Figure 67, the RGB processing logic 902 receives the signal 916, which may be the output signal 910 or an RGB image signal 912 from memory 108, depending upon the present configuration of the selection logic 914. The RGB processing logic 902 may provide for various RGB color adjustment operations, including color correction (e.g., using a color correction matrix), the application of color gains for auto-white balance, as well as global tone mapping, as discussed further below. The RGB processing logic 902 may also provide for the color space conversion of the RGB image data to the YCbCr (luma/chroma) color space. Thus, the image signal output 918 may be in the YCbCr domain, and may be subsequently forwarded to the YCbCr processing logic 904.
For instance, as shown in Figure 67, the YCbCr processing logic 904 receives the signal 924, which may be the output signal 918 from the RGB processing logic 902 or a YCbCr signal 920 from memory 108, depending upon the present configuration of the selection logic 922. As will be discussed in further detail below, the YCbCr processing logic 904 may provide for image processing operations in the YCbCr color space, including scaling, chroma suppression, luma sharpening, brightness, contrast, and color (BCC) adjustments, YCbCr gamma mapping, chroma decimation, and so forth. The image signal output 926 of the YCbCr processing logic 904 may be sent to memory 108, or may be output from the ISP pipe processing logic 82 as the image signal 114 (Fig. 7). The image signal 114 may be sent to the display device 28 (either directly or via memory 108) for viewing by the user, or may be further processed using a compression engine (e.g., encoder 118), a CPU/GPU, a graphics engine, or the like.
In accordance with embodiments of the present techniques, the ISP pipe logic 82 may support the processing of raw pixel data in 8-bit, 10-bit, 12-bit, or 14-bit formats. For instance, in one embodiment, 8-bit, 10-bit, or 12-bit input data may be converted to 14 bits at the input of the raw processing logic 900, and raw processing and RGB processing operations may be performed with 14-bit precision. In the latter embodiment, the 14-bit image data may be down-sampled to 10 bits prior to the conversion of the RGB data to the YCbCr color space, and the YCbCr processing (logic 904) may be performed with 10-bit precision.
In order to provide a comprehensive description of the various functions provided by the ISP pipe processing logic 82, each of the raw processing logic 900, the RGB processing logic 902, and the YCbCr processing logic 904, as well as the internal logic for performing the various image processing operations that may be implemented in each logic block 900, 902, and 904, will be discussed sequentially below, beginning with the raw processing logic 900. For instance, referring now to Figure 68, a block diagram showing a more detailed view of an embodiment of the raw processing logic 900 is illustrated, in accordance with an embodiment of the present technique. As shown, the raw processing logic 900 includes the gain, offset, and clamp (GOC) logic 930, defective pixel detection/correction (DPDC) logic 932, noise reduction logic 934, lens shading correction logic 936, GOC logic 938, and demosaicing logic 940. Further, while the examples discussed below assume the use of a Bayer color filter array with the image sensor(s) 90, it should be understood that other embodiments of the present techniques may utilize different types of color filters as well.
The input signal 908, which may be a raw image signal, is first received by the gain, offset, and clamp (GOC) logic 930. The GOC logic 930 may provide similar functions, and may be implemented in a similar manner, as the BLC logic 462 of the statistics processing unit 120 of the ISP front-end logic 80, discussed above with regard to Figure 37. For instance, the GOC logic 930 may provide digital gain, offsets, and clamping (clipping) independently for each color component R, B, Gr, and Gb of a Bayer image sensor. Particularly, the GOC logic 930 may perform auto-white balance or set the black level of the raw image data. Further, in some embodiments, the GOC logic 930 may also be used to correct or compensate for an offset between the Gr and Gb color components.
In operation, the input value for the current pixel is first offset by a signed value and multiplied by a gain. This operation may be performed using the formula shown in equation 11 above, wherein X represents the input pixel value for a given color component R, B, Gr, or Gb, O[c] represents a signed 16-bit offset for the current color component c, and G[c] represents a gain value for the color component c. The values for G[c] may be previously determined during statistics processing (e.g., in the ISP front-end block 80). In one embodiment, the gain G[c] may be a 16-bit unsigned number with 2 integer bits and 14 fraction bits (e.g., 2.14 floating point representation), and the gain G[c] may be applied with rounding. By way of example only, the gain G[c] may have a range of between 0 to 4X.
The computed pixel value Y from equation 11 (which includes the gain G[c] and offset O[c]) is then clipped to a minimum and maximum range in accordance with equation 12. As discussed above, the variables min[c] and max[c] may represent signed 16-bit "clipping values" for the minimum and maximum output values, respectively. In one embodiment, the GOC logic 930 may also be configured to maintain a count of the number of pixels that were clipped above and below the maximum and minimum ranges, respectively, for each color component.
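The operation described by equations 11 and 12 may be sketched as follows, assuming the form Y = (X + O[c]) * G[c] with a 2.14 fixed-point gain; the rounding detail and the names are assumptions:

    // Minimal C sketch of the GOC gain/offset/clamp path with clip counters.
    typedef struct { long clipped_low, clipped_high; } GocCounts;

    int goc_apply(int X, int O_c, unsigned int G_c /* 2.14 fixed point */,
                  int min_c, int max_c, GocCounts *cnt)
    {
        /* Offset, then gain with rounding: Y = (X + O[c]) * G[c] (equation 11). */
        long long y = ((long long)(X + O_c) * G_c + (1 << 13)) >> 14;

        if (y < min_c) { cnt->clipped_low++;  return min_c; }   /* equation 12 */
        if (y > max_c) { cnt->clipped_high++; return max_c; }
        return (int)y;
    }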
Subsequently, the output of the GOC logic 930 is forwarded to the defective pixel detection and correction logic 932. As discussed above with reference to Figure 37 (DPDC logic 460), defective pixels may be attributable to a number of factors, and may include "hot" (or leaky) pixels, "stuck" pixels, and "dead" pixels, wherein hot pixels exhibit a higher than normal charge leakage relative to non-defective pixels, and thus may appear brighter than non-defective pixels, and wherein a stuck pixel appears as always being on (e.g., fully charged) and thus appears brighter, whereas a dead pixel appears as always being off. As such, it may be desirable to have a pixel detection scheme that is robust enough to identify and address different types of failure scenarios. Particularly, when compared to the front-end DPDC logic 460, which may provide only dynamic defect detection/correction, the pipe DPDC logic 932 may provide for fixed or static defect detection/correction, dynamic defect detection/correction, as well as speckle removal.
In accordance with embodiments of the presently disclosed techniques, the defect pixel correction/detection performed by the DPDC logic 932 may occur independently for each color component (e.g., R, B, Gr, and Gb), and may include various operations for detecting defect pixels, as well as various operations for correcting the detected defect pixels. For instance, in one embodiment, the defect pixel detection operations may provide for the detection of static defects, dynamic defects, as well as the detection of speckle, which may refer to electrical interference or noise (e.g., photon noise) that may be present in the imaging sensor. By analogy, speckle may appear on an image as seemingly random noise artifacts, similar to the manner in which static may appear on a display, such as a television display. Further, as noted above, dynamic defect correction is regarded as being dynamic in the sense that the characterization of a pixel as being defective at a given time may depend on the image data in the neighboring pixels. For example, a stuck pixel that is always at maximum brightness may not be regarded as a defect pixel if its location is in a region of the current image dominated by bright white colors. Conversely, if the stuck pixel is in a region of the current image that is dominated by black or darker colors, then the stuck pixel may be identified as a defect pixel during processing by the DPDC logic 932 and corrected accordingly.
With regard to static defect detection, the location of each pixel is compared to a static defect table, which may store data corresponding to the locations of pixels that are known to be defective. For instance, in one embodiment, the DPDC logic 932 may monitor the detection of defect pixels (e.g., using a counter mechanism or registers), and if a particular pixel is observed as repeatedly failing, the location of that pixel is stored into the static defect table. Thus, during static defect detection, if it is determined that the location of the current pixel is in the static defect table, then the current pixel is identified as a defect pixel, and a replacement value is determined and temporarily stored. In one embodiment, the replacement value may be the value of the previous pixel (based on scan order) of the same color component. The replacement value may be used to correct the static defect during dynamic/speckle defect detection and correction, as discussed below. Additionally, if the previous pixel is outside of the raw frame 278 (Figure 19), then its value is not used, and the static defect may be corrected during the dynamic defect correction process. Further, due to memory considerations, the static defect table may store a finite number of location entries. For instance, in one embodiment, the static defect table may be implemented as a FIFO queue configured to store a total of 16 locations for every two lines of image data. The locations defined in the static defect table will, nevertheless, be corrected using a previous-pixel replacement value (rather than via the dynamic defect detection process discussed below). As mentioned above, embodiments of the present technique may also provide for updating the static defect table intermittently over time.
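As a loose sketch of the table lookup just described, the fragment below models the static defect table as a bounded FIFO of known-bad coordinates. The class name, the single shared queue (rather than one partition per two lines of image data), and the capacity handling are simplifying assumptions for illustration:

from collections import deque

class StaticDefectTable:
    """Bounded FIFO of (row, col) locations known to be defective (sketch)."""

    def __init__(self, capacity=16):
        self.table = deque(maxlen=capacity)  # oldest entries are evicted first

    def add(self, pos):
        if pos not in self.table:
            self.table.append(pos)

    def __contains__(self, pos):
        return pos in self.table

def static_defect_check(table, pos, prev_same_color_value):
    """Returns (is_static_defect, replacement_value); the replacement is the
    previous same-color pixel's value in scan order, as described above."""
    if pos in table:
        return True, prev_same_color_value
    return False, None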
Embodiments may provide for the static defect table to be implemented in on-chip memory or off-chip memory. As will be appreciated, an on-chip implementation may increase overall chip area/size, while an off-chip implementation may reduce chip area/size but increase memory bandwidth requirements. Thus, it should be understood that the static defect table may be implemented either on-chip or off-chip depending on specific implementation requirements, i.e., the total number of pixels that are to be stored within the static defect table.
The dynamic defect and speckle detection processes may be time-shifted with respect to the static defect detection process discussed above. For instance, in one embodiment, the dynamic defect and speckle detection process may begin after the static defect detection process has analyzed two scan lines (e.g., rows) of pixels. As can be appreciated, this allows for the identification of static defects and the determination of their respective replacement values to occur before dynamic/speckle detection takes place. For example, during the dynamic/speckle detection process, if the current pixel was previously marked as being a static defect, rather than applying the dynamic/speckle detection operations, the static defect is simply corrected using the previously assessed replacement value.
With regard to dynamic defect and speckle detection, these processes may occur sequentially or in parallel. The dynamic defect and speckle detection and correction performed by the DPDC logic 932 may rely on adaptive edge detection using pixel-to-pixel directional gradients. In one embodiment, the DPDC logic 932 may select the eight immediate neighbors of the current pixel having the same color component that are within the raw frame 278 (Figure 19). In other words, the current pixel and its eight immediate neighbors P0, P1, P2, P3, P4, P5, P6, and P7 may form a 3x3 area, as shown below in Figure 69.
It should be noted, however, that depending on the location of the current pixel P, pixels outside the raw frame 278 are not considered when calculating the pixel-to-pixel gradients. For example, with regard to the "top-left" case 942 shown in Figure 69, the current pixel P is at the top-left corner of the raw frame 278 and, therefore, the neighboring pixels P0, P1, P2, P3, and P5 outside of the raw frame 278 are not considered, leaving only the pixels P4, P6, and P7 (N=3). In the "top" case 944, the current pixel P is at the top-most edge of the raw frame 278 and, therefore, the neighboring pixels P0, P1, and P2 outside of the raw frame 278 are not considered, leaving only the pixels P3, P4, P5, P6, and P7 (N=5). Next, in the "top-right" case 946, the current pixel P is at the top-right corner of the raw frame 278 and, therefore, the neighboring pixels P0, P1, P2, P4, and P7 outside of the raw frame 278 are not considered, leaving only the pixels P3, P5, and P6 (N=3). In the "left" case 948, the current pixel P is at the left-most edge of the raw frame 278 and, therefore, the neighboring pixels P0, P3, and P5 outside of the raw frame 278 are not considered, leaving only the pixels P1, P2, P4, P6, and P7 (N=5).

In the "center" case 950, all of the pixels P0-P7 lie within the raw frame 278 and are thus all used in determining the pixel-to-pixel gradients (N=8). In the "right" case 952, the current pixel P is at the right-most edge of the raw frame 278 and, therefore, the neighboring pixels P2, P4, and P7 outside of the raw frame 278 are not considered, leaving only the pixels P0, P1, P3, P5, and P6 (N=5). Additionally, in the "bottom-left" case 954, the current pixel P is at the bottom-left corner of the raw frame 278 and, therefore, the neighboring pixels P0, P3, P5, P6, and P7 outside of the raw frame 278 are not considered, leaving only the pixels P1, P2, and P4 (N=3). In the "bottom" case 956, the current pixel P is at the bottom-most edge of the raw frame 278 and, therefore, the neighboring pixels P5, P6, and P7 outside of the raw frame 278 are not considered, leaving only the pixels P0, P1, P2, P3, and P4 (N=5). Finally, in the "bottom-right" case 958, the current pixel P is at the bottom-right corner of the raw frame 278 and, therefore, the neighboring pixels P2, P4, P5, P6, and P7 outside of the raw frame 278 are not considered, leaving only the pixels P0, P1, and P3 (N=3).
Thus, depending upon the position of the current pixel P, the number of pixels used in determining the pixel-to-pixel gradients may be 3, 5, or 8. In the illustrated embodiment, for each neighboring pixel (k = 0 to 7) within the picture boundary (e.g., raw frame 278), the pixel-to-pixel gradients may be calculated as follows:
G_k = abs(P − P_k), for 0 ≤ k ≤ 7 (only for k within the raw frame)   (51)
Additionally, an average gradient, G_av, may be calculated as the difference between the current pixel and the average, P_av, of its surrounding pixels, as shown by the equations below:
P_av = (Σ_k^N P_k) / N, where N = 3, 5, or 8 (depending on pixel position)   (52a)

G_av = abs(P − P_av)   (52b)
The pixel-to-pixel gradient values (equation 51) may be used in determining a dynamic defect case, and the average of the neighboring pixels (equations 52a and 52b) may be used in identifying speckle cases, as discussed below.
In one embodiment, dynamic defect detection may be performed by the DPDC logic 932 as follows. First, a pixel is assumed to be defective if a certain number of the gradients G_k are at or below a particular threshold, denoted by the variable dynTh (dynamic defect threshold). Thus, for each pixel, a count (C) of the number of gradients for neighboring pixels inside the picture boundaries that are at or below the threshold dynTh is accumulated. The threshold dynTh may be a combination of a fixed threshold component and a dynamic threshold component that may depend on the "activity" present in the surrounding pixels. For instance, in one embodiment, the dynamic threshold component of dynTh may be determined by calculating a high-frequency component value, P_hf, based upon summing the absolute difference between the average pixel value P_av (equation 52a) and each neighboring pixel, as illustrated below:
P_hf = (8/N) Σ_k^N abs(P_av − P_k), where N = 3, 5, or 8   (52c)
In instances where the pixel is located at an image corner (N=3) or at an image edge (N=5), P_hf may be multiplied by 8/3 or 8/5, respectively. As can be appreciated, this ensures that the high-frequency component P_hf is normalized as if based on eight neighboring pixels (N=8).
Once P_hf is determined, the dynamic defect detection threshold dynTh may be computed as shown below:

dynTh = dynTh_1 + (dynTh_2 × P_hf),   (53)

where dynTh_1 represents the fixed threshold component, and dynTh_2 represents the dynamic threshold component and is a multiplier for P_hf in equation 53. A different fixed threshold component dynTh_1 may be provided for each color component, but for each pixel of the same color, dynTh_1 is the same. By way of example only, dynTh_1 may be set so that it is at least above the variance of noise in the image.
The dynamic threshold component dynTh_2 may be determined based on some characteristic of the image. For instance, in one embodiment, dynTh_2 may be determined using stored empirical data regarding exposure and/or sensor integration time. The empirical data may be determined during calibration of the image sensor (e.g., 90), and may associate dynamic threshold component values that may be selected for dynTh_2 with each of a number of data points. Thus, based upon the current exposure and/or sensor integration time value, which may be determined during statistics processing in the ISP front-end logic 80, dynTh_2 may be determined by selecting the dynamic threshold component value from the stored empirical data that corresponds to the current exposure and/or sensor integration time value. Additionally, if the current exposure and/or sensor integration time value does not correspond directly to one of the empirical data points, then dynTh_2 may be determined by interpolating the dynamic threshold component values associated with the data points between which the current exposure and/or sensor integration time value falls. Further, like the fixed threshold component dynTh_1, the dynamic threshold component dynTh_2 may have different values for each color component. Thus, the composite threshold value dynTh may vary for each color component (e.g., R, B, Gr, Gb).
As mentioned above, for each pixel, a count C of the number of gradients for neighboring pixels inside the picture boundaries that are at or below the threshold dynTh is determined. For instance, for each neighboring pixel within the raw frame 278, the accumulated count C of the gradients G_k that are at or below the threshold dynTh may be computed as follows:

C = Σ_k^N (G_k ≤ dynTh),   (54)

for 0 ≤ k ≤ 7 (only for k within the raw frame)
Next, if the accumulated count C is determined to be less than or equal to a maximum count, denoted by the variable dynMaxC, then the pixel may be considered a dynamic defect. In one embodiment, different values for dynMaxC may be provided for the N=3 (corner), N=5 (edge), and N=8 conditions. This logic is expressed below:

if (C ≤ dynMaxC), then the current pixel P is defective.   (55)
As mentioned above, the location of defect pixels may be stored into the static defect table. In some embodiments, the minimum gradient value (min(G_k)) calculated during dynamic defect detection for the current pixel may be stored and used to sort the defect pixels, such that a greater minimum gradient value indicates a greater "severity" of the defect, to be corrected during pixel correction before less severe defects are corrected. In one embodiment, a pixel may need to be processed over multiple imaging frames before being stored into the static defect table, such as by filtering the locations of defect pixels over time. In the latter embodiment, the location of a defect pixel may be stored into the static defect table only if the defect appears at the same location in a particular number of consecutive images. Further, in some embodiments, the static defect table may be configured to sort the stored defect pixel locations based upon the minimum gradient values. For instance, the highest minimum gradient value may indicate a defect of greater "severity". By ordering the locations in this manner, the priority of static defect correction may be set, such that the most severe or important defects are corrected first. Additionally, the static defect table may be updated over time to include newly detected static defects, and to order them accordingly based on their respective minimum gradient values.
Speckle detection, which may occur in parallel with the dynamic defect detection process described above, may be performed by determining whether the value G_av (equation 52b) is above a speckle detection threshold, spkTh. Like the dynamic defect threshold dynTh, the speckle threshold spkTh may also include fixed and dynamic components, referred to as spkTh_1 and spkTh_2, respectively. In general, the fixed and dynamic components spkTh_1 and spkTh_2 may be set more "aggressively" compared to the dynTh_1 and dynTh_2 values, in order to avoid falsely detecting speckle in areas of the image that may be more heavily textured (e.g., text, foliage, certain fabric patterns, etc.). Accordingly, in one embodiment, the dynamic speckle threshold component spkTh_2 may be increased for high-texture areas of the image, and decreased for "flatter" or more uniform areas. The speckle detection threshold spkTh may be computed as shown below:

spkTh = spkTh_1 + (spkTh_2 × P_hf),   (56)

where spkTh_1 represents the fixed threshold component, and spkTh_2 represents the dynamic threshold component. The detection of speckle may then be determined according to the following expression:

if (G_av > spkTh), then the current pixel P is speckled.   (57)
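To tie equations 51 through 57 together, the following sketch evaluates the dynamic-defect and speckle tests for a single pixel. It assumes the same-color neighborhood has already been gathered with out-of-frame neighbors dropped, and the threshold parameters are illustrative rather than calibrated values:

def detect_defect(P, neighbors, dynTh1, dynTh2, dynMaxC, spkTh1, spkTh2):
    """P: current pixel value; neighbors: list of same-color neighbor values
    inside the raw frame (length 3, 5, or 8 depending on pixel position)."""
    N = len(neighbors)
    Gk = [abs(P - Pk) for Pk in neighbors]                    # eq. 51
    Pav = sum(neighbors) / N                                  # eq. 52a
    Gav = abs(P - Pav)                                        # eq. 52b
    Phf = (8.0 / N) * sum(abs(Pav - Pk) for Pk in neighbors)  # eq. 52c
    dynTh = dynTh1 + dynTh2 * Phf                             # eq. 53
    spkTh = spkTh1 + spkTh2 * Phf                             # eq. 56
    C = sum(1 for g in Gk if g <= dynTh)                      # eq. 54
    is_dynamic = C <= dynMaxC                                 # eq. 55
    is_speckle = Gav > spkTh                                  # eq. 57
    return is_dynamic, is_speckle

In practice, dynMaxC would differ for the corner (N=3), edge (N=5), and center (N=8) cases, and all threshold components would be maintained per color component, as discussed above.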
Once defective pixels have been identified, the DPDC logic 932 may apply pixel correction operations depending upon the type of defect detected. For instance, if the defective pixel was identified as a static defect, the pixel is replaced with the stored replacement value, as discussed above (e.g., the value of the previous pixel of the same color component). If the pixel was identified as either a dynamic defect or as speckle, then pixel correction may be performed as follows. First, gradients are computed as the sum of the absolute difference between the center pixel and a first and second neighbor pixel (e.g., the computation of G_k of equation 51) for four directions, a horizontal (h) direction, a vertical (v) direction, a diagonal-positive (dp) direction, and a diagonal-negative (dn) direction, as shown below:

G_h = G_3 + G_4   (58)

G_v = G_1 + G_6   (59)

G_dp = G_2 + G_5   (60)

G_dn = G_0 + G_7   (61)
Next, the corrective pixel value P_C may be determined via linear interpolation of the two neighboring pixels associated with whichever of the directional gradients G_h, G_v, G_dp, and G_dn has the smallest value. For instance, in one embodiment, the logic statement below may express the calculation of P_C:

if (min == G_h): P_C = (P_3 + P_4) / 2;   (62)

else if (min == G_v): P_C = (P_1 + P_6) / 2;

else if (min == G_dp): P_C = (P_2 + P_5) / 2;

else if (min == G_dn): P_C = (P_0 + P_7) / 2;
The pixel correction techniques implemented by the DPDC logic 932 may also provide for exceptions at boundary conditions. For instance, if one of the two neighboring pixels associated with the selected interpolation direction is outside of the raw frame, the value of the neighbor pixel that is within the raw frame is substituted instead. Thus, using this technique, the corrective pixel value will be equivalent to the value of the neighbor pixel within the raw frame.
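A minimal sketch of the directional-gradient correction of equations 58-62, including the boundary rule just mentioned, might look as follows. The neighbor indexing follows Figure 69, with None marking a neighbor outside the raw frame; treating out-of-frame gradient contributions as simply omitted is a simplification assumed here:

def correct_pixel(P, n):
    """P: current pixel value; n: dict mapping 0..7 to the same-color neighbor
    value, or None when that neighbor lies outside the raw frame."""
    def Gk(k):
        # Out-of-frame neighbors are not considered (contribute nothing here).
        return abs(P - n[k]) if n[k] is not None else 0

    # Directional (first, second) neighbor pairs (equations 58-61).
    directions = {
        'h':  (3, 4),   # horizontal
        'v':  (1, 6),   # vertical
        'dp': (2, 5),   # diagonal-positive
        'dn': (0, 7),   # diagonal-negative
    }
    candidates = [pair for pair in directions.values()
                  if n[pair[0]] is not None or n[pair[1]] is not None]
    a, b = min(candidates, key=lambda ab: Gk(ab[0]) + Gk(ab[1]))

    # Equation 62 plus the boundary rule: if one neighbor of the chosen
    # direction is outside the frame, its in-frame partner's value is used,
    # so P_C equals that in-frame neighbor's value.
    va = n[a] if n[a] is not None else n[b]
    vb = n[b] if n[b] is not None else n[a]
    return (va + vb) / 2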
It should be noted that the defect pixel detection/correction techniques applied by the DPDC logic 932 during the ISP pipe processing are more robust compared to the DPDC logic 460 in the ISP front-end logic 80. As discussed in the embodiment above, the DPDC logic 460 performs only dynamic defect detection and correction using neighboring pixels in the horizontal direction only, whereas the DPDC logic 932 provides for the detection and correction of static defects, dynamic defects, as well as speckle, using neighboring pixels in both the horizontal and vertical directions.

As will be appreciated, the storage of the locations of the defect pixels using a static defect table may provide for temporal filtering of defect pixels with lower memory requirements. For instance, compared to conventional techniques which store entire images and apply temporal filtering over time to identify static defects, embodiments of the present technique store only the locations of defect pixels, which may typically be done using only a fraction of the memory required to store an entire image frame. Further, as discussed above, the storing of a minimum gradient value (min(G_k)) allows for an efficient use of the static defect table that prioritizes the order of the locations at which defect pixels are corrected (e.g., beginning with those that are most significant).

Additionally, the use of thresholds that include a dynamic component (e.g., dynTh_2 and spkTh_2) may help to reduce false defect detections, a problem often encountered in conventional image processing systems when processing high-texture areas of an image (e.g., text, foliage, certain fabric patterns, etc.). Further, the use of directional gradients (e.g., h, v, dp, dn) for pixel correction may reduce the appearance of visual artifacts when a false defect detection occurs. For instance, filtering in the minimum gradient direction may result in a correction that still yields acceptable results under most cases, even in cases of false detection. Additionally, the inclusion of the current pixel P in the gradient calculation may improve the accuracy of the gradient detection, particularly in the case of hot pixels.
The defect pixel detection and correction techniques implemented by the DPDC logic 932 discussed above may be summarized by a series of flow charts provided in Figures 70-72. For instance, referring first to Figure 70, a process 960 for detecting static defects is illustrated. Beginning at step 962, an input pixel P is received at a first time, T_0. Next, at step 964, the location of the pixel P is compared to the values stored in the static defect table. Decision logic 966 determines whether the location of the pixel P is found in the static defect table. If the location of P is in the static defect table, then the process 960 continues to step 968, wherein the pixel P is marked as a static defect and a replacement value is determined. As discussed above, the replacement value may be determined based upon the value of the previous pixel (in scan order) of the same color component. The process 960 then continues to step 970, at which the process 960 proceeds to the dynamic and speckle detection process 980, illustrated in Figure 71. Additionally, if at decision logic 966 the location of the pixel P is determined not to be in the static defect table, then the process 960 proceeds to step 970 without performing step 968.
Continuing to Figure 71, the input pixel P is received at time T1, as shown by step 982, for processing to determine whether a dynamic defect or speckle is present. Time T1 may represent a time-shift with respect to the static defect detection process 960 of Figure 70. As discussed above, the dynamic defect and speckle detection process may begin after the static defect detection process has analyzed two scan lines (e.g., rows) of pixels, thus allowing time for the identification of static defects and the determination of their respective replacement values to occur before dynamic/speckle detection takes place.
Decision logic 984 determines whether the input pixel P was previously marked as a static defect (e.g., by step 968 of the process 960). If P is marked as a static defect, then the process 980 may continue to the pixel correction process shown in Figure 72 and may bypass the remaining steps shown in Figure 71. If decision logic 984 determines that the input pixel P is not a static defect, then the process continues to step 986, and the neighboring pixels that may be used in the dynamic defect and speckle processes are identified. For instance, in accordance with the embodiment discussed above and illustrated in Figure 69, the neighboring pixels may include the 8 immediate neighbors of the pixel P (e.g., P0-P7), thus forming a 3x3 pixel area. Next, at step 988, pixel-to-pixel gradients are calculated with respect to each neighboring pixel within the raw frame 278, as described in equation 51 above. Additionally, an average gradient (G_av) may be calculated as the difference between the current pixel and the average of its surrounding pixels, as shown in equations 52a and 52b.
The process 980 then branches to step 990 for dynamic defect detection and to decision logic 998 for speckle detection. As noted above, dynamic defect detection and speckle detection may, in some embodiments, occur in parallel. At step 990, a count C of the number of gradients that are less than or equal to the threshold dynTh is determined. As described above, the threshold dynTh may include fixed and dynamic components and, in one embodiment, may be determined in accordance with equation 53 above. If decision logic 992 determines that C is less than or equal to the maximum count dynMaxC, then the process 980 continues to step 996, and the current pixel is marked as being a dynamic defect. Thereafter, the process 980 may continue to the pixel correction process shown in Figure 72, which is discussed below.
Returning to the branch after step 988, for speckle detection, decision logic 998 determines whether the average gradient G_av is greater than the speckle detection threshold spkTh, which, as noted, may also include fixed and dynamic components. If G_av is greater than the threshold spkTh, then the pixel P is marked at step 1000 as containing speckle and, thereafter, the process 980 continues to Figure 72 for the correction of the speckled pixel. Further, if the output of both of the decision logic blocks 992 and 998 is "NO", this indicates that the pixel P does not contain dynamic defects, speckle, or static defects (decision logic 984). Thus, when the outputs of decision logic 992 and 998 are both "NO", the process 980 may conclude at step 994, whereby the pixel P is passed through unchanged, as no defects (e.g., static, dynamic, or speckle) were detected.
Continuing to Figure 72, a pixel correction process 1010 in accordance with the techniques described above is provided. At step 1012, the input pixel P is received from the process 980 of Figure 71. It should be noted that the pixel P may be received by the process 1010 from step 984 (static defect) or from steps 996 (dynamic defect) and 1000 (speckle defect). Decision logic 1014 then determines whether the pixel P is marked as a static defect. If the pixel P is a static defect, then the process 1010 continues and concludes at step 1016, whereby the static defect is corrected using the replacement value determined at step 968 (Figure 70).
If the pixel P is not identified as a static defect, then the process 1010 continues from decision logic 1014 to step 1018, and directional gradients are calculated. For instance, as discussed above with reference to equations 58-61, the gradients may be computed as the sum of the absolute difference between the center pixel and the first and second neighboring pixels for the four directions (h, v, dp, and dn). Next, at step 1020, the directional gradient having the smallest value is identified and, thereafter, decision logic 1022 assesses whether one of the two neighboring pixels associated with the minimum gradient is located outside of the image frame (e.g., raw frame 278). If both neighboring pixels are within the image frame, then the process 1010 continues to step 1024, and a pixel correction value (P_C) is determined by applying linear interpolation to the values of the two neighboring pixels, as illustrated by equation 62. Thereafter, the input pixel P may be corrected using the interpolated pixel correction value P_C, as shown at step 1030.
Returning to decision logic 1022, if it is determined that one of the two neighboring pixels is located outside of the image frame (e.g., raw frame 278), then instead of using the value of the outside pixel (Pout), the DPDC logic 932 may substitute the value of Pout with the value of the other neighboring pixel that is inside the image frame (Pin), as shown at step 1026. Thereafter, at step 1028, the pixel correction value P_C is determined by interpolating the value of Pin and the substituted value of Pout. In other words, in this case, P_C may be equivalent to the value of Pin. Concluding at step 1030, the pixel P is corrected using the value P_C. Before continuing, it should be understood that the particular defect pixel detection and correction processes discussed herein with reference to the DPDC logic 932 are intended to reflect only one possible embodiment of the present technique. Indeed, depending on design and/or cost constraints, a number of variations are possible, and features may be added or removed such that the overall complexity and robustness of the defect detection/correction logic falls between the simpler detection/correction logic 460 implemented in the ISP front-end block 80 and the defect detection/correction logic discussed here with reference to the DPDC logic 932.
Referring back to Figure 68, the corrected pixel data is output from the DPDC logic 932 and then received by the noise reduction logic 934 for further processing. In one embodiment, the noise reduction logic 934 may be configured to implement two-dimensional edge-adaptive low-pass filtering to reduce noise in the image data while maintaining details and textures. The edge-adaptive thresholds may be set (e.g., by the control logic 84) based upon the present lighting levels, such that filtering may be strengthened under low light conditions. Further, as briefly mentioned above with regard to the determination of the dynTh and spkTh values, noise variance may be determined ahead of time for a given sensor so that the noise reduction thresholds can be set just above the noise variance, such that during noise reduction processing, noise is reduced without significantly affecting the textures and details of the scene (e.g., avoiding/reducing false detections). Assuming a Bayer color filter implementation, the noise reduction logic 934 may process each color component Gr, R, B, and Gb independently using a separable 7-tap horizontal filter and a 5-tap vertical filter. In one embodiment, the noise reduction process may be carried out by correcting for non-uniformity on the green color components (Gb and Gr), and then performing horizontal filtering and vertical filtering.
Green non-uniformity (GNU) is generally characterized by a slight brightness difference between the Gr and Gb pixels given a uniformly illuminated flat surface. Without correcting or compensating for this non-uniformity, certain artifacts, such as a "maze" artifact, may appear in the full-color image after demosaicing. The green non-uniformity process may include determining, for each green pixel in the raw Bayer image data, whether the absolute difference between the current green pixel (G1) and the green pixel to the lower right of the current pixel (G2) is less than a GNU correction threshold (gnuTh). Figure 73 illustrates the locations of the G1 and G2 pixels in a 2x2 area of the Bayer pattern. As shown, the color of the pixels bordering G1 may depend upon whether the current green pixel is a Gb or Gr pixel. For instance, if G1 is Gr, then G2 is Gb, the pixel to the right of G1 is R (red), and the pixel below G1 is B (blue). Alternatively, if G1 is Gb, then G2 is Gr, the pixel to the right of G1 is B, and the pixel below G1 is R. If the absolute difference between G1 and G2 is less than the GNU correction threshold, then the current green pixel G1 is replaced by the average of G1 and G2, as shown by the logic below:

if (abs(G1 − G2) ≤ gnuTh): G1 = (G1 + G2) / 2   (63)
As can be appreciated, applying green non-uniformity correction in this manner may help to prevent the G1 and G2 pixels from being averaged across edges, thus improving and/or preserving sharpness.
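The sketch below applies equation 63 over a full Bayer plane. The assumption that the mosaic's top-left pixel is green (so green samples sit where row and column parity match) and the in-place scan order are illustrative choices:

import numpy as np

def gnu_correct(bayer, gnuTh):
    """Green non-uniformity correction sketch (equation 63)."""
    out = bayer.astype(np.float64).copy()
    rows, cols = out.shape
    for j in range(rows - 1):
        for i in range(cols - 1):
            if (j % 2) == (i % 2):             # green site under the assumed layout
                g1 = out[j, i]
                g2 = out[j + 1, i + 1]         # green pixel to the lower right (Fig. 73)
                if abs(g1 - g2) <= gnuTh:
                    out[j, i] = (g1 + g2) / 2  # replace G1 with the average of G1, G2
    return out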
Horizontal filtering may be applied subsequent to the green non-uniformity correction and, in one embodiment, may provide a 7-tap horizontal filter. The gradient across the edge of each filter tap is computed and, if it is above a horizontal edge threshold (horzTh), the filter tap is folded to the center pixel, as will be illustrated below. The horizontal filter may process the image data independently for each color component (R, B, Gr, Gb) and may use unfiltered values as input values.
By way of example, Figure 74 shows a graphical depiction of a set of horizontal pixels P0-P6, with a center tap positioned at P3. Based upon the pixels shown in Figure 74, edge gradients for each filter tap may be calculated as follows:
Eh0 = abs(P0 − P1)   (64)

Eh1 = abs(P1 − P2)   (65)

Eh2 = abs(P2 − P3)   (66)

Eh3 = abs(P3 − P4)   (67)

Eh4 = abs(P4 − P5)   (68)

Eh5 = abs(P5 − P6)   (69)
The edge gradients Eh0-Eh5 may then be utilized by the horizontal filter component to determine a horizontal filtering output, P_horz, using the formula shown in equation 70 below:

P_horz = C0 × [(Eh2 > horzTh[c]) ? P3 : (Eh1 > horzTh[c]) ? P2 : (Eh0 > horzTh[c]) ? P1 : P0] +
C1 × [(Eh2 > horzTh[c]) ? P3 : (Eh1 > horzTh[c]) ? P2 : P1] +
C2 × [(Eh2 > horzTh[c]) ? P3 : P2] +
C3 × P3 +
C4 × [(Eh3 > horzTh[c]) ? P3 : P4] +
C5 × [(Eh3 > horzTh[c]) ? P3 : (Eh4 > horzTh[c]) ? P4 : P5] +
C6 × [(Eh3 > horzTh[c]) ? P3 : (Eh4 > horzTh[c]) ? P4 : (Eh5 > horzTh[c]) ? P5 : P6],   (70)
where horzTh[c] is the horizontal edge threshold for each color component c (e.g., R, B, Gr, and Gb), and where C0-C6 are the filter tap coefficients corresponding to the pixels P0-P6, respectively. The horizontal filter output P_horz may be applied at the center pixel P3 location. In one embodiment, the filter tap coefficients C0-C6 may be 16-bit two's-complement values with 3 integer bits and 13 fraction bits (a 3.13 fixed-point format). Further, it should be noted that the filter tap coefficients C0-C6 need not necessarily be symmetrical with respect to the center pixel P3.
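Expressed as code, the tap-folding behavior of equation 70 may be sketched as follows; the coefficient list and threshold are illustrative parameters, and the chained conditional expressions mirror the nested ternaries of the formula:

def horz_filter(p, c, horzTh):
    """p: the seven same-color horizontal pixels P0..P6 (center tap at P3);
    c: tap coefficients C0..C6; horzTh: horizontal edge threshold."""
    eh = [abs(p[k] - p[k + 1]) for k in range(6)]  # Eh0..Eh5 (equations 64-69)

    # Effective tap values after folding across detected edges toward P3.
    t0 = p[3] if eh[2] > horzTh else p[2] if eh[1] > horzTh else p[1] if eh[0] > horzTh else p[0]
    t1 = p[3] if eh[2] > horzTh else p[2] if eh[1] > horzTh else p[1]
    t2 = p[3] if eh[2] > horzTh else p[2]
    t4 = p[3] if eh[3] > horzTh else p[4]
    t5 = p[3] if eh[3] > horzTh else p[4] if eh[4] > horzTh else p[5]
    t6 = p[3] if eh[3] > horzTh else p[4] if eh[4] > horzTh else p[5] if eh[5] > horzTh else p[6]

    taps = [t0, t1, t2, p[3], t4, t5, t6]
    return sum(ck * tk for ck, tk in zip(c, taps))  # equation 70

The 5-tap vertical filter of equation 75, discussed below, follows the same pattern, with gradients Ev0-Ev3 and the center tap at P2.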
Vertical filtering is also applied by the noise reduction logic 934, subsequent to the green non-uniformity correction and horizontal filtering processes. In one embodiment, the vertical filter operation may provide a 5-tap filter, as shown in Figure 75, with the center tap of the vertical filter located at P2. The vertical filtering process may occur in a manner similar to the horizontal filtering process described above. For instance, the gradient across the edge of each filter tap is computed and, if it is above a vertical edge threshold (vertTh), the filter tap is folded to the center pixel P2. The vertical filter may process the image data independently for each color component (R, B, Gr, Gb) and may use unfiltered values as input values.
Based upon the pixels shown in Figure 75, vertical edge gradients for each filter tap may be calculated as follows:

Ev0 = abs(P0 − P1)   (71)

Ev1 = abs(P1 − P2)   (72)

Ev2 = abs(P2 − P3)   (73)

Ev3 = abs(P3 − P4)   (74)
The edge gradients Ev0-Ev3 may then be utilized by the vertical filter to determine a vertical filtering output, P_vert, using the formula shown in equation 75 below:

P_vert = C0 × [(Ev1 > vertTh[c]) ? P2 : (Ev0 > vertTh[c]) ? P1 : P0] +
C1 × [(Ev1 > vertTh[c]) ? P2 : P1] +
C2 × P2 +
C3 × [(Ev2 > vertTh[c]) ? P2 : P3] +
C4 × [(Ev2 > vertTh[c]) ? P2 : (Ev3 > vertTh[c]) ? P3 : P4],   (75)

where vertTh[c] is the vertical edge threshold for each color component c (e.g., R, B, Gr, and Gb), and where C0-C4 are the filter tap coefficients corresponding to the pixels P0-P4 of Figure 75, respectively. The vertical filter output P_vert may be applied at the center pixel P2 location. In one embodiment, the filter tap coefficients C0-C4 may be 16-bit two's-complement values with 3 integer bits and 13 fraction bits (a 3.13 fixed-point format). Further, it should be noted that the filter tap coefficients C0-C4 need not necessarily be symmetrical with respect to the center pixel P2.
Additionally, with regard to boundary conditions, when neighboring pixels are outside of the raw frame 278 (Figure 19), the values of the out-of-bound pixels are replicated using the value of the same-color pixel at the edge of the raw frame. This convention may be implemented for both horizontal and vertical filtering operations. By way of example, referring again to Figure 74, in the case of horizontal filtering, if the pixel P2 is an edge pixel at the left-most edge of the raw frame, such that the pixels P0 and P1 are outside of the raw frame, then the value of the pixel P2 is substituted for the values of the pixels P0 and P1 for horizontal filtering.
Referring again back to the block diagram of the raw processing logic 900 shown in Figure 68, the output of the noise reduction logic 934 is subsequently sent to the lens shading correction (LSC) logic 936 for processing. As discussed above, lens shading correction techniques may include applying an appropriate gain on a per-pixel basis to compensate for drop-offs in light intensity, which may be the result of the geometric optics of the lens, imperfections in manufacturing, misalignment of the microlens array and the color array filter, and so forth. Further, the infrared (IR) filter in some lenses may cause the drop-off to be illuminant-dependent and, thus, the lens shading gains may be adapted depending upon the light source detected.
In the depicted embodiment, the LSC logic 936 of the ISP pipe 82 may be implemented in a similar manner, and thus provide generally the same functions, as the LSC logic 464 of the ISP front-end block 80, as discussed above with reference to Figures 40-48. Accordingly, in order to avoid redundancy, it should be understood that the LSC logic 936 of the presently illustrated embodiment is configured to operate in generally the same manner as the LSC logic 464, and, as such, the description of the lens shading correction techniques provided above will not be repeated here. However, to generally summarize, it should be understood that the LSC logic 936 may process each color component of the raw pixel data stream independently to determine a gain to apply to the current pixel. In accordance with the above-discussed embodiments, the lens shading correction gain may be determined based upon a defined set of gain grid points distributed across the imaging frame, wherein the interval between each grid point is defined by a number of pixels (e.g., 8 pixels, 16 pixels, etc.). If the location of the current pixel corresponds to a grid point, then the gain value associated with that grid point is applied to the current pixel. However, if the location of the current pixel is between grid points (e.g., G0, G1, G2, and G3 of Figure 43), then the LSC gain value may be calculated by interpolation of the grid points between which the current pixel is located (equations 13a and 13b). This process is depicted by the process 528 of Figure 44. Further, as mentioned above with respect to Figure 42, in some embodiments the grid points may be distributed unevenly (e.g., logarithmically), such that the grid points are less concentrated in the center of the LSC region 504, but more concentrated towards the corners of the LSC region, where lens shading distortion is typically more noticeable.
Additionally, as discussed above with reference to Figures 47 and 48, the LSC logic 936 may also apply a radial gain component along with the grid gain values. The radial gain component may be determined based upon the distance of the current pixel from the center of the image (equations 14-16). As mentioned, using a radial gain allows for the use of a single common gain grid for all color components, which may greatly reduce the total storage space required for storing separate gain grids for each color component. This reduction in grid gain data may decrease implementation costs, as grid gain data tables may account for a significant portion of memory or chip area in image processing hardware.
Next, referring again to the raw processing logic block diagram 900 of Figure 68, the output of the LSC logic 936 is then passed to a second gain, offset, and clamp (GOC) block 938. The GOC logic 938 may be applied prior to demosaicing (by logic block 940) and may be used to perform auto-white balance on the output of the LSC logic 936. In the depicted embodiment, the GOC logic 938 may be implemented in the same manner as the GOC logic 930 (and the BLC logic 462). Thus, in accordance with equation 11 above, the input received by the GOC logic 938 is first offset by a signed value and then multiplied by a gain. The resulting value is then clipped to a minimum and a maximum range in accordance with equation 12.
Thereafter, the output of the GOC logic 938 is forwarded to the demosaic logic 940 for processing to produce a full-color (RGB) image based upon the raw Bayer input data. As will be appreciated, the raw output of an image sensor using a color filter array, such as a Bayer filter, is "incomplete" in the sense that each pixel is filtered to acquire only a single color component. Thus, the data collected for a single pixel alone is insufficient to determine color. Accordingly, demosaicing techniques may be used to generate a full-color image from the raw Bayer data by interpolating the missing color data for each pixel.
Referring now to Figure 76, a graphical process flow 692 is illustrated that provides a general overview as to how demosaicing may be applied to a raw Bayer image pattern 1034 to produce a full-color RGB image. As shown, a 4x4 portion 1036 of the raw Bayer image 1034 may include separate channels for each color component, including a green channel 1038, a red channel 1040, and a blue channel 1042. Because each imaging pixel in a Bayer sensor only acquires data for one color, the color data for each color channel 1038, 1040, and 1042 is incomplete, as indicated by the "?" symbols. By applying a demosaicing technique 1044, the missing color samples from each channel may be interpolated. For instance, as shown by reference number 1046, interpolated data G' may be used to fill in the missing samples on the green color channel. Similarly, interpolated data R' may (in combination with the interpolated data G' 1046) be used to fill in the missing samples on the red color channel 1048, and interpolated data B' may (in combination with the interpolated data G' 1046) be used to fill in the missing samples on the blue color channel 1050. Thus, as a result of the demosaicing process, each color channel (R, G, B) will have a full set of color data, which may then be used to reconstruct a full-color RGB image 1052.
A demosaicing technique that may be implemented by the demosaic logic 940 will now be described in accordance with one embodiment. On the green color channel, missing color samples may be interpolated using a low-pass directional filter on known green samples and a high-pass (or gradient) filter on the adjacent color channels (e.g., red and blue). For the red and blue color channels, the missing color samples may be interpolated in a similar manner, but by using low-pass filtering on known red or blue values and high-pass filtering on co-located interpolated green values. Further, in one embodiment, demosaicing on the green color channel may utilize a 5x5 pixel block edge-adaptive filter based on the original Bayer color data. As will be discussed further below, the use of an edge-adaptive filter may provide for continuous weighting based on the gradients of the horizontally and vertically filtered values, which reduces the appearance of certain artifacts commonly seen in conventional demosaicing techniques, such as aliasing, "checkerboard", or "rainbow" artifacts.
During demosaicing on the green channel, the original values for the green pixels (Gr and Gb pixels) of the Bayer image pattern are used. However, to obtain a full set of data for the green channel, green pixel values may be interpolated at the red and blue pixels of the Bayer image pattern. In accordance with the present technique, horizontal and vertical energy components, referred to as Eh and Ev, respectively, are first calculated at the red and blue pixels based upon the above-mentioned 5x5 pixel block. The values of Eh and Ev may be used to obtain an edge-weighted filtered value from the horizontal and vertical filtering steps, as discussed further below.
By way of example, Figure 77 illustrates the computation of the Eh and Ev values for a red pixel centered in the 5x5 pixel block at location (j, i), wherein j corresponds to a row and i corresponds to a column. As shown, the calculation of Eh considers the middle three rows (j−1, j, j+1) of the 5x5 pixel block, and the calculation of Ev considers the middle three columns (i−1, i, i+1) of the 5x5 pixel block. To compute Eh, the absolute value of the sum of each of the pixels in the red columns (i−2, i, i+2) multiplied by a corresponding coefficient (e.g., −1 for columns i−2 and i+2; 2 for column i) is summed with the absolute value of the sum of each of the pixels in the blue columns (i−1, i+1) multiplied by a corresponding coefficient (e.g., 1 for column i−1; −1 for column i+1). To compute Ev, the absolute value of the sum of each of the pixels in the red rows (j−2, j, j+2) multiplied by a corresponding coefficient (e.g., −1 for rows j−2 and j+2; 2 for row j) is summed with the absolute value of the sum of each of the pixels in the blue rows (j−1, j+1) multiplied by a corresponding coefficient (e.g., 1 for row j−1; −1 for row j+1). These computations are illustrated by equations 76 and 77 below:
Eh = abs[2(P(j−1, i) + P(j, i) + P(j+1, i))
− (P(j−1, i−2) + P(j, i−2) + P(j+1, i−2))
− (P(j−1, i+2) + P(j, i+2) + P(j+1, i+2))]
+ abs[(P(j−1, i−1) + P(j, i−1) + P(j+1, i−1))
− (P(j−1, i+1) + P(j, i+1) + P(j+1, i+1))]   (76)

Ev = abs[2(P(j, i−1) + P(j, i) + P(j, i+1))
− (P(j−2, i−1) + P(j−2, i) + P(j−2, i+1))
− (P(j+2, i−1) + P(j+2, i) + P(j+2, i+1))]
+ abs[(P(j−1, i−1) + P(j−1, i) + P(j−1, i+1))
− (P(j+1, i−1) + P(j+1, i) + P(j+1, i+1))]   (77)
Thus, the total energy sum may be expressed as: Eh + Ev. Further, while the example shown in Figure 77 illustrates the computation of Eh and Ev for a red center pixel at (j, i), it should be understood that the Eh and Ev values may be determined in a similar manner for blue center pixels.
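For concreteness, equations 76 and 77 may be transcribed directly as shown in the sketch below; the raw-array indexing convention is an assumption made for illustration:

def energy_components(raw, j, i):
    """Eh and Ev for the red or blue pixel at the center (j, i) of a 5x5
    window of the raw Bayer data (equations 76 and 77)."""
    P = lambda r, c: float(raw[r][c])
    Eh = (abs(2 * (P(j - 1, i) + P(j, i) + P(j + 1, i))
              - (P(j - 1, i - 2) + P(j, i - 2) + P(j + 1, i - 2))
              - (P(j - 1, i + 2) + P(j, i + 2) + P(j + 1, i + 2)))
          + abs((P(j - 1, i - 1) + P(j, i - 1) + P(j + 1, i - 1))
                - (P(j - 1, i + 1) + P(j, i + 1) + P(j + 1, i + 1))))
    Ev = (abs(2 * (P(j, i - 1) + P(j, i) + P(j, i + 1))
              - (P(j - 2, i - 1) + P(j - 2, i) + P(j - 2, i + 1))
              - (P(j + 2, i - 1) + P(j + 2, i) + P(j + 2, i + 1)))
          + abs((P(j - 1, i - 1) + P(j - 1, i) + P(j - 1, i + 1))
                - (P(j + 1, i - 1) + P(j + 1, i) + P(j + 1, i + 1))))
    return Eh, Ev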
Next, horizontal and vertical filtering may be applied to the Bayer pattern to obtain the horizontally and vertically filtered values Gh and Gv, which may represent interpolated green values in the horizontal and vertical directions, respectively. The filtered values Gh and Gv may be determined using a low-pass filter on known neighboring green samples, in addition to using directional gradients of the adjacent color (R or B) to obtain a high-frequency signal at the locations of the missing green samples. For instance, with reference to Figure 78, an example of horizontal interpolation for determining Gh will now be illustrated.
As shown in Figure 78, five horizontal pixels (R0, G1, R2, G3, and R4) of a red row 1060 of the Bayer image may be considered in determining Gh, wherein R2 is assumed to be the center pixel at (j, i). Filtering coefficients associated with each of these five pixels are indicated by reference numeral 1062. Accordingly, an interpolation of a green value, referred to as G2', for the center pixel R2 may be determined as follows:

G2' = (G1 + G3)/2 + (2R2 − (R0 + R2)/2 − (R2 + R4)/2) / 2   (78)

Various mathematical operations may then be utilized to produce the expressions for G2' shown in equations 79 and 80 below:

G2' = (2G1 + 2G3)/4 + (4R2 − R0 − R2 − R2 − R4)/4   (79)

G2' = (2G1 + 2G3 + 2R2 − R0 − R4)/4   (80)

Thus, with reference to Figure 78 and equations 78-80 above, the general expression for the horizontal interpolation of the green value at (j, i) may be derived as:

Gh = (2P(j, i−1) + 2P(j, i+1) + 2P(j, i) − P(j, i−2) − P(j, i+2)) / 4   (81)
The vertical filtering component Gv may be determined in a manner similar to Gh. For example, referring to Figure 79, five vertical pixels (R0, G1, R2, G3, and R4) of a red column 1064 of the Bayer image and their respective filtering coefficients 1068 may be considered in determining Gv, wherein R2 is assumed to be the center pixel at (j, i). Using low-pass filtering on the known green samples and high-pass filtering on the red channel in the vertical direction, the following expression for Gv may be derived:

Gv = (2P(j−1, i) + 2P(j+1, i) + 2P(j, i) − P(j−2, i) − P(j+2, i)) / 4   (82)
While the examples discussed here have shown the interpolation of green values on a red pixel, it should be understood that the expressions set forth in equations 81 and 82 may also be used in the horizontal and vertical interpolation of green values for blue pixels.
The final interpolated green value G' for the center pixel (j, i) may be determined by weighting the horizontal and vertical filter outputs (Gh and Gv) by the energy components (Eh and Ev) discussed above to yield the following equation:

G'(j, i) = (Ev / (Eh + Ev)) × Gh + (Eh / (Eh + Ev)) × Gv   (83)
As discussed above, the energy components Eh and Ev may provide for edge-adaptive weighting of the horizontal and vertical filter outputs Gh and Gv, which may help to reduce image artifacts, such as rainbow, aliasing, or checkerboard artifacts, in the reconstructed RGB image. Additionally, the demosaic logic 940 may provide an option to bypass the edge-adaptive weighting feature by setting the Eh and Ev values each to 1, such that Gh and Gv are equally weighted.
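Combining equations 81-83, the edge-adaptive green interpolation at a red or blue site may be sketched as follows, reusing the energy_components() helper from the sketch above; the equal-weight fallback on a perfectly flat neighborhood is an added guard for illustration:

def interpolate_green(raw, j, i):
    """Interpolated green value G' at the red or blue pixel (j, i)."""
    P = lambda r, c: float(raw[r][c])
    Gh = (2 * P(j, i - 1) + 2 * P(j, i + 1) + 2 * P(j, i)
          - P(j, i - 2) - P(j, i + 2)) / 4                # eq. 81
    Gv = (2 * P(j - 1, i) + 2 * P(j + 1, i) + 2 * P(j, i)
          - P(j - 2, i) - P(j + 2, i)) / 4                # eq. 82
    Eh, Ev = energy_components(raw, j, i)
    if Eh + Ev == 0:
        return (Gh + Gv) / 2                              # flat region: equal weights
    return (Ev / (Eh + Ev)) * Gh + (Eh / (Eh + Ev)) * Gv  # eq. 83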
In one embodiment, the horizontal and vertical weighting coefficients shown in equation 83 above may be quantized to reduce the precision of the weighting coefficients to a set of "coarse" values. For instance, in one embodiment, the weighting coefficients may be quantized to eight possible weight ratios: 1/8, 2/8, 3/8, 4/8, 5/8, 6/8, 7/8, and 8/8. Other embodiments may quantize the weighting coefficients into 16 values (e.g., 1/16 to 16/16), 32 values (1/32 to 32/32), and so forth. As can be appreciated, when compared to using full-precision values (e.g., 32-bit floating point values), the quantization of the weighting coefficients may reduce the implementation complexity when determining and applying the weighting coefficients to the horizontal and vertical filter outputs.
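As an illustrative example of this quantization, the fragment below rounds the horizontal weight of equation 83 to the nearest of the eight coarse ratios; the clamp to a minimum of 1/8 is an assumption about how the end of the range is handled:

def quantized_horizontal_weight(Eh, Ev, steps=8):
    w = Ev / (Eh + Ev)                                   # full-precision horizontal weight
    return max(1, min(steps, round(w * steps))) / steps  # 1/8, 2/8, ..., 8/8

The vertical weight would then be taken as one minus the quantized horizontal weight, so that the pair still sums to 8/8.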
In other embodiments, in addition to determining and using horizontal and vertical energy components to apply weighting coefficients to the horizontally (Gh) and vertically (Gv) filtered values, the presently disclosed techniques may also determine and utilize energy components in the diagonal-positive and diagonal-negative directions. For instance, in such embodiments, filtering may also be applied in the diagonal-positive and diagonal-negative directions. Weighting of the filter outputs may include selecting the two highest energy components, and using the selected energy components to weight their respective filter outputs. For example, assuming that the two highest energy components correspond to the vertical and diagonal-positive directions, the vertical and diagonal-positive energy components are used to weight the vertical and diagonal-positive filter outputs to determine the interpolated green value (e.g., at a red or blue pixel location in the Bayer pattern).
Next, demosaicing on the red and blue color channels may be performed by interpolating red and blue values at the green pixels of the Bayer image pattern, interpolating red values at the blue pixels of the Bayer image pattern, and interpolating blue values at the red pixels of the Bayer image pattern. In accordance with the techniques discussed here, missing red and blue pixel values may be interpolated using low-pass filtering based upon known neighboring red and blue pixels and high-pass filtering based upon co-located green pixel values, which may be original or interpolated values (from the green channel demosaicing process discussed above) depending on the location of the current pixel. Thus, with regard to such embodiments, it should be understood that interpolation of the missing green values may be performed first, such that a complete set of green values (both original and interpolated values) is available when interpolating the missing red and blue samples.
The interpolation of red and blue pixel values may be described with reference to Figure 80, which illustrates various 3x3 blocks of the Bayer image pattern to which red and blue demosaicing may be applied, as well as the interpolated green values (designated by G') that may have been obtained during demosaicing on the green channel. Referring first to block 1070, the interpolated red value, R'_11, for the Gr pixel (G_11) may be determined as follows:
R'_11 = (R_10 + R_12)/2 + (2G_11 − G'_10 − G'_12)/2,   (84)

where G'_10 and G'_12 represent interpolated green values, as shown by reference numeral 1078. Similarly, the interpolated blue value, B'_11, for the Gr pixel (G_11) may be determined as follows:

B'_11 = (B_01 + B_21)/2 + (2G_11 − G'_01 − G'_21)/2,   (85)

where G'_01 and G'_21 represent interpolated green values (1078).
Next, referring to the pixel block 1072, in which the center pixel is a Gb pixel (G_11), the interpolated red value R'_11 and blue value B'_11 may be determined as shown in equations 86 and 87 below:

R'_11 = (R_01 + R_21)/2 + (2G_11 − G'_01 − G'_21)/2   (86)

B'_11 = (B_10 + B_12)/2 + (2G_11 − G'_10 − G'_12)/2   (87)
Further, referring to pixel block 1074, the interpolation of a red value at the blue pixel, B_11, may be determined as follows:

R'_11 = (R_00 + R_02 + R_20 + R_22)/4 + (4G'_11 − G'_00 − G'_02 − G'_20 − G'_22)/4,   (88)

where G'_00, G'_02, G'_11, G'_20, and G'_22 represent interpolated green values, as shown by reference numeral 1080. Finally, the interpolation of a blue value at a red pixel, as shown by pixel block 1076, may be calculated as follows:

B'_11 = (B_00 + B_02 + B_20 + B_22)/4 + (4G'_11 − G'_00 − G'_02 − G'_20 − G'_22)/4   (89)
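As one concrete instance of these color-difference formulas, the sketch below evaluates equation 84 at a Gr site, given the raw Bayer plane and a completed green plane (original plus interpolated values). The array names and the assumption that (j, i) addresses a Gr pixel with horizontal red neighbors are illustrative:

def red_at_gr(raw, green, j, i):
    """Interpolated red value R' at a Gr pixel (equation 84)."""
    low_pass = (float(raw[j][i - 1]) + float(raw[j][i + 1])) / 2   # (R10 + R12) / 2
    high_pass = (2 * float(green[j][i])
                 - float(green[j][i - 1]) - float(green[j][i + 1])) / 2
    return low_pass + high_pass

The remaining cases of equations 85-89 differ only in which neighbors feed the low-pass term (vertical pairs for equations 85-87, the four diagonals for equations 88 and 89), and in whether the co-sited green value is an original or an interpolated one.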
While the embodiment discussed above relied on color differences (e.g., gradients) for determining the interpolated red and blue values, another embodiment may provide interpolated red and blue values using color ratios. For instance, interpolated green values (blocks 1078 and 1080) may be used to obtain a color ratio at the red and blue pixel locations of the Bayer image pattern, and linear interpolation of the ratios may be used to determine an interpolated color ratio for the missing color sample. The green value, which may be an interpolated or an original value, may be multiplied by the interpolated color ratio to obtain the final interpolated color value. For instance, interpolation of red and blue pixel values using color ratios may be performed in accordance with the formulae below, wherein equations 90 and 91 show the interpolation of red and blue values for a Gr pixel, equations 92 and 93 show the interpolation of red and blue values for a Gb pixel, equation 94 shows the interpolation of a red value at a blue pixel, and equation 95 shows the interpolation of a blue value at a red pixel:
R'_11 = G_11 × ((R_10 / G'_10) + (R_12 / G'_12)) / 2   (90)

(R'_11 interpolated when G_11 is a Gr pixel)

B'_11 = G_11 × ((B_01 / G'_01) + (B_21 / G'_21)) / 2   (91)

(B'_11 interpolated when G_11 is a Gr pixel)

R'_11 = G_11 × ((R_01 / G'_01) + (R_21 / G'_21)) / 2   (92)

(R'_11 interpolated when G_11 is a Gb pixel)

B'_11 = G_11 × ((B_10 / G'_10) + (B_12 / G'_12)) / 2   (93)

(B'_11 interpolated when G_11 is a Gb pixel)

R'_11 = G'_11 × ((R_00 / G'_00) + (R_02 / G'_02) + (R_20 / G'_20) + (R_22 / G'_22)) / 4   (94)

(R'_11 interpolated at the blue pixel B_11)

B'_11 = G'_11 × ((B_00 / G'_00) + (B_02 / G'_02) + (B_20 / G'_20) + (B_22 / G'_22)) / 4   (95)

(B'_11 interpolated at the red pixel R_11)
Once the missing color samples have been interpolated for each image pixel of the Bayer image pattern, a complete sample of color values for each of the red, blue, and green color channels (e.g., 1046, 1048, and 1050 of Figure 76) may be combined to produce a full-color RGB image. For instance, referring back to Figures 49 and 50, the output 910 of the raw pixel processing logic 900 may be an RGB image signal in 8, 10, 12, or 14-bit formats.
Referring now to Figures 81-84, various flow charts illustrating processes for demosaicing a raw Bayer image pattern in accordance with the disclosed embodiments are shown. Specifically, the process 1082 of Figure 81 depicts the determination of which color components are to be interpolated for a given input pixel P. Based upon the determination by the process 1082, one or more of the process 1100 (Figure 82) for interpolating a green value, the process 1112 (Figure 83) for interpolating a red value, or the process 1124 (Figure 84) for interpolating a blue value may be performed (e.g., by the demosaic logic 940).
Beginning with Figure 81, the process 1082 starts at step 1084 when an input pixel P is received. Decision logic 1086 determines the color of the input pixel. For instance, this may depend upon the location of the pixel within the Bayer image pattern. Accordingly, if P is identified as being a green pixel (e.g., Gr or Gb), the process 1082 proceeds to step 1088 to obtain interpolated red and blue values for P. This may include, for example, continuing to the processes 1112 and 1124 of Figures 83 and 84, respectively. If P is identified as being a red pixel, then the process 1082 proceeds to step 1090 to obtain interpolated green and blue values for P. This may include further performing the processes 1100 and 1124 of Figures 82 and 84, respectively. Additionally, if P is identified as being a blue pixel, then the process 1082 proceeds to step 1092 to obtain interpolated green and red values for P. This may include further performing the processes 1100 and 1112 of Figures 82 and 83, respectively. Each of the processes 1100, 1112, and 1124 is discussed further below.
Figure 82 shows the process 1100 for determining an interpolated green value for the input pixel P, which includes steps 1102-1110. At step 1102, the input pixel P is received (e.g., from the process 1082). Next, at step 1104, a set of neighboring pixels forming a 5x5 pixel block is identified, with P being at the center of the 5x5 block. Thereafter, the pixel block is analyzed to determine horizontal and vertical energy components at step 1106. For instance, the horizontal and vertical energy components may be determined in accordance with equations 76 and 77 for calculating Eh and Ev, respectively. As discussed above, the energy components Eh and Ev may be used as weighting coefficients to provide edge-adaptive filtering and, thereby, reduce the appearance of certain demosaicing artifacts in the final image. At step 1108, low-pass filtering and high-pass filtering are applied in the horizontal and vertical directions to determine horizontal and vertical filtering outputs. For example, the horizontal and vertical filtering outputs Gh and Gv may be calculated in accordance with equations 81 and 82. Next, the process continues to step 1110, at which the interpolated green value G' is determined based upon the values of Gh and Gv weighted by the energy components Eh and Ev, as shown in equation 83.
Next, with regard to the process 1112 of Figure 83, the interpolation of red values begins at step 1114, at which the input pixel P is received (e.g., from the process 1082). At step 1116, a set of neighboring pixels forming a 3x3 pixel block is identified, with P being at the center of the 3x3 block. Thereafter, low-pass filtering is applied to the neighboring red pixels within the 3x3 block at step 1118, and high-pass filtering is applied (step 1120) to the co-located green neighboring values, which may be original green values captured by the Bayer image sensor or interpolated values (e.g., as determined by the process 1100 of Figure 82). The interpolated red value R' for P may be determined based upon the low-pass and high-pass filtering outputs, as shown at step 1122. Depending on the color of P, R' may be determined in accordance with one of equations 84, 86, or 88.
With regard to the interpolation of a blue value, process 1124 of Figure 84 may be applied. Steps 1126 and 1128 of process 1124 are generally identical to steps 1114 and 1116 of process 1112 (Figure 83). At step 1130, low-pass filtering is applied to the neighboring blue pixels within the 3×3 block, and, at step 1132, high-pass filtering is applied to the co-located green values, which may be original green values captured by the Bayer image sensor or interpolated values (e.g., determined via process 1100 of Figure 82). The interpolated blue value B′ for P may be determined based on the low-pass and high-pass filtered outputs, as shown at step 1134. Depending on the color of P, B′ may be determined in accordance with one of equations 85, 87 or 89. Further, as mentioned above, the interpolation of red and blue values may be determined using color differences (equations 84-89) or color ratios (equations 90-95). Again, it should be understood that interpolation of missing green values may be performed first, so that a complete set of green values (both original and interpolated) is available when interpolating the missing red and blue samples. For example, process 1100 of Figure 82 may be applied to interpolate all missing green color samples before performing processes 1112 and 1124 of Figures 83 and 84, respectively.
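The red and blue reconstruction can be sketched in the same spirit. The helper below is hypothetical: it assumes the known red samples adjacent to the pixel and their co-sited green values have already been gathered, and it mirrors the color-difference approach of equations 84-89 (a low-pass over the known channel plus a high-pass over the co-sited greens).

```python
import numpy as np

def interp_red_at_green(red_neighbors, green_cosited, g_center):
    """Color-difference red interpolation at a green pixel (sketch).

    red_neighbors : known red samples adjacent to the pixel
    green_cosited : green values co-sited with those red samples
    g_center      : green value at the pixel being interpolated

    Which neighbors participate depends on the pixel's Bayer phase
    (cf. equations 84, 86 and 88); blue is handled symmetrically.
    """
    low_pass = np.mean(red_neighbors)                # average of known reds
    high_pass = g_center - np.mean(green_cosited)    # local green detail
    return low_pass + high_pass
```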
Referring to Figures 85-88, examples of color drawings of images processed by the raw pixel processing logic 900 in the ISP pipe 82 are provided. Figure 85 depicts an original image scene 1140, which may be captured by the image sensor 90 of the imaging device 30. Figure 86 shows a raw Bayer image 1142 that may represent the raw pixel data captured by the image sensor 90. As mentioned above, conventional demosaicing techniques may not provide adaptive filtering based on the detection of edges in the image data (e.g., borders between areas of two or more colors), which may produce undesirable artifacts in the resulting reconstructed full-color RGB image. For instance, Figure 87 shows an RGB image 1144 reconstructed using conventional demosaicing techniques, and it may include artifacts such as the "checkerboard" artifacts 1146 at the edge 1148. However, comparing image 1144 to the RGB image 1150 of Figure 88, which may be an example of an image reconstructed using the demosaicing techniques described above, it can be seen that the checkerboard artifacts 1146 visible in Figure 87 are no longer present, or that their appearance at the edge 1148 is at least substantially reduced. Thus, the images shown in Figures 85-88 are intended to illustrate at least one advantage that the demosaicing techniques disclosed herein have over conventional methods.
Having now thoroughly described the operation of the raw pixel processing logic 900, which may output the RGB image signal 910, the present discussion, referring back to Figure 67, will now focus on the processing of the RGB image signal 910 by the RGB processing logic 902. As shown, the RGB image signal 910 may be sent to the selection logic 914 and/or to the memory 108. The RGB processing logic 902 may receive the input signal 916, which may be RGB image data from the signal 910 or from the memory 108, as shown by the signal 912, depending upon the configuration of the selection logic 914. The RGB image data 916 may be processed by the RGB processing logic 902 to perform color adjustment operations, including color correction (e.g., using a color correction matrix), the application of color gains for auto-white balance, as well as global tone mapping, and so forth.
Figure 89 is a block diagram depicting a more detailed view of an embodiment of the RGB processing logic 902. As shown, the RGB processing logic 902 includes the gain, offset and clamp (GOC) logic 1160, the RGB color correction logic 1162, the GOC logic 1164, the RGB gamma adjustment logic 1166, and the color space conversion logic 1168. The input signal 916 is first received by the gain, offset and clamp (GOC) logic 1160. In the illustrated embodiment, the GOC logic 1160 may apply gains to perform auto-white balancing on one or more of the R, G or B color channels before processing by the color correction logic 1162.
The GOC logic 1160 may be similar to the GOC logic 930 of the raw image processing logic 900, except that the color components of the RGB domain are processed, rather than the R, B, Gr and Gb components of the Bayer image data. In operation, the input value for the current pixel is first offset by a signed value O[c] and multiplied by a gain G[c], as shown in equation 11 above, with c representing R, G and B. As discussed above, the gain G[c] may be a 16-bit unsigned number with 2 integer bits and 14 fraction bits (e.g., a 2.14 floating point representation), and the value of the gain G[c] may be previously determined during statistics processing (e.g., in the ISP front-end block 80). The computed pixel value Y (based on equation 11) is then clipped to a minimum and maximum range in accordance with equation 12. As discussed above, the variables min[c] and max[c] represent signed 16-bit "clipping values" for the minimum and maximum output values, respectively. In one embodiment, the GOC logic 1160 may also be configured to maintain, for each color component R, G and B, a count of the number of pixels that were clipped above the maximum and below the minimum, respectively.
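For illustration, the per-pixel GOC operation might be sketched as follows in integer arithmetic; the function name and rounding behavior are assumptions, but the order of operations (offset, 2.14 gain, clip, clip counting) follows the description above.

```python
def goc(x, offset, gain_2_14, min_c, max_c, clip_counts):
    """Gain, offset and clamp for one pixel of channel c (sketch).

    offset      : signed offset O[c]
    gain_2_14   : unsigned 16-bit gain G[c] with 14 fraction bits
    min_c/max_c : signed 16-bit clipping values
    clip_counts : dict tracking pixels clipped high/low for this channel
    """
    y = ((x + offset) * gain_2_14) >> 14   # apply O[c], then G[c] (eq. 11)
    if y < min_c:                          # clip per equation 12
        clip_counts["low"] += 1
        y = min_c
    elif y > max_c:
        clip_counts["high"] += 1
        y = max_c
    return y
```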
The output of the GOC logic 1160 is then forwarded to the color correction logic 1162. In accordance with the presently disclosed techniques, the color correction logic 1162 may be configured to apply color correction to the RGB image data using a color correction matrix (CCM). In one embodiment, the CCM may be a 3×3 RGB transform matrix, although matrices of other dimensions may also be utilized in other embodiments (e.g., 4×3, etc.). Accordingly, the process of performing color correction on an input pixel having R, G and B components may be expressed as follows:
$$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = \begin{bmatrix} CCM00 & CCM01 & CCM02 \\ CCM10 & CCM11 & CCM12 \\ CCM20 & CCM21 & CCM22 \end{bmatrix} \times \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (96)$$
wherein R, G and B represent the current red, green and blue values of the input pixel, CCM00-CCM22 represent the coefficients of the color correction matrix, and R′, G′ and B′ represent the corrected red, green and blue values of the input pixel. Accordingly, the corrected color values may be computed in accordance with equations 97-99 below:
R′ = (CCM00 × R) + (CCM01 × G) + (CCM02 × B)    (97)
G′ = (CCM10 × R) + (CCM11 × G) + (CCM12 × B)    (98)
B′ = (CCM20 × R) + (CCM21 × G) + (CCM22 × B)    (99)
As discussed above, the coefficients (CCM00-CCM22) of the CCM may be determined during statistics processing in the ISP front-end block 80. In one embodiment, the coefficients for a given color channel may be selected such that the sum of those coefficients (e.g., CCM00, CCM01 and CCM02 for red color correction) equals 1, which may help to maintain brightness and color balance. Further, the coefficients are typically selected such that a positive gain is applied to the color being corrected. For instance, with red color correction, the coefficient CCM00 may be greater than 1, while one or both of the coefficients CCM01 and CCM02 may be less than 1. Setting the coefficients in this manner may enhance the red (R) component in the resultant corrected R′ value, while subtracting some of the blue (B) and green (G) components. As will be appreciated, this may address issues with color overlap that may occur during acquisition of the original Bayer image, as a portion of the filtered light for a pixel of a particular color may "bleed" into a neighboring pixel of a different color. In one embodiment, the coefficients of the CCM may be provided as 16-bit two's-complement numbers with 4 integer bits and 12 fraction bits (expressed as 4.12 floating point). Additionally, the color correction logic 1162 may provide for clipping of the computed corrected color values if those values exceed a maximum value or are below a minimum value.
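A sketch of the per-pixel CCM application is shown below. The coefficient values are invented for illustration; they are merely chosen so that each row sums to 1.0 in 4.12 fixed point, with a positive gain on the channel being corrected, as described above.

```python
import numpy as np

# Hypothetical 4.12 fixed-point CCM: each row sums to 4096 (i.e., 1.0),
# with the diagonal term > 1.0 and negative off-diagonal terms.
CCM_4_12 = np.array([[ 5734, -1229,  -409],
                     [ -819,  5325,  -410],
                     [ -410, -1228,  5734]], dtype=np.int32)

def apply_ccm(rgb, min_val=0, max_val=16383):
    """Color-correct one 14-bit RGB pixel (equations 96-99), with clipping."""
    corrected = (CCM_4_12 @ np.asarray(rgb, dtype=np.int64)) >> 12
    return np.clip(corrected, min_val, max_val)
```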
The output of the RGB color correction logic 1162 is then passed to another GOC logic block 1164. The GOC logic 1164 may be implemented in an identical manner as the GOC logic 1160 and, thus, a detailed description of the gain, offset and clamping functions is not repeated here. In one embodiment, applying the GOC logic 1164 subsequent to color correction may provide for auto-white balancing of the image data based on the corrected color values, and may also adjust for sensor variations of the red-to-green and blue-to-green ratios.
Next, the output of the GOC logic 1164 is sent to the RGB gamma adjustment logic 1166 for further processing. For instance, the RGB gamma adjustment logic 1166 may provide for gamma correction, tone mapping, histogram matching, and so forth. In accordance with disclosed embodiments, the gamma adjustment logic 1166 may provide a mapping of the input RGB values to corresponding output RGB values. For instance, the gamma adjustment logic may provide a set of three lookup tables, one table for each of the R, G and B components. By way of example, each lookup table may be configured to store 256 entries of 10-bit values, each value representing an output level. The table entries may be evenly distributed over the range of the input pixel values, such that when the input value falls between two entries, the output value may be linearly interpolated. In one embodiment, each of the three lookup tables for R, G and B may be duplicated, such that the lookup tables are "double buffered" in memory, thus allowing one table to be used during processing while its duplicate is being updated. Based on the 10-bit output values discussed above, it should be noted that the 14-bit RGB image signal is effectively down-sampled to 10 bits as a result of the gamma correction process in the present embodiment.
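The interpolated table lookup might be sketched as follows, assuming the 256 entries are spread evenly over the 14-bit input range; the spacing and rounding details here are illustrative.

```python
import numpy as np

def gamma_lut_lookup(x14, lut):
    """Map a 14-bit input through a 256-entry 10-bit gamma LUT (sketch).

    Entries are assumed evenly spaced over the 14-bit input range, with
    linear interpolation between the two bracketing entries.
    """
    step = 16384 // 256              # input spacing between entries = 64
    idx = min(x14 // step, 254)      # clamp so a bracketing entry exists
    frac = min(x14 - idx * step, step)
    lo, hi = int(lut[idx]), int(lut[idx + 1])
    return lo + ((hi - lo) * frac) // step

# Example: a rough sRGB-like gamma table with 10-bit output levels.
lut = np.round(1023 * (np.linspace(0, 1, 256) ** (1 / 2.2))).astype(np.int32)
```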
The output of the gamma adjustment logic 1166 may be sent to the memory 108 and/or to the color space conversion logic 1168. The color space conversion (CSC) logic 1168 may be configured to convert the RGB output from the gamma adjustment logic 1166 to the YCbCr format, in which Y represents a luma component, Cb represents a blue-difference chroma component, and Cr represents a red-difference chroma component, each of which may be in a 10-bit format as a result of the bit-depth conversion of the RGB data from 14 bits to 10 bits during the gamma adjustment operation. As discussed above, in one embodiment, the RGB output of the gamma adjustment logic 1166 may be down-sampled to 10 bits and thus converted into 10-bit YCbCr values by the CSC logic 1168, and the 10-bit YCbCr values may then be forwarded to the YCbCr processing logic 904, which is discussed further below.
The conversion from the RGB domain to the YCbCr color space may be performed using a color space conversion matrix (CSCM). For instance, in one embodiment, the CSCM may be a 3×3 transform matrix. The coefficients of the CSCM may be set in accordance with known conversion equations, such as the BT.601 and BT.709 standards. Additionally, the CSCM coefficients may be flexible based on the desired range of input and output. Thus, in some embodiments, the CSCM coefficients may be determined and programmed based on data collected during statistics processing in the ISP front-end block 80.
The process of performing the YCbCr color space conversion on an RGB input pixel may be expressed as follows:
$$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = \begin{bmatrix} CSCM00 & CSCM01 & CSCM02 \\ CSCM10 & CSCM11 & CSCM12 \\ CSCM20 & CSCM21 & CSCM22 \end{bmatrix} \times \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (100)$$
wherein R, G and B represent the current red, green and blue values of the input pixel in 10-bit form (e.g., as processed by the gamma adjustment logic 1166), CSCM00-CSCM22 represent the coefficients of the color space conversion matrix, and Y, Cb and Cr represent the resulting luma and chroma components for the input pixel. Accordingly, the values for Y, Cb and Cr may be computed in accordance with equations 101-103 below:
Y = (CSCM00 × R) + (CSCM01 × G) + (CSCM02 × B)    (101)
Cb = (CSCM10 × R) + (CSCM11 × G) + (CSCM12 × B)    (102)
Cr = (CSCM20 × R) + (CSCM21 × G) + (CSCM22 × B)    (103)
Following the color space conversion operation, the resulting YCbCr values may be output from the CSC logic 1168 as the signal 918, which may be processed by the YCbCr processing logic 904, as will be discussed below.
In one embodiment, the coefficients of the CSCM may be 16-bit two's-complement numbers with 4 integer bits and 12 fraction bits (4.12). In another embodiment, the CSC logic 1168 may further be configured to apply an offset to each of the Y, Cb and Cr values, and to clip the resulting values to a minimum and maximum value. By way of example only, assuming that the YCbCr values are in 10-bit form, the offset may be in a range of -512 to 512, and the minimum and maximum values may be 0 and 1023, respectively.
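As an illustration, the conversion with offset and clipping might look like the sketch below, using BT.601-style coefficients quantized to 4.12 fixed point; in practice the programmed CSCM coefficients would come from the front-end statistics processing, as noted above.

```python
import numpy as np

# BT.601-style CSCM quantized to 4.12 fixed point (illustrative values).
CSCM_4_12 = np.array([[ 1225,  2404,   467],   # Y  ~  0.299  0.587  0.114
                      [ -692, -1356,  2048],   # Cb ~ -0.169 -0.331  0.500
                      [ 2048, -1716,  -332]],  # Cr ~  0.500 -0.419 -0.081
                     dtype=np.int32)

def rgb_to_ycbcr(rgb10, cb_cr_offset=512):
    """Convert one 10-bit RGB pixel to 10-bit YCbCr (equations 100-103)."""
    y, cb, cr = (CSCM_4_12 @ np.asarray(rgb10, dtype=np.int64)) >> 12
    out = np.array([y, cb + cb_cr_offset, cr + cb_cr_offset])
    return np.clip(out, 0, 1023)   # clip to the 10-bit range
```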
Referring again to the block diagram of the ISP pipe logic 82 in Figure 67, the YCbCr signal 918 may be sent to the selection logic 922 and/or to the memory 108. The YCbCr processing logic 904 may receive the input signal 924, which may be YCbCr image data from the signal 918 or from the memory 108, as shown by the signal 920, depending upon the configuration of the selection logic 922. The YCbCr image data 924 may then be processed by the YCbCr processing logic 904 for luma sharpening, chroma suppression, chroma noise reduction, as well as brightness, contrast and color adjustments, and so forth. Further, the YCbCr processing logic 904 may provide gamma mapping and scaling of the processed image data in both the horizontal and vertical directions.
Figure 90 is a block diagram depicting a more detailed view of an embodiment of the YCbCr processing logic 904. As shown, the YCbCr processing logic 904 includes the image sharpening logic 1170, the logic 1172 for adjusting brightness, contrast and/or color, the YCbCr gamma adjustment logic 1174, the chroma decimation logic 1176 and the scaling logic 1178. The YCbCr processing logic 904 may be configured to process pixel data in 4:4:4, 4:2:2 or 4:2:0 formats using 1-plane, 2-plane or 3-plane memory configurations. Further, in one embodiment, the YCbCr input signal 924 may provide luma and chroma information as 10-bit values.
As will be appreciated, the reference to 1 plane, 2 planes or 3 planes refers to the number of imaging planes utilized in picture memory. For instance, in a 3-plane format, each of the Y, Cb and Cr components may utilize a separate respective memory plane. In a 2-plane format, a first plane may be provided for the luma component (Y), and a second plane that interleaves the Cb and Cr samples may be provided for the chroma components (Cb and Cr). In a 1-plane format, a single plane in memory is interleaved with both luma and chroma samples. Further, with regard to the 4:4:4, 4:2:2 and 4:2:0 formats, it may be appreciated that the 4:4:4 format refers to a sampling format in which each of the three YCbCr components is sampled at the same rate. In the 4:2:2 format, the chroma components Cb and Cr are sub-sampled at half the sampling rate of the luma component Y, thus reducing the resolution of the chroma components Cb and Cr by half in the horizontal direction. Similarly, the 4:2:0 format sub-samples the chroma components Cb and Cr in both the vertical and horizontal directions.
The processing of the YCbCr information may occur within an active source region defined within a source buffer, wherein the active source region contains "valid" pixel data. For example, referring to Figure 91, a source buffer 1180 having an active source region 1182 defined therein is illustrated. In the illustrated example, the source buffer may represent a 4:4:4 1-plane format providing source pixels of 10-bit values. The active source region 1182 may be specified individually for luma (Y) samples and chroma samples (Cb and Cr). Thus, it should be understood that the active source region 1182 may actually include multiple active source regions for the luma and chroma samples. The start of the active source regions 1182 for luma and chroma may be determined based on offsets from a base address (0,0) 1184 of the source buffer. For instance, a starting position (Lm_X, Lm_Y) 1186 for the luma active source region may be defined by an x-offset 1190 and a y-offset 1194 relative to the base address 1184. Similarly, a starting position (Ch_X, Ch_Y) 1188 for the chroma active source region may be defined by an x-offset 1192 and a y-offset 1196 relative to the base address 1184. It should be noted that, in the present example, the y-offsets 1194 and 1196 for luma and chroma, respectively, may be equal. Based on the starting position 1186, the luma active source region may be defined by a width 1193 and a height 1200, each of which may represent the number of luma samples in the x and y directions, respectively. Additionally, based on the starting position 1188, the chroma active source region may be defined by a width 1202 and a height 1204, each of which may represent the number of chroma samples in the x and y directions, respectively.
Figure 92 further provides an example showing how the active source regions for luma and chroma samples may be determined in a 2-plane format. For instance, as shown, the luma active source region 1182 may be defined in a first source buffer 1180 (having the base address 1184) by the area specified by the width 1193 and the height 1200 relative to the starting position 1186. A chroma active source region 1208 may be defined in a second source buffer 1206 (having the base address 1184) as the area specified by the width 1202 and the height 1204 relative to the starting position 1188.
With the above points in mind, referring back to Figure 90, the YCbCr signal 924 is first received by the image sharpening logic 1170. The image sharpening logic 1170 may be configured to perform picture sharpening and edge enhancement processing to increase texture and edge details in the image. As will be appreciated, image sharpening may improve the perceived image resolution. However, it is generally desirable that existing noise in the image is not detected as texture and/or edges, and thus not amplified during the sharpening process.
In accordance with the present techniques, the image sharpening logic 1170 may perform picture sharpening on the luma (Y) component of the YCbCr signal using a multi-scale unsharp mask filter. In one embodiment, two or more low-pass Gaussian filters of different scale sizes may be provided. For example, in an embodiment that provides two Gaussian filters, an unsharp mask may be generated by subtracting the output (e.g., Gaussian blurring) of a first Gaussian filter having a first radius (x) from the output of a second Gaussian filter having a second radius (y), wherein x is greater than y. Additional unsharp masks may also be obtained by subtracting the outputs of the Gaussian filters from the Y input. In certain embodiments, the technique may also provide adaptive coring threshold comparison operations that may be performed using the unsharp masks such that, based upon the results of the comparisons, gain amounts may be added to a base image, which may be selected as the original Y input image or the output of one of the Gaussian filters, to generate a final output.
Referring to Figure 93, a block diagram depicting exemplary logic 1210 for performing image sharpening in accordance with embodiments of the presently disclosed techniques is illustrated. The logic 1210 represents a multi-scale unsharp masking filter that may be applied to an input luma image Yin. For instance, as shown, Yin is received and processed by two low-pass Gaussian filters 1212 (G1) and 1214 (G2). In the present example, the filter 1212 may be a 3×3 filter and the filter 1214 may be a 5×5 filter. It should be appreciated, however, that in additional embodiments, more than two Gaussian filters, including filters of different scales, may also be used (e.g., 7×7, 9×9, etc.). As will be appreciated, due to the low-pass filtering process, the high-frequency components, which generally correspond to noise, may be removed from the outputs of G1 and G2 to produce "unsharp" images (G1out and G2out). As will be discussed below, using an unsharp input image as a base image allows for noise reduction as part of the sharpening filter.
The 3×3 Gaussian filter 1212 and the 5×5 Gaussian filter 1214 may be defined as shown below:
$$G1 = \frac{1}{256}\begin{bmatrix} G1_1 & G1_1 & G1_1 \\ G1_1 & G1_0 & G1_1 \\ G1_1 & G1_1 & G1_1 \end{bmatrix} \qquad G2 = \frac{1}{256}\begin{bmatrix} G2_2 & G2_2 & G2_2 & G2_2 & G2_2 \\ G2_2 & G2_1 & G2_1 & G2_1 & G2_2 \\ G2_2 & G2_1 & G2_0 & G2_1 & G2_2 \\ G2_2 & G2_1 & G2_1 & G2_1 & G2_2 \\ G2_2 & G2_2 & G2_2 & G2_2 & G2_2 \end{bmatrix}$$
By way of example only, the values of the Gaussian filters G1 and G2 may be selected in one embodiment as follows:
$$G1 = \frac{1}{256}\begin{bmatrix} 28 & 28 & 28 \\ 28 & 32 & 28 \\ 28 & 28 & 28 \end{bmatrix} \qquad G2 = \frac{1}{256}\begin{bmatrix} 9 & 9 & 9 & 9 & 9 \\ 9 & 12 & 12 & 12 & 9 \\ 9 & 12 & 16 & 12 & 9 \\ 9 & 12 & 12 & 12 & 9 \\ 9 & 9 & 9 & 9 & 9 \end{bmatrix}$$
Based on Yin, G1out and G2out, three unsharp masks, Sharp1, Sharp2 and Sharp3, may be generated. Sharp1 may be determined by subtracting the unsharp image G2out of the Gaussian filter 1214 from the unsharp image G1out of the Gaussian filter 1212. Because Sharp1 is essentially the difference between two low-pass filters, it may be referred to as a "mid band" mask, since the higher-frequency noise components are already filtered out in the G1out and G2out unsharp images. Additionally, Sharp2 may be calculated by subtracting G2out from the input luma image Yin, and Sharp3 may be calculated by subtracting G1out from the input luma image Yin. As will be discussed below, an adaptive coring threshold scheme may be applied using the unsharp masks Sharp1, Sharp2 and Sharp3.
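A minimal sketch of the mask construction, reusing the example G1/G2 coefficients above; scipy's convolve2d stands in for the hardware filters, and the boundary handling is an assumption.

```python
import numpy as np
from scipy.signal import convolve2d

G1 = np.array([[28, 28, 28],
               [28, 32, 28],
               [28, 28, 28]]) / 256.0

G2 = np.array([[9,  9,  9,  9, 9],
               [9, 12, 12, 12, 9],
               [9, 12, 16, 12, 9],
               [9, 12, 12, 12, 9],
               [9,  9,  9,  9, 9]]) / 256.0

def unsharp_masks(yin):
    """Build the three unsharp masks from a luma image (sketch)."""
    g1out = convolve2d(yin, G1, mode="same", boundary="symm")
    g2out = convolve2d(yin, G2, mode="same", boundary="symm")
    sharp1 = g1out - g2out   # "mid band": difference of two low-passes
    sharp2 = yin - g2out     # detail relative to the wider blur
    sharp3 = yin - g1out     # detail relative to the narrow blur
    return g1out, g2out, sharp1, sharp2, sharp3
```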
Referring to the selection logic 1216, a base image may be selected based on a control signal UnsharpSel. In the illustrated embodiment, the base image may be either the input image Yin or the filtered outputs G1out or G2out. As will be appreciated, when an original image has a high noise variance (e.g., nearly as high as the signal variance), using the original image Yin as the base image during sharpening may not sufficiently reduce the noise components. Accordingly, when a particular threshold of noise content is detected in the input image, the selection logic 1216 may be adapted to select one of the low-pass filtered outputs G1out or G2out, from which high-frequency content that may include noise has been reduced. In one embodiment, the value of the control signal UnsharpSel may be determined by analyzing statistical data acquired during statistics processing in the ISP front-end block 80 to determine the noise content of the image. By way of example, if the input image Yin has low noise content, such that the appearance of noise will likely not increase as a result of the sharpening process, the input image Yin may be selected as the base image (e.g., UnsharpSel = 0). If the input image Yin is determined to contain a noticeable level of noise, such that the sharpening process may amplify the noise, one of the filtered images G1out or G2out may be selected (e.g., UnsharpSel = 1 or 2, respectively). Thus, by applying an adaptive technique for selecting the base image, the logic 1210 essentially provides a noise reduction function.
Next, in accordance with the adaptive coring threshold scheme described above, gains may be applied to one or more of the Sharp1, Sharp2 or Sharp3 masks. Then, the unsharp values Sharp1, Sharp2 and Sharp3 may be compared to various thresholds SharpThd1, SharpThd2 and SharpThd3 (not necessarily respectively) via the comparator blocks 1218, 1220 and 1222. For instance, the Sharp1 value is always compared to SharpThd1 at the comparator block 1218. With respect to the comparator block 1220, the threshold SharpThd2 may be compared against either Sharp1 or Sharp2, depending upon the selection logic 1226. For instance, the selection logic 1226 may select Sharp1 or Sharp2 depending on the state of a control signal SharpCmp2 (e.g., SharpCmp2 = 1 selects Sharp1; SharpCmp2 = 0 selects Sharp2). For example, in one embodiment, the state of SharpCmp2 may be determined depending on the noise variance/content of the input image (Yin).
In the illustrated embodiment, it is generally preferable to set the SharpCmp2 and SharpCmp3 values to select Sharp1, unless it is detected that the image data has relatively low amounts of noise. This is because Sharp1, being the difference between the outputs of the Gaussian low-pass filters G1 and G2, is generally less sensitive to noise and, thus, may help reduce the amount by which the SharpAmt1, SharpAmt2 and SharpAmt3 values vary due to noise level fluctuations in "noisy" image data. For instance, if the original image has a high noise variance, some of the high-frequency components may not be caught when using fixed thresholds and, thus, may be amplified during the sharpening process. Accordingly, if the noise content of the input image is high, some of the noise content may be present in Sharp2. In such instances, SharpCmp2 may be set to 1 to select the mid-band mask Sharp1 which, as discussed above, has reduced high-frequency content due to being the difference of two low-pass filter outputs, and is therefore less sensitive to noise.
As will be appreciated, a similar process may be applied to the selection of either Sharp1 or Sharp3 performed by the selection logic 1224 under the control of SharpCmp3. In one embodiment, SharpCmp2 and SharpCmp3 may be set to 1 by default (e.g., use Sharp1), and set to 0 only for those input images that are identified as having generally low noise variances. This essentially provides an adaptive coring threshold scheme in which the selection of the comparison value (Sharp1, Sharp2 or Sharp3) is adaptive based upon the noise variance of the input image.
Based on the outputs of the comparator blocks 1218, 1220 and 1222, the sharpened output image Ysharp may be determined by applying gained unsharp masks to the base image (e.g., as selected via the logic 1216). For instance, referring first to the comparator block 1222, SharpThd3 is compared against the B-input provided by the selection logic 1224, referred to herein as "SharpAbs", which may be equal to either Sharp1 or Sharp3 depending upon the state of SharpCmp3. If SharpAbs is greater than the threshold SharpThd3, the gain SharpAmt3 is applied to Sharp3, and the resulting value is added to the base image. If SharpAbs is less than the threshold SharpThd3, an attenuated gain Att3 may be applied. In one embodiment, the attenuated gain Att3 may be determined as follows:
$$Att3 = \frac{SharpAmt3 \times SharpAbs}{SharpThd3} \qquad (104)$$
wherein SharpAbs is either Sharp1 or Sharp3, as determined by the selection logic 1224. The selection of the base image summed with either the full gain (SharpAmt3) or the attenuated gain (Att3) is performed by the selection logic 1228 based upon the output of the comparator block 1222. As will be appreciated, the use of an attenuated gain may address situations in which SharpAbs is not greater than the threshold (e.g., SharpThd3), but the noise variance of the image is nonetheless close to the given threshold. This may help to reduce noticeable transitions between sharp and unsharp pixels. For instance, if the image data is passed without the attenuated gain in such circumstances, the resulting pixel may appear as a defective pixel (e.g., a bright spot).
Next, a similar process may be applied with respect to the comparator block 1220. For instance, depending on the state of SharpCmp2, the selection logic 1226 may provide either Sharp1 or Sharp2 as the input to the comparator block 1220 that is compared against the threshold SharpThd2. Depending on the output of the comparator block 1220, either the gain SharpAmt2 or an attenuated gain Att2 based upon SharpAmt2 is applied to Sharp2 and added to the output of the selection logic 1228 discussed above. As will be appreciated, the attenuated gain Att2 may be computed in a manner similar to equation 104 above, except that the gain SharpAmt2 and the threshold SharpThd2 are applied with respect to SharpAbs, which may be selected as either Sharp1 or Sharp2.
Thereafter, a gain SharpAmt1 or an attenuated gain Att1 is applied to Sharp1, and the resulting value is summed with the output of the selection logic 1230 to produce the sharpened pixel output Ysharp (from the selection logic 1232). The selection of applying either the gain SharpAmt1 or the attenuated gain Att1 may be determined based upon the output of the comparator block 1218, which compares Sharp1 against the threshold SharpThd1. Again, the attenuated gain Att1 may be determined in a manner similar to equation 104 above, except that the gain SharpAmt1 and the threshold SharpThd1 are applied with respect to Sharp1. The resulting sharpened pixel values scaled using each of the three masks are added to the input pixel Yin to generate the sharpened output Ysharp which, in one embodiment, may be clipped to 10 bits (assuming that YCbCr processing occurs at 10-bit precision).
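Putting the three comparisons together, one pixel of the sharpening datapath might be sketched as below. The gains and thresholds are arbitrary example values, and the SharpCmp selections and the base image are fixed at the defaults described above (compare against Sharp1, base = Yin).

```python
def mask_contribution(sharp, sharp_abs, amt, thd):
    """One mask's contribution: full gain above threshold, attenuated below.

    Below threshold the gain fades linearly with SharpAbs (equation 104),
    softening the transition between sharpened and unsharpened pixels.
    """
    gain = amt if abs(sharp_abs) > thd else amt * abs(sharp_abs) / thd
    return gain * sharp

def sharpen_pixel(yin, sharp1, sharp2, sharp3,
                  amts=(1.0, 0.5, 0.25), thds=(8, 16, 32)):
    """Sharpened output for one pixel (sketch), assuming the defaults
    SharpCmp2 = SharpCmp3 = 1 (compare against Sharp1) and Yin as base."""
    y = yin
    y += mask_contribution(sharp3, sharp1, amts[2], thds[2])
    y += mask_contribution(sharp2, sharp1, amts[1], thds[1])
    y += mask_contribution(sharp1, sharp1, amts[0], thds[0])
    return min(max(int(y), 0), 1023)   # clip to 10 bits
```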
As will be appreciated, when compared to conventional unsharp masking techniques, the image sharpening techniques set forth in this disclosure may provide for improved enhancement of textures and edges while also reducing noise in the output image. In particular, the present techniques may be well-suited for applications in which images captured using, for example, CMOS image sensors exhibit poor signal-to-noise ratios, such as images acquired under low lighting conditions using lower-resolution cameras integrated into portable devices (e.g., mobile phones). For instance, when the noise variance and the signal variance are comparable, it is difficult to use a fixed threshold for sharpening, as some of the noise components would be sharpened along with textures and edges. Accordingly, the techniques provided herein, as discussed above, may filter the noise from the input image using multi-scale Gaussian filters to extract features from the unsharp images (e.g., G1out and G2out), thus providing a sharpened image that also exhibits reduced noise content.
Before continuing, it should be understood that the illustrated logic 1210 is intended to provide only one exemplary embodiment of the present techniques. In other embodiments, additional or fewer features may be provided by the image sharpening logic 1170. For instance, in some embodiments, rather than applying an attenuated gain, the logic 1210 may simply pass the base value. Additionally, some embodiments may not include the selection logic blocks 1224, 1226 or 1216. For instance, the comparator blocks 1220 and 1222 may simply receive the Sharp2 and Sharp3 values, respectively, rather than a selection output from the selection logic blocks 1224 and 1226, respectively. While such embodiments may not provide sharpening and/or noise reduction features that are as robust as the implementation shown in Figure 93, it should be appreciated that such design choices may be the result of cost and/or business-related constraints.
In the present embodiment, once the sharpened image output YSharp is obtained, the image sharpening logic 1170 may also provide edge enhancement and chroma suppression features. Each of these additional features is discussed below. Referring first to Figure 94, exemplary logic 1234 for performing edge enhancement that may be implemented downstream from the sharpening logic 1210 of Figure 93 is illustrated in accordance with one embodiment. As shown, the original input value Yin is processed by a Sobel filter 1236 for edge detection. The Sobel filter 1236 may determine a gradient value YEdge based upon a 3×3 pixel block of the original image (referred to as "A" below), with Yin being the center pixel of the 3×3 block. In one embodiment, the Sobel filter 1236 may calculate YEdge by convolving the original image data to detect changes in the horizontal and vertical directions. This process is shown below in equations 105-107:
$$S_x = \begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix} \qquad S_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$$
Gx = Sx × A    (105)
Gy = Sy × A    (106)
YEdge = Gx × Gy    (107)
wherein Sx and Sy represent matrix operators for gradient edge-strength detection in the horizontal and vertical directions, respectively, and wherein Gx and Gy represent gradient images that contain horizontal and vertical change derivatives, respectively. Accordingly, the output YEdge may be determined as the product of Gx and Gy.
YEdge is then received by the selection logic 1240 along with the mid-band Sharp1 mask, as discussed above in Figure 93. Based on a control signal EdgeCmp, either Sharp1 or YEdge is compared against a threshold EdgeThd at the comparator block 1238. The state of EdgeCmp may be determined, for example, based upon the noise content of the image, thus providing an adaptive coring threshold scheme for edge detection and enhancement. Next, the output of the comparator block 1238 may be provided to the selection logic 1242, and either a full gain or an attenuated gain may be applied. For instance, when the selected B-input to the comparator block 1238 (Sharp1 or YEdge) is above EdgeThd, YEdge is multiplied by an edge gain EdgeAmt to determine the amount of edge enhancement to be applied. If the B-input at the comparator block 1238 is less than EdgeThd, an attenuated edge gain AttEdge may be applied to avoid noticeable transitions between the edge-enhanced pixel and the original pixel. As will be appreciated, AttEdge may be calculated in a manner similar to that shown in equation 104 above, but wherein EdgeAmt and EdgeThd are applied to "SharpAbs", which may be Sharp1 or YEdge depending upon the output of the selection logic 1240. Thus, the edge pixel, enhanced using either the gain (EdgeAmt) or the attenuated gain (AttEdge), may be added to YSharp (the output of the logic 1210 of Figure 93) to obtain the edge-enhanced output pixel Yout which, in one embodiment, may be clipped to 10 bits (assuming that YCbCr processing occurs at 10-bit precision).
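A sketch of this edge-enhancement path follows; it reuses the gain/attenuation pattern of equation 104 with EdgeAmt and EdgeThd, and assumes YEdge (rather than Sharp1) is used as the comparison value.

```python
import numpy as np

SX = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]])
SY = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])

def enhance_edge(block3x3, ysharp, edge_amt, edge_thd):
    """Edge-enhance the center pixel of a 3x3 luma block (sketch).

    YEdge is the product of the horizontal and vertical Sobel responses
    (equations 105-107); the gain fades below EdgeThd as in equation 104.
    """
    gx = int(np.sum(SX * block3x3))
    gy = int(np.sum(SY * block3x3))
    yedge = gx * gy
    gain = edge_amt if abs(yedge) > edge_thd else edge_amt * abs(yedge) / edge_thd
    yout = ysharp + gain * yedge
    return min(max(int(yout), 0), 1023)   # clip to 10 bits
```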
With regard to the chroma suppression features provided by the image sharpening logic 1170, such features may attenuate chroma at luma edges. Generally, chroma suppression may be performed by applying a chroma gain (attenuation factor) of less than 1, which depends on the value (YSharp, Yout) obtained from the luma sharpening and/or edge enhancement steps discussed above. By way of example, Figure 95 shows a graph 1250 that includes a curve 1252 representing chroma gains that may be selected for corresponding sharpened luma values (YSharp). The data represented by the graph 1250 may be implemented as a lookup table of YSharp values and corresponding chroma gains between 0 and 1 (attenuation factors). The lookup table is used to approximate the curve 1252. For YSharp values located between two attenuation factors in the lookup table, linear interpolation may be applied to the two attenuation factors corresponding to the YSharp values above and below the current YSharp value. Further, in other embodiments, the input luma value may also be selected as one of the Sharp1, Sharp2 or Sharp3 values determined by the logic 1210, as discussed above in Figure 93, or the YEdge value determined by the logic 1234, as discussed in Figure 94.
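One plausible realization of the attenuation lookup is sketched below. The knot values standing in for curve 1252 are invented, and the sketch assumes the attenuation factor is applied to the chroma samples about their 512 mid-point.

```python
import numpy as np

# Illustrative attenuation curve: full chroma in flat areas, stronger
# suppression as the sharpened luma grows (stands in for curve 1252).
YSHARP_KNOTS = np.array([0, 64, 128, 256, 512, 1023])
CHROMA_GAINS = np.array([1.0, 1.0, 0.9, 0.7, 0.5, 0.4])

def suppress_chroma(cb, cr, ysharp, mid=512):
    """Attenuate chroma toward neutral based on sharpened luma (sketch)."""
    gain = np.interp(ysharp, YSHARP_KNOTS, CHROMA_GAINS)  # linear interp
    return mid + gain * (cb - mid), mid + gain * (cr - mid)
```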
Thereafter, the output of the image sharpening logic 1170 (Figure 90) is processed by the brightness, contrast and color (BCC) adjustment logic 1172. Figure 96 depicts a functional block diagram of an embodiment of the BCC adjustment logic 1172. As shown, the logic 1172 includes a brightness and contrast processing block 1262, a global hue control block 1264 and a saturation control block 1266. The presently illustrated embodiment provides for processing of the YCbCr data at 10-bit precision, although other embodiments may utilize different bit depths. The functions of each of the blocks 1262, 1264 and 1266 are discussed below.
Referring first to the brightness and contrast processing block 1262, an offset, YOffset, is first subtracted from the luma (Y) data to set the black level to zero. This is done to ensure that the contrast adjustment does not alter the black levels. Next, the luma value is multiplied by a contrast gain value to apply contrast control. By way of example, the contrast gain value may be a 12-bit unsigned value with 2 integer bits and 10 fraction bits, thus providing for a contrast gain range of up to 4 times the pixel value. Thereafter, brightness adjustment may be implemented by adding (or subtracting) a brightness offset value to (or from) the luma data. By way of example, the brightness offset in the present embodiment may be a 10-bit two's-complement value having a range of between -512 to +512. Further, it should be noted that brightness adjustment is performed subsequent to contrast adjustment in order to avoid varying the DC offset when changing contrast. Thereafter, the initial YOffset is added back to the adjusted luma data to re-position the black level.
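The ordering matters (offset removal, contrast gain, brightness offset, offset restore), and can be sketched as follows; the default YOffset value here is an assumption.

```python
def brightness_contrast(y, contrast_2_10, brightness, y_offset=64):
    """Brightness/contrast for one 10-bit luma sample (sketch).

    contrast_2_10 : unsigned 12-bit gain, 2 integer / 10 fraction bits
    brightness    : signed offset in [-512, +512]
    """
    y = y - y_offset                 # zero the black level first
    y = (y * contrast_2_10) >> 10    # contrast gain (up to 4x)
    y = y + brightness               # brightness after contrast keeps
                                     # the DC offset stable
    y = y + y_offset                 # restore the black level
    return min(max(y, 0), 1023)
```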
Blocks 1264 and 1266 provide for color adjustment based upon hue characteristics of the Cb and Cr data. As shown, an offset of 512 (assuming 10-bit processing) is first subtracted from the Cb and Cr data to position the range to approximately zero. The hue is then adjusted in accordance with the following equations:
Cb_adj = Cb·cos(θ) + Cr·sin(θ)    (108)
Cr_adj = Cr·cos(θ) − Cb·sin(θ)    (109)
wherein Cb_adj and Cr_adj represent the adjusted Cb and Cr values, and wherein θ represents a hue angle, which may be calculated as follows:
$$\theta = \arctan\left(\frac{Cr}{Cb}\right) \qquad (110)$$
The above operations are depicted by the logic within the global hue control block 1264, and may be represented by the following matrix operation:
$$\begin{bmatrix} Cb_{adj} \\ Cr_{adj} \end{bmatrix} = \begin{bmatrix} Ka & Kb \\ -Kb & Ka \end{bmatrix} \begin{bmatrix} Cb \\ Cr \end{bmatrix} \qquad (111)$$
wherein Ka = cos(θ), Kb = sin(θ), and θ is defined in equation 110 above.
Next, saturation control may be applied to the Cb_adj and Cr_adj values, as shown by the saturation control block 1266. In the illustrated embodiment, saturation control is performed by applying a global saturation multiplier and a hue-based saturation multiplier for each of the Cb and Cr values. Hue-based saturation control may improve the reproduction of colors. The hue of the color may be represented in the YCbCr color space, as shown by the color wheel graph 1270 in Figure 97. As will be appreciated, the YCbCr hue and saturation color wheel 1270 may be derived by shifting the identical color wheel in the HSV color space (hue, saturation and intensity) by approximately 109 degrees. As shown, the graph 1270 includes circumferential values representing the saturation multiplier (S) within a range of 0 to 1, as well as angular values representing θ, as defined above, within a range of between 0° to 360°. Each θ may represent a different color (e.g., 49° = magenta, 109° = red, 229° = green, etc.). The hue of the color at a particular hue angle θ may be adjusted by selecting an appropriate saturation multiplier S.
Referring back to Figure 96, the hue angle θ (calculated in the global hue control block 1264) may be used as an index for a Cb saturation lookup table 1268 and a Cr saturation lookup table 1269. In one embodiment, the saturation lookup tables 1268 and 1269 may contain 256 saturation values distributed evenly in the hue range from 0-360° (e.g., the first lookup table entry is at 0° and the last entry is at 360°), and the saturation value S at a given pixel may be determined via linear interpolation of the saturation values in the lookup table just below and above the current hue angle θ. A final saturation value for each of the Cb and Cr components is obtained by multiplying a global saturation value (which may be a global constant for each of Cb and Cr) by the determined hue-based saturation value. Thus, the final corrected Cb′ and Cr′ values may be determined by multiplying Cb_adj and Cr_adj by their respective final saturation values, as shown in the hue-based saturation control block 1266.
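Blocks 1264 and 1266 together might be sketched as below, with hue rotation followed by the global and hue-indexed saturation multipliers; the 256-entry tables here are flat placeholders, and atan2 is used as a numerically robust stand-in for equation 110.

```python
import math

# Placeholder 256-entry hue-indexed saturation tables (flat = no change).
CB_SAT_LUT = [1.0] * 256
CR_SAT_LUT = [1.0] * 256

def hue_saturation(cb, cr, hue_deg, global_sat=(1.0, 1.0)):
    """Hue rotation (eqs. 108-111) plus saturation control (sketch)."""
    cb, cr = cb - 512, cr - 512                       # center about zero
    ka = math.cos(math.radians(hue_deg))
    kb = math.sin(math.radians(hue_deg))
    cb_adj = ka * cb + kb * cr                        # equation 108
    cr_adj = ka * cr - kb * cb                        # equation 109

    theta = math.degrees(math.atan2(cr_adj, cb_adj)) % 360.0  # cf. eq. 110
    idx = theta / 360.0 * 255.0
    lo, frac = int(idx), idx - int(idx)               # linear interpolation
    hi = min(lo + 1, 255)
    cb_s = CB_SAT_LUT[lo] + frac * (CB_SAT_LUT[hi] - CB_SAT_LUT[lo])
    cr_s = CR_SAT_LUT[lo] + frac * (CR_SAT_LUT[hi] - CR_SAT_LUT[lo])

    cb_out = cb_adj * global_sat[0] * cb_s + 512      # global x hue-based
    cr_out = cr_adj * global_sat[1] * cr_s + 512
    return cb_out, cr_out
```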
Thereafter, the output of the BCC logic 1172 is passed to the YCbCr gamma adjustment logic 1174, as shown in Figure 90. In one embodiment, the gamma adjustment logic 1174 may provide non-linear mapping functions for the Y, Cb and Cr channels. For instance, the input Y, Cb and Cr values are mapped to corresponding output values. Again, assuming that the YCbCr data is processed at 10 bits, an interpolated 10-bit 256-entry lookup table may be utilized. Three such lookup tables may be provided, one for each of the Y, Cb and Cr channels. Each of the 256 input entries may be evenly distributed and, for inputs falling between two entries, an output may be determined by linear interpolation of the output values mapped to the indices just above and below the current input index. In some embodiments, a non-interpolated lookup table having 1024 entries (for 10-bit data) may also be utilized, although it may have significantly greater memory requirements. As will be appreciated, by adjusting the output values of the lookup tables, the YCbCr gamma adjustment function may also be used to achieve certain image filter effects, such as black and white, sepia tone, negative images, solarization, and so forth.
Next, chroma decimation may be applied by the chroma decimation logic 1176 to the output of the gamma adjustment logic 1174. In one embodiment, the chroma decimation logic 1176 may be configured to perform horizontal decimation to convert the YCbCr data from a 4:4:4 format to a 4:2:2 format, in which the chroma (Cb and Cr) information is sub-sampled at half the rate of the luma data. By way of example only, decimation may be performed by applying a 7-tap low-pass filter, such as a half-band lanczos filter, to a set of 7 horizontal pixels, as shown below:
$$Out = \frac{C0 \cdot in(i-3) + C1 \cdot in(i-2) + C2 \cdot in(i-1) + C3 \cdot in(i) + C4 \cdot in(i+1) + C5 \cdot in(i+2) + C6 \cdot in(i+3)}{512} \qquad (112)$$
wherein in(i) represents the input pixel (Cb or Cr), and C0-C6 represent the filter coefficients of the 7-tap filter. Each input pixel has an independent filter coefficient (C0-C6) to allow for flexible phase offsets for the chroma filtered samples.
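A sketch of the horizontal decimation of equation 112 follows. The half-band-style taps are representative only; they merely sum to 512 so that the shift by 9 bits normalizes the output.

```python
import numpy as np

# Representative half-band-style taps that sum to 512 (example only).
C = np.array([-12, 0, 140, 256, 140, 0, -12], dtype=np.int64)

def decimate_chroma_row(chroma_row):
    """4:4:4 -> 4:2:2 horizontal decimation of one Cb or Cr row (eq. 112)."""
    padded = np.pad(chroma_row.astype(np.int64), 3, mode="edge")
    out = []
    for i in range(0, len(chroma_row), 2):        # keep every other sample
        window = padded[i : i + 7]                # in(i-3) .. in(i+3)
        out.append(int(np.dot(C, window)) >> 9)   # divide by 512
    return np.array(out)
```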
Further, chroma decimation may, in some instances, also be performed without filtering. This may be useful when the source image was originally received in 4:2:2 format, but was up-sampled to 4:4:4 format for YCbCr processing. In this case, the resulting decimated 4:2:2 image is identical to the original image.
Subsequently, the YCbCr data output from the chroma decimation logic 1176 may be scaled using the scaling logic 1178 prior to being output from the YCbCr processing block 904. The function of the scaling logic 1178 may be similar to the functionality of the scaling logic 368, 370 in the binning compensation filter 300 of the front-end pixel processing unit 130, as discussed above with reference to Figure 28. For instance, the scaling logic 1178 may perform horizontal and vertical scaling as two steps. In one embodiment, a 5-tap polyphase filter may be used for vertical scaling, and a 9-tap polyphase filter may be used for horizontal scaling. The multi-tap polyphase filters may multiply pixels selected from the source image by a weighting factor (e.g., a filter coefficient), and then sum the outputs to form the destination pixel. The selected pixels may be chosen depending on the current pixel position and the number of filter taps. For instance, with a vertical 5-tap filter, two neighboring pixels on each vertical side of the current pixel may be selected and, with a horizontal 9-tap filter, four neighboring pixels on each horizontal side of the current pixel may be selected. The filter coefficients may be provided from a lookup table, and may be determined by the current between-pixel fractional position. The output 926 of the scaling logic 1178 is then output from the YCbCr processing block 904.
Returning back to Figure 67, the processed output signal 926 may be sent to the memory 108, or may be output from the ISP pipe processing logic 82 as the image signal 114 to display hardware (e.g., the display 28) for viewing by a user, or to a compression engine (e.g., the encoder 118). In some embodiments, the image signal 114 may be further processed by a graphics processing unit and/or a compression engine, and may be stored before being decompressed and provided to a display. Additionally, one or more frame buffers may also be provided to control the buffering of the image data being output to a display, particularly with respect to video image data.
As will be understood, the various image processing techniques described above, which relate to defective pixel detection and correction, lens shading correction, demosaicing and image sharpening, among others, are provided herein by way of example only. Accordingly, it should be understood that the present disclosure should not be construed as being limited to only the examples provided above. Indeed, the exemplary logic depicted herein may be subject to a number of variations and/or additional features in other embodiments. Further, it should be appreciated that the above-discussed techniques may be implemented in any suitable manner. For instance, the components of the image processing circuitry 32, and particularly the ISP front-end block 80 and the ISP pipe block 82, may be implemented using hardware (e.g., suitably configured circuitry), software (e.g., via a computer program including executable code stored on one or more tangible computer readable media), or via a combination of both hardware and software elements.
The specific embodiments described above have been shown by way of example only, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed herein, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure.

Claims (23)

1. An image signal processing system, comprising:
a front-end pixel processing unit configured to receive a frame of raw image data comprising pixels, the frame being acquired using an imaging device having a digital image sensor, wherein the front-end pixel processing unit comprises a statistics collection engine having auto-focus statistics logic, the auto-focus statistics logic being configured to process the raw image data to collect coarse auto-focus statistics and fine auto-focus statistics, wherein the coarse auto-focus statistics are based at least partly upon image data that is down-sampled relative to the raw image data and are produced using RGB gradients or using a transform of the raw image data with a Scharr operator, and wherein the fine auto-focus statistics are based upon the raw image data; and
control logic configured to determine an optimal focal position of a lens of the imaging device using coarse auto-focus scores based upon the coarse auto-focus statistics and fine auto-focus scores based upon the fine auto-focus statistics, and to adjust the focal position of the lens between a minimum position and a maximum position defining an entire focal length to reach the optimal focal position based at least partly upon the coarse auto-focus scores and the fine auto-focus scores.
2. The image signal processing system of claim 1, wherein the control logic is configured to determine the optimal focal position of the lens by:
stepping the focal position along the entire focal length in a first direction starting at the minimum position and ending at the maximum position, through a plurality of coarse score positions with respect to the coarse auto-focus scores;
determining a coarse auto-focus score for each of the plurality of coarse score positions;
identifying which of the plurality of coarse score positions has a corresponding coarse auto-focus score that exhibits a decrease relative to the coarse auto-focus score corresponding to the previous coarse score position;
stepping the focal position backward from the identified coarse score position, in a second direction opposite the first direction and toward the minimum position, through a plurality of fine score positions with respect to the fine auto-focus scores;
determining a fine auto-focus score for each of the plurality of fine score positions; and
identifying which of the plurality of fine score positions corresponds to a peak in the fine auto-focus scores, and setting the identified fine score position as the optimal focal position.
3. The image signal processing system of claim 2, wherein a step size between each of the plurality of coarse score positions is greater than a step size between each of the plurality of fine score positions.
4. The image signal processing system of claim 2, wherein a step size between each of the plurality of coarse score positions is variable based at least partly upon a magnitude of change in the coarse auto-focus scores corresponding to adjacent coarse score positions.
5. The image signal processing system of claim 4, wherein the step size between the coarse score positions decreases as the magnitude of change in the coarse auto-focus scores corresponding to the coarse score positions decreases.
6. The image signal processing system of claim 2, wherein the control logic is configured to adjust the focal position of the lens using a coil, and wherein the control logic steps through the coarse score positions along the entire focal length to account for the effects of coil settling time.
7. The image signal processing system of claim 1, wherein the auto-focus statistics logic is configured to provide the coarse auto-focus statistics by applying first and second filters to at least one of decimated raw image data or camera luma values derived from the decimated raw image data, and to provide the fine auto-focus statistics by applying third and fourth filters to luma values obtained by applying a transform to the raw image data or by applying horizontal filtering to the raw image data.
8. The image signal processing system of claim 7, wherein the coarse auto-focus score at each coarse position is determined based at least partly upon the outputs of the first and second filters, and wherein the fine auto-focus score at each fine position is determined based at least partly upon the outputs of the third and fourth filters.
9. The image signal processing system of claim 7, wherein the first and second filters for filtering the camera luma values comprise 3×3 filters based on a Scharr operator.
10. A method for determining an optimal focal position using auto-focus statistics, comprising:
determining coarse auto-focus scores with respect to down-sampled image data at different steps along a focal length of a lens of an image capture device, the coarse auto-focus scores being produced using RGB gradients or using a transform of the raw image data with a Scharr operator;
identifying the step at which the corresponding coarse auto-focus score exhibits a decrease relative to the previous step;
identifying an optimal focus area near a focal position based at least partly upon the step at which the corresponding auto-focus score exhibits the decrease; and
analyzing fine auto-focus scores with respect to raw image data within the optimal focus area to determine the optimal focal position of the lens.
11. The method of claim 10, wherein analyzing the fine auto-focus scores within the optimal focus area to determine the optimal focal position comprises searching for the focal position within the optimal focus area that provides a maximum fine auto-focus score.
12. The method of claim 10, wherein the coarse auto-focus scores and the fine auto-focus scores are based at least partly upon white-balanced luma derived from Bayer RGB data.
13. The method of claim 12, wherein the white-balanced luma values used for the coarse auto-focus scores are derived from decimated Bayer RGB data.
14. An electronic device, comprising:
an imaging device comprising a digital image sensor and a lens;
an interface configured to communicate with the digital image sensor;
a memory device;
a display device configured to display a visual representation of an image scene corresponding to raw image data acquired by the digital image sensor; and
an image signal processing subsystem comprising a front-end pixel processing unit configured to receive a frame of raw image data comprising pixels, the frame being acquired using the imaging device having the digital image sensor, wherein the front-end pixel processing unit comprises statistics collection logic having auto-focus statistics logic, the auto-focus statistics logic being configured to process the raw image data to collect the following statistics:
coarse auto-focus statistics based at least partly upon image data that is down-sampled relative to the raw image data, the coarse auto-focus statistics being produced using RGB gradients or using a transform of the raw image data with a Scharr operator; and
fine auto-focus statistics based upon the raw image data; and
control logic configured to determine an optimal focal position of the lens of the imaging device using coarse auto-focus scores and fine auto-focus scores based respectively upon the coarse auto-focus statistics and the fine auto-focus statistics, wherein the control logic determines the optimal focal position of the lens by: determining a coarse auto-focus score for each of a plurality of coarse score positions with respect to the coarse auto-focus scores along the entire focal length in a first direction; identifying which of the plurality of coarse score positions has a corresponding coarse auto-focus score that exhibits a decrease relative to the coarse auto-focus score corresponding to the previous coarse score position; stepping the focal position from the identified coarse score position in a second direction opposite the first direction through one or more fine score positions with respect to the fine auto-focus scores and searching for a peak in the fine auto-focus scores; and setting the focal position corresponding to the peak as the optimal focal position.
15. The electronic device of claim 14, wherein a step size between each of the plurality of coarse score positions is substantially constant.
16. The electronic device of claim 15, wherein a step size between each of the fine score positions is substantially constant, but less than the step size between each of the coarse score positions.
17. The electronic device of claim 14, wherein the sensor interface comprises a Standard Mobile Imaging Architecture (SMIA) interface.
18. The electronic device of claim 14, comprising at least one of a desktop computer, a laptop computer, a tablet computer, a mobile cellular telephone, a portable electronic device, or any combination thereof.
19. The electronic device of claim 14, wherein the digital image sensor comprises at least one of a digital camera integrated with the electronic device, an external digital camera coupled to the electronic device via the interface, or some combination thereof.
20. An apparatus for determining an optimal focus position using auto-focus statistics, comprising:
means for determining coarse auto-focus scores at different steps along the focal length of a lens of an image capture device, the coarse auto-focus scores being based on coarse auto-focus statistics that rely at least in part on a down-sampled version of the raw image data, the coarse auto-focus statistics being produced using RGB gradients or a transformation of the raw image data using the Scharr operator (a Scharr-based score of this kind is sketched after this claim);
means for identifying the step at which the corresponding auto-focus score declines relative to the previous step;
means for identifying an optimal focus area near that focus position; and
means for analyzing fine auto-focus scores within the optimal focus area to determine the optimal focus position of the lens, the fine auto-focus scores being based on fine auto-focus statistics derived from the raw image data.
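As a rough illustration of a Scharr-based coarse statistic, the Python sketch below down-samples a luma plane by block averaging and sums the absolute gradient energy as a sharpness score. The down-sampling factor, the block-averaging choice, and the score normalization are assumptions for illustration, not parameters from the patent.

import numpy as np
from scipy.signal import convolve2d

# 3x3 Scharr kernels (horizontal and vertical derivatives).
SCHARR_X = np.array([[ -3, 0,  3],
                     [-10, 0, 10],
                     [ -3, 0,  3]], dtype=np.float32)
SCHARR_Y = SCHARR_X.T

def coarse_af_score(luma, factor=4):
    # Down-sample by simple block averaging (illustrative choice).
    h = luma.shape[0] // factor * factor
    w = luma.shape[1] // factor * factor
    small = luma[:h, :w].reshape(h // factor, factor,
                                 w // factor, factor).mean(axis=(1, 3))
    # Horizontal and vertical Scharr gradients over the valid region.
    gx = convolve2d(small, SCHARR_X, mode="valid")
    gy = convolve2d(small, SCHARR_Y, mode="valid")
    # Total absolute gradient energy serves as the coarse sharpness score.
    return float(np.abs(gx).sum() + np.abs(gy).sum())

A sharper image produces stronger edges and thus a larger score; one common rationale for computing it on down-sampled data is that the coarse pass stays cheap and less sensitive to pixel-level noise.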
21. The apparatus of claim 20, wherein the means for analyzing fine auto-focus scores within the optimal focus area to determine the optimal focus position comprises means for searching the optimal focus area for the focus position that yields the maximum fine auto-focus score.
22. The apparatus of claim 20, wherein the coarse and fine auto-focus scores are based on white-balanced luma values derived from Bayer RGB data.
23. The apparatus of claim 22, wherein the white-balanced luma values used for the coarse auto-focus score are derived from down-sampled Bayer RGB data.
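Claims 22 and 23 base the scores on a white-balanced luma derived from Bayer RGB data, down-sampled for the coarse score. As a minimal sketch, the Python code below collapses each RGGB quad of a Bayer frame into one white-balanced luma sample; the white-balance gains and the Rec. 601-style luma weights are assumptions, not values taken from the patent.

import numpy as np

def white_balanced_luma(bayer, gains=(1.9, 1.0, 1.6)):
    # Assumes an RGGB mosaic with even dimensions; gains are hypothetical.
    r_gain, g_gain, b_gain = gains
    r = bayer[0::2, 0::2] * r_gain                               # R sites
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) * 0.5 * g_gain   # Gr/Gb average
    b = bayer[1::2, 1::2] * b_gain                               # B sites
    # Rec. 601-style luma weights used as a stand-in.
    return 0.299 * r + 0.587 * g + 0.114 * b

Collapsing each 2x2 quad halves the resolution in both dimensions, so the result is already a down-sampled luma plane of the kind the coarse score could operate on.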
CN201110399082.8A 2010-09-01 2011-09-01 Auto-focus control using image statistics data with coarse and fine auto-focus scores Active CN102572265B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/873,978 2010-09-01
US12/873,978 US9398205B2 (en) 2010-09-01 2010-09-01 Auto-focus control using image statistics data with coarse and fine auto-focus scores

Publications (2)

Publication Number Publication Date
CN102572265A CN102572265A (en) 2012-07-11
CN102572265B true CN102572265B (en) 2016-02-24

Family

ID=44533240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110399082.8A Active CN102572265B (en) Auto-focus control using image statistics data with coarse and fine auto-focus scores

Country Status (11)

Country Link
US (1) US9398205B2 (en)
EP (1) EP2599301A1 (en)
CN (1) CN102572265B (en)
AU (1) AU2011296296B2 (en)
BR (1) BR112013005140A2 (en)
HK (1) HK1173294A1 (en)
IN (1) IN2013CN01958A (en)
MX (1) MX2013002455A (en)
RU (1) RU2543974C2 (en)
TW (1) TW201216207A (en)
WO (1) WO2012030617A1 (en)

Families Citing this family (95)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8488055B2 (en) 2010-09-30 2013-07-16 Apple Inc. Flash synchronization using image sensor interface timing signal
KR20120118383A (en) * 2011-04-18 2012-10-26 삼성전자주식회사 Image compensation device, image processing apparatus and methods thereof
KR101615332B1 (en) * 2012-03-06 2016-04-26 삼성디스플레이 주식회사 Pixel arrangement structure for organic light emitting display device
US10832616B2 (en) 2012-03-06 2020-11-10 Samsung Display Co., Ltd. Pixel arrangement structure for organic light emitting diode display
US11089247B2 (en) 2012-05-31 2021-08-10 Apple Inc. Systems and method for reducing fixed pattern noise in image data
US9025867B2 (en) 2012-05-31 2015-05-05 Apple Inc. Systems and methods for YCC image processing
US9142012B2 (en) 2012-05-31 2015-09-22 Apple Inc. Systems and methods for chroma noise reduction
US9105078B2 (en) 2012-05-31 2015-08-11 Apple Inc. Systems and methods for local tone mapping
US8953882B2 (en) 2012-05-31 2015-02-10 Apple Inc. Systems and methods for determining noise statistics of image data
US9332239B2 (en) 2012-05-31 2016-05-03 Apple Inc. Systems and methods for RGB image processing
US8917336B2 (en) 2012-05-31 2014-12-23 Apple Inc. Image signal processing involving geometric distortion correction
US9743057B2 (en) 2012-05-31 2017-08-22 Apple Inc. Systems and methods for lens shading correction
US9014504B2 (en) 2012-05-31 2015-04-21 Apple Inc. Systems and methods for highlight recovery in an image signal processor
US8817120B2 (en) 2012-05-31 2014-08-26 Apple Inc. Systems and methods for collecting fixed pattern noise statistics of image data
US9031319B2 (en) 2012-05-31 2015-05-12 Apple Inc. Systems and methods for luma sharpening
US8872946B2 (en) 2012-05-31 2014-10-28 Apple Inc. Systems and methods for raw image processing
US9077943B2 (en) 2012-05-31 2015-07-07 Apple Inc. Local image statistics collection
WO2014001844A1 (en) 2012-06-27 2014-01-03 Nokia Corporation Imaging and sensing during an auto-focus procedure
TWI511549B (en) * 2012-07-12 2015-12-01 Acer Inc Image processing method and image processing system
JP2014064251A (en) * 2012-09-24 2014-04-10 Toshiba Corp Solid state imaging device and imaging method
TWI495862B (en) 2012-10-04 2015-08-11 Pixart Imaging Inc Method of testing image sensor and realted apparatus thereof
TWI470300B (en) * 2012-10-09 2015-01-21 Univ Nat Cheng Kung Image focusing method and auto-focusing microscopic apparatus
TWI461812B (en) * 2012-10-19 2014-11-21 Method for automatically focusing applied to camera module
US8782558B1 (en) * 2012-11-28 2014-07-15 Advanced Testing Technologies, Inc. Method, program and arrangement for highlighting failing elements of a visual image
US9443290B2 (en) * 2013-04-15 2016-09-13 Apple Inc. Defringing RAW images
CN103235397B * 2013-04-28 2016-05-25 华为技术有限公司 Automatic focusing method and device
US9443292B2 (en) 2013-06-26 2016-09-13 Apple Inc. Blind defringing for color images
US9088708B2 (en) 2013-07-19 2015-07-21 Htc Corporation Image processing device and method for controlling the same
US9183620B2 (en) 2013-11-21 2015-11-10 International Business Machines Corporation Automated tilt and shift optimization
US9310603B2 (en) 2013-12-17 2016-04-12 Htc Corporation Image capture system with embedded active filtering, and image capturing method for the same
CN104980717A (en) * 2014-04-02 2015-10-14 深圳市先河***技术有限公司 Image processing device, video photographing lens, video camera body and video camera equipment
CN105100550A (en) * 2014-04-21 2015-11-25 展讯通信(上海)有限公司 Shadow correction method and device and imaging system
US9619862B2 (en) 2014-05-30 2017-04-11 Apple Inc. Raw camera noise reduction using alignment mapping
CN104079837B * 2014-07-17 2018-03-30 广东欧珀移动通信有限公司 Focusing method and device based on an image sensor
US10417525B2 (en) * 2014-09-22 2019-09-17 Samsung Electronics Co., Ltd. Object recognition with reduced neural network weight precision
CN105635554B (en) * 2014-10-30 2018-09-11 展讯通信(上海)有限公司 Auto-focusing control method and device
CN104469168A (en) * 2014-12-29 2015-03-25 信利光电股份有限公司 Shooting module and automatic focusing method thereof
FR3035720B1 (en) * 2015-04-30 2017-06-23 Thales Sa OPTICAL SYSTEM AND METHOD FOR LASER POINTING ACROSS THE ATMOSPHERE
CN105049723B * 2015-07-13 2017-11-21 南京工程学院 Automatic focusing method based on qualitative analysis of defocus amount differences
JP6600217B2 (en) * 2015-09-30 2019-10-30 キヤノン株式会社 Image processing apparatus, image processing method, imaging apparatus, and control method thereof
CN106921828B * 2015-12-25 2019-09-17 北京展讯高科通信技术有限公司 Method and device for calculating auto-focus statistics
CN105657279A (en) * 2016-02-29 2016-06-08 广东欧珀移动通信有限公司 Control method, control device and electronic device
US9699371B1 (en) * 2016-03-29 2017-07-04 Sony Corporation Image processing system with saliency integration and method of operation thereof
US10602111B2 (en) 2016-09-06 2020-03-24 Apple Inc. Auto white balance control algorithm based upon flicker frequency detection
CN106534676B * 2016-11-02 2019-03-26 西安电子科技大学 Autofocus adjustment method for zoom camera systems
US11042161B2 (en) 2016-11-16 2021-06-22 Symbol Technologies, Llc Navigation control method and apparatus in a mobile automation system
US11093896B2 (en) 2017-05-01 2021-08-17 Symbol Technologies, Llc Product status detection system
US11449059B2 (en) 2017-05-01 2022-09-20 Symbol Technologies, Llc Obstacle detection for a mobile automation apparatus
CN110603533A (en) 2017-05-01 2019-12-20 讯宝科技有限责任公司 Method and apparatus for object state detection
US11600084B2 (en) 2017-05-05 2023-03-07 Symbol Technologies, Llc Method and apparatus for detecting and interpreting price label text
CN107277348B (en) * 2017-06-16 2019-08-16 Oppo广东移动通信有限公司 Focusing method, device, computer readable storage medium and mobile terminal
CN108009987B (en) * 2017-12-01 2021-08-20 中国科学院长春光学精密机械与物理研究所 Image zooming device and zooming method
US10306152B1 (en) * 2018-02-14 2019-05-28 Himax Technologies Limited Auto-exposure controller, auto-exposure control method and system based on structured light
IL264937B (en) * 2018-02-25 2022-09-01 Orbotech Ltd Range differentiators for auto-focusing in optical imaging systems
CN109064513B (en) * 2018-08-14 2021-09-21 深圳中科精工科技有限公司 Six-degree-of-freedom automatic calibration algorithm in camera packaging
CN109274885B (en) * 2018-09-11 2021-03-26 广东智媒云图科技股份有限公司 Fine adjustment method for photographing
CN108989690B (en) * 2018-09-28 2020-07-17 深圳市盛世生物医疗科技有限公司 Multi-mark-point focusing method, device, equipment and storage medium for linear array camera
US11010920B2 (en) 2018-10-05 2021-05-18 Zebra Technologies Corporation Method, system and apparatus for object detection in point clouds
US11506483B2 (en) 2018-10-05 2022-11-22 Zebra Technologies Corporation Method, system and apparatus for support structure depth determination
US20200118786A1 (en) * 2018-10-15 2020-04-16 Applied Materials, Inc. System and method for selective autofocus
US11003188B2 (en) 2018-11-13 2021-05-11 Zebra Technologies Corporation Method, system and apparatus for obstacle handling in navigational path generation
US11090811B2 (en) 2018-11-13 2021-08-17 Zebra Technologies Corporation Method and apparatus for labeling of support structures
US11079240B2 (en) 2018-12-07 2021-08-03 Zebra Technologies Corporation Method, system and apparatus for adaptive particle filter localization
US11416000B2 (en) 2018-12-07 2022-08-16 Zebra Technologies Corporation Method and apparatus for navigational ray tracing
KR102575126B1 (en) * 2018-12-26 2023-09-05 주식회사 엘엑스세미콘 Image precessing device and method thereof
CA3028708A1 (en) 2018-12-28 2020-06-28 Zih Corp. Method, system and apparatus for dynamic loop closure in mapping trajectories
CN110349371B (en) * 2019-03-15 2020-12-11 青田县君翔科技有限公司 Safety monitoring type wireless communication system
KR102609559B1 (en) 2019-04-10 2023-12-04 삼성전자주식회사 Image sensors including shared pixels
US11960286B2 (en) 2019-06-03 2024-04-16 Zebra Technologies Corporation Method, system and apparatus for dynamic task sequencing
US11402846B2 (en) 2019-06-03 2022-08-02 Zebra Technologies Corporation Method, system and apparatus for mitigating data capture light leakage
US11341663B2 (en) * 2019-06-03 2022-05-24 Zebra Technologies Corporation Method, system and apparatus for detecting support structure obstructions
US11080566B2 (en) 2019-06-03 2021-08-03 Zebra Technologies Corporation Method, system and apparatus for gap detection in support structures with peg regions
US11200677B2 (en) 2019-06-03 2021-12-14 Zebra Technologies Corporation Method, system and apparatus for shelf edge detection
US11662739B2 (en) 2019-06-03 2023-05-30 Zebra Technologies Corporation Method, system and apparatus for adaptive ceiling-based localization
US11151743B2 (en) 2019-06-03 2021-10-19 Zebra Technologies Corporation Method, system and apparatus for end of aisle detection
US10805549B1 (en) * 2019-08-20 2020-10-13 Himax Technologies Limited Method and apparatus of auto exposure control based on pattern detection in depth sensing system
CN110530291A * 2019-08-26 2019-12-03 珠海博明视觉科技有限公司 Auto-focus algorithm for grating-projection height reconstruction
US11507103B2 (en) 2019-12-04 2022-11-22 Zebra Technologies Corporation Method, system and apparatus for localization-based historical obstacle handling
US11107238B2 (en) 2019-12-13 2021-08-31 Zebra Technologies Corporation Method, system and apparatus for detecting item facings
EP3879811B1 (en) 2020-03-09 2021-12-15 Axis AB Determining whether a camera is out-of-focus
US11822333B2 (en) 2020-03-30 2023-11-21 Zebra Technologies Corporation Method, system and apparatus for data capture illumination control
US11356626B2 (en) 2020-04-22 2022-06-07 Omnivision Technologies, Inc. Flexible exposure control for image sensors with phase detection auto focus pixels
EP3940633B1 (en) 2020-05-29 2023-08-16 Beijing Xiaomi Mobile Software Co., Ltd. Nanjing Branch Image alignment method and apparatus, electronic device, and storage medium
US11450024B2 (en) 2020-07-17 2022-09-20 Zebra Technologies Corporation Mixed depth object detection
EP4194944A4 (en) * 2020-09-11 2024-04-03 Siemens Ltd. China Method and apparatus for realizing focusing of industrial camera
US11593915B2 (en) 2020-10-21 2023-02-28 Zebra Technologies Corporation Parallax-tolerant panoramic image generation
US11392891B2 (en) 2020-11-03 2022-07-19 Zebra Technologies Corporation Item placement detection and optimization in material handling systems
US11847832B2 (en) 2020-11-11 2023-12-19 Zebra Technologies Corporation Object classification for autonomous navigation systems
CN115037867B (en) * 2021-03-03 2023-12-01 Oppo广东移动通信有限公司 Shooting method, shooting device, computer readable storage medium and electronic equipment
US11954882B2 (en) 2021-06-17 2024-04-09 Zebra Technologies Corporation Feature-based georegistration for mobile computing devices
US11539875B1 (en) * 2021-08-27 2022-12-27 Omnivision Technologies Inc. Image-focusing method and associated image sensor
US11683598B1 (en) 2022-02-24 2023-06-20 Omnivision Technologies, Inc. Image sensor with on-chip occlusion detection and methods thereof
CN114500859B (en) * 2022-04-13 2022-08-02 国仪量子(合肥)技术有限公司 Automatic focusing method, photographing apparatus, and storage medium
CN115641368B (en) * 2022-10-31 2024-06-04 安徽农业大学 Out-of-focus checkerboard image feature extraction method for calibration
CN117452619B (en) * 2023-12-26 2024-03-05 西华大学 Sparse target microscopic imaging automatic focusing method, system and storage medium

Family Cites Families (77)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4589089A (en) 1978-05-30 1986-05-13 Bally Manufacturing Corporation Computer-peripheral interface for a game apparatus
US4475172A (en) 1978-05-30 1984-10-02 Bally Manufacturing Corporation Audio/visual home computer and game apparatus
US4799677A (en) 1983-09-02 1989-01-24 Bally Manufacturing Corporation Video game having video disk read only memory
US4979738A (en) 1983-12-06 1990-12-25 Midway Manufacturing Corporation Constant spatial data mass RAM video display system
US4742543A (en) 1983-12-22 1988-05-03 Frederiksen Jeffrey E Video transmission system
US4605961A (en) 1983-12-22 1986-08-12 Frederiksen Jeffrey E Video transmission system using time-warp scrambling
US4682360A (en) 1983-12-22 1987-07-21 Frederiksen Jeffrey E Video transmission system
US4694489A (en) 1983-12-22 1987-09-15 Frederiksen Jeffrey E Video transmission system
US4743959A (en) 1986-09-17 1988-05-10 Frederiksen Jeffrey E High resolution color video image acquisition and compression system
WO1991002428A1 (en) 1989-08-08 1991-02-21 Sanyo Electric Co., Ltd Automatically focusing camera
US5227863A (en) 1989-11-14 1993-07-13 Intelligent Resources Integrated Systems, Inc. Programmable digital video processing system
US5272529A (en) 1992-03-20 1993-12-21 Northwest Starscan Limited Partnership Adaptive hierarchical subband vector quantization encoder
US5247355A (en) 1992-06-11 1993-09-21 Northwest Starscan Limited Partnership Gridlocked method and system for video motion compensation
EP0658263B1 (en) 1992-09-01 2003-11-05 Apple Computer, Inc. Improved vector quantization
US6122411A (en) 1994-02-16 2000-09-19 Apple Computer, Inc. Method and apparatus for storing high and low resolution images in an imaging device
US5694227A (en) 1994-07-15 1997-12-02 Apple Computer, Inc. Method and apparatus for calibrating and adjusting a color imaging system
US5764291A (en) 1994-09-30 1998-06-09 Apple Computer, Inc. Apparatus and method for orientation-dependent camera exposure and focus setting optimization
US5496106A (en) 1994-12-13 1996-03-05 Apple Computer, Inc. System and method for generating a contrast overlay as a focus assist for an imaging device
US5640613A (en) 1995-04-14 1997-06-17 Apple Computer, Inc. Corrective lens assembly
US6011585A (en) 1996-01-19 2000-01-04 Apple Computer, Inc. Apparatus and method for rotating the display orientation of a captured image
US5867214A (en) 1996-04-11 1999-02-02 Apple Computer, Inc. Apparatus and method for increasing a digital camera image capture rate by delaying image processing
US5809178A (en) 1996-06-11 1998-09-15 Apple Computer, Inc. Elimination of visible quantizing artifacts in a digital image utilizing a critical noise/quantizing factor
US6031964A (en) 1996-06-20 2000-02-29 Apple Computer, Inc. System and method for using a unified memory architecture to implement a digital camera device
US6157394A (en) 1996-08-29 2000-12-05 Apple Computer, Inc. Flexible digital image processing via an image processing chain with modular image processors
US5991465A (en) 1996-08-29 1999-11-23 Apple Computer, Inc. Modular digital image processing via an image processing chain with modifiable parameter controls
US6028611A (en) 1996-08-29 2000-02-22 Apple Computer, Inc. Modular digital image processing via an image processing chain
US5790705A (en) 1996-09-13 1998-08-04 Apple Computer, Inc. Compression techniques for substantially lossless digital image data storage
US6141044A (en) 1996-09-26 2000-10-31 Apple Computer, Inc. Method and system for coherent image group maintenance in memory
US6198514B1 (en) 1998-02-27 2001-03-06 Apple Computer, Inc. Color misconvergence measurement using a common monochrome image
US6151415A (en) 1998-12-14 2000-11-21 Intel Corporation Auto-focusing algorithm using discrete wavelet transform
JP2001281529A (en) 2000-03-29 2001-10-10 Minolta Co Ltd Digital camera
US6954193B1 (en) 2000-09-08 2005-10-11 Apple Computer, Inc. Method and apparatus for correcting pixel level intensity variation
US6745012B1 (en) 2000-11-17 2004-06-01 Telefonaktiebolaget Lm Ericsson (Publ) Adaptive data compression in a wireless telecommunications system
EP1368689B1 (en) 2001-02-02 2006-06-14 Cellomics, Inc. Method for estimating the best initial focus position
US6959044B1 (en) 2001-08-21 2005-10-25 Cisco Systems Canada Co. Dynamic GOP system and method for digital video encoding
US7170938B1 (en) 2001-08-21 2007-01-30 Cisco Systems Canada Co. Rate control method for video transcoding
US7277595B1 (en) 2003-01-06 2007-10-02 Apple Inc. Method and apparatus for digital image manipulation to remove image blemishes
US20040165090A1 (en) * 2003-02-13 2004-08-26 Alex Ning Auto-focus (AF) lens and process
US7310371B2 (en) 2003-05-30 2007-12-18 Lsi Corporation Method and/or apparatus for reducing the complexity of H.264 B-frame encoding using selective reconstruction
US7327786B2 (en) 2003-06-02 2008-02-05 Lsi Logic Corporation Method for improving rate-distortion performance of a video compression system through parallel coefficient cancellation in the transform
JP4022828B2 (en) 2003-06-30 2007-12-19 カシオ計算機株式会社 Imaging apparatus, autofocus control method, and autofocus control program
US7324595B2 (en) 2003-09-22 2008-01-29 Lsi Logic Corporation Method and/or apparatus for reducing the complexity of non-reference frame encoding using selective reconstruction
US7170529B2 (en) * 2003-10-24 2007-01-30 Sigmatel, Inc. Image processing
US7602849B2 (en) 2003-11-17 2009-10-13 Lsi Corporation Adaptive reference picture selection based on inter-picture motion measurement
US7362804B2 (en) 2003-11-24 2008-04-22 Lsi Logic Corporation Graphical symbols for H.264 bitstream syntax elements
US7362376B2 (en) 2003-12-23 2008-04-22 Lsi Logic Corporation Method and apparatus for video deinterlacing and format conversion
US7345708B2 (en) 2003-12-23 2008-03-18 Lsi Logic Corporation Method and apparatus for video deinterlacing and format conversion
US7304681B2 (en) * 2004-01-21 2007-12-04 Hewlett-Packard Development Company, L.P. Method and apparatus for continuous focus and exposure in a digital imaging device
US7515765B1 (en) 2004-01-30 2009-04-07 Apple Inc. Image sharpness management
US7231587B2 (en) 2004-03-29 2007-06-12 Lsi Corporation Embedded picture PSNR/CRC data in compressed video bitstream
US7620103B2 (en) 2004-12-10 2009-11-17 Lsi Corporation Programmable quantization dead zone and threshold for standard-based H.264 and/or VC1 video encoding
US7492958B2 (en) 2005-01-05 2009-02-17 Nokia Corporation Digital imaging with autofocus
US7612804B1 (en) 2005-02-15 2009-11-03 Apple Inc. Methods and apparatuses for image processing
US7480452B2 (en) * 2005-03-15 2009-01-20 Winbond Electronics Corporation System and method for image capturing
US7949044B2 (en) 2005-04-12 2011-05-24 Lsi Corporation Method for coefficient bitdepth limitation, encoder and bitstream generation apparatus
US8031766B2 (en) 2005-08-02 2011-10-04 Lsi Corporation Performance adaptive video encoding with concurrent decoding
US8208540B2 (en) 2005-08-05 2012-06-26 Lsi Corporation Video bitstream transcoding method and apparatus
US8155194B2 (en) 2005-08-05 2012-04-10 Lsi Corporation Method and apparatus for MPEG-2 to H.264 video transcoding
US7903739B2 (en) 2005-08-05 2011-03-08 Lsi Corporation Method and apparatus for VC-1 to MPEG-2 video transcoding
US8045618B2 (en) 2005-08-05 2011-10-25 Lsi Corporation Method and apparatus for MPEG-2 to VC-1 video transcoding
US7881384B2 (en) 2005-08-05 2011-02-01 Lsi Corporation Method and apparatus for H.264 to MPEG-2 video transcoding
US7596280B2 (en) 2005-09-29 2009-09-29 Apple Inc. Video acquisition with integrated GPU processing
TWI285500B (en) 2005-11-11 2007-08-11 Primax Electronics Ltd Auto focus method for digital camera
KR100691245B1 (en) * 2006-05-11 2007-03-12 삼성전자주식회사 Method for compensating lens position error in mobile terminal
CN1892401A 2006-05-25 2007-01-10 南京大学 Multi-stage automatic focusing method based on time-frequency domain analysis of captured iris images
CN100451807C * 2006-08-22 2009-01-14 宁波大学 Automatic focusing method for digital micro-imaging based on the Hadamard transform
CN100451808C 2006-08-24 2009-01-14 宁波大学 Automatic focusing method for digital imaging based on the Contourlet transform
CN100543576C 2006-08-24 2009-09-23 宁波大学 Automatic focusing method for digital imaging based on the Contourlet transform
US7773127B2 (en) 2006-10-13 2010-08-10 Apple Inc. System and method for RAW image processing
US7893975B2 (en) 2006-10-13 2011-02-22 Apple Inc. System and method for processing images using predetermined tone reproduction curves
JP4254841B2 (en) 2006-10-20 2009-04-15 ソニー株式会社 Imaging apparatus, imaging method, image processing apparatus, image processing method, and image processing program
TWI324015B (en) 2006-12-22 2010-04-21 Ind Tech Res Inst Autofocus searching method
EP1988705A1 (en) * 2007-04-30 2008-11-05 STMicroelectronics (Research & Development) Limited Improvements in or relating to image sensors
JP2009192960A (en) * 2008-02-16 2009-08-27 Sanyo Electric Co Ltd Electronic camera
US8405727B2 (en) 2008-05-01 2013-03-26 Apple Inc. Apparatus and method for calibrating image capture devices
JP5132507B2 (en) * 2008-09-26 2013-01-30 オリンパス株式会社 Focus detection apparatus and camera system
RU2389050C1 (en) 2008-10-30 2010-05-10 Общество с ограниченной ответственностью Научно-Производственная компания "СенсорИС" Automatic focusing method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1646964A (en) * 2002-08-07 2005-07-27 松下电器产业株式会社 Focusing device
CN101496413A (en) * 2006-08-01 2009-07-29 高通股份有限公司 Real-time capturing and generating stereo images and videos with a monoscopic low power mobile device

Also Published As

Publication number Publication date
AU2011296296B2 (en) 2015-08-27
WO2012030617A1 (en) 2012-03-08
AU2011296296A1 (en) 2013-03-21
TW201216207A (en) 2012-04-16
BR112013005140A2 (en) 2016-05-10
US9398205B2 (en) 2016-07-19
HK1173294A1 (en) 2013-05-10
MX2013002455A (en) 2013-08-29
RU2543974C2 (en) 2015-03-10
US20120051730A1 (en) 2012-03-01
IN2013CN01958A (en) 2015-07-31
CN102572265A (en) 2012-07-11
RU2013114371A (en) 2014-10-10
EP2599301A1 (en) 2013-06-05

Similar Documents

Publication Publication Date Title
CN102572265B (en) Auto-focus control using image statistics data with coarse and fine auto-focus scores
CN102404582B (en) Flexible color space selection for auto-white balance processing
CN102640184B (en) Temporal filtering techniques for image signal processing
CN102640499B (en) System and method for demosaicing image data using weighted gradients
CN102640489B (en) System and method for detecting and correcting defective pixels in an image sensor
CN102523456B (en) Dual image sensor image processing system and method
CN102572443B (en) Techniques for synchronizing audio and video data in an image signal processing system
CN104902250B (en) Flash synchronization using image sensor interface timing signal
CN102547301B (en) System and method for processing image data using an image signal processor
CN102572316B (en) Overflow control techniques for image signal processing
TWI492609B (en) Image signal processor line buffer configuration for processing raw image data
CN103327218A (en) System and method for image processing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1173294

Country of ref document: HK

C14 Grant of patent or utility model
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1173294

Country of ref document: HK