EP2649788A1 - Auto-focus image system - Google Patents

Auto-focus image system

Info

Publication number
EP2649788A1
Authority
EP
European Patent Office
Prior art keywords
edge
gradient
image
gradients
focus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP11748716.5A
Other languages
German (de)
English (en)
French (fr)
Inventor
Hiok Nam Tay
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/IB2010/055641 external-priority patent/WO2011070513A1/en
Application filed by Individual filed Critical Individual
Publication of EP2649788A1 publication Critical patent/EP2649788A1/en
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/36Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/32Means for focusing
    • G03B13/34Power focusing
    • G03B13/36Autofocus systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/673Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method

Definitions

  • the phase difference method includes splitting an incoming image into two images that are captured by separate image sensors. The two images are compared to determine a phase difference. The focus position is adjusted until the two images match.
  • the phase difference method requires additional parts such as a beam splitter and an extra image sensor.
  • the phase difference approach analyzes a relatively small band of fixed detection points. Having a small group of detection points is prone to error because noise may be superimposed onto one or more points. This technique is also ineffective if the detection points do not coincide with an image edge.
  • because the phase difference method splits the light, the amount of light that impinges on a light sensor is cut in half or even more. This can be problematic in dim settings where the image light intensity is already low.
  • FIG. 6A, 6B are illustrations of a calculation of an edge width of a vertical edge having a slant angle φ;
  • FIG. 9B is a graph of an image signal across FIG. 9A;
  • FIG. 24C shows the positive gradients from FIG. 23B and illustrates a length of a segment of the gradient profile between two gradient levels, an area of a region vertically between the segment and the lower gradient level, and a width of a base of the region;
  • FIG. 24D shows the positive gradients from FIG. 23B and illustrates a method for estimating the first derivative.
  • a gradient, i.e. a first derivative
  • the edge detector uses a first-order edge detection operator
  • There are various methods available to calculate the gradient, including using any one of various first-order edge detection operators such as the Sobel operator, the Prewitt operator, the Roberts Cross operator, and the Roberts operator.
  • the Roberts operator has two kernels which are single column or single row matrices: [-1 +1] and its transpose.
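  • As a hedged illustration (not part of the patent text), the following Python/NumPy sketch shows how such first-order gradients might be computed with the Sobel kernels and with the two-tap [-1, +1] kernel referred to above as the Roberts operator; the function names and the toy image are assumptions made for this example.
```python
import numpy as np
from scipy.ndimage import correlate

# 3x3 Sobel kernels for horizontal (x) and vertical (y) first derivatives.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_gradients(image):
    """Return (gx, gy), first-derivative estimates of a 2-D image."""
    img = np.asarray(image, dtype=float)
    gx = correlate(img, SOBEL_X, mode="nearest")
    gy = correlate(img, SOBEL_Y, mode="nearest")
    return gx, gy

def two_tap_gradient(row):
    """[-1, +1] kernel: difference of adjacent samples, with the resulting
    gradient located midway between the two samples."""
    return np.diff(np.asarray(row, dtype=float))

if __name__ == "__main__":
    # Toy image: a vertical edge blurred over a few columns.
    img = np.tile([0.0, 0.0, 10.0, 40.0, 80.0, 100.0, 100.0], (5, 1))
    gx, _ = sobel_gradients(img)
    print(gx[2])                    # strong response across the vertical edge
    print(two_tap_gradient(img[2]))
```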
  • Edge width may be calculated in any one of known methods.
  • One method of calculating edge width is simply counting the number of pixels within an edge.
  • In FIG. 5, a first fractional pixel position (2.4) is found between a first outer pixel (pixel 3) of a refined edge and the adjacent outside pixel (pixel 2) by an interpolation from the refinement threshold 304.
  • a second fractional pixel position (5.5) is found between a second outer pixel (pixel 5) and its adjacent outside pixel (pixel 6).
  • Another alternative edge width calculation method is to calculate a difference of the image signal across the edge (with or without edge refinement) and divide it by a peak gradient of the edge.
  • edge width may be a distance between a pair of positive and negative peaks (or interpolated peak(s)) of the second order derivative of the image signal across the edge.
  • edge-sharpness may be a distance between a pair of positive and negative peaks (or interpolated peak(s)) of the second order derivative of the image signal across the edge.
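  • As a minimal sketch (not part of the patent text), the Python snippet below computes an edge width in the spirit of FIG. 5, as the distance between two fractional positions where the gradient magnitude profile crosses a refinement threshold, and also the alternative contrast-over-peak-gradient width just described; the function names, threshold value, and sample numbers are illustrative assumptions.
```python
import numpy as np

def edge_width_from_gradients(grad, threshold):
    """Distance between the two fractional positions where the gradient
    magnitude profile crosses `threshold` (linear interpolation)."""
    g = np.abs(np.asarray(grad, dtype=float))
    above = np.nonzero(g >= threshold)[0]
    if above.size == 0:
        return 0.0
    first, last = above[0], above[-1]
    # interpolated crossing on the rising (left) flank
    left = ((first - 1) + (threshold - g[first - 1]) / (g[first] - g[first - 1])
            if first > 0 else float(first))
    # interpolated crossing on the falling (right) flank
    right = (last + (g[last] - threshold) / (g[last] - g[last + 1])
             if last < len(g) - 1 else float(last))
    return right - left

def edge_width_contrast_over_peak(signal_across_edge):
    """Alternative width: image-signal difference across the edge divided by
    the peak gradient of the edge."""
    s = np.asarray(signal_across_edge, dtype=float)
    grad = np.diff(s)
    return abs(s[-1] - s[0]) / np.max(np.abs(grad))

if __name__ == "__main__":
    g = [0, 1, 5, 12, 20, 14, 6, 2, 0]       # gradient magnitudes across an edge
    print(edge_width_from_gradients(g, threshold=4.0))
    print(edge_width_contrast_over_peak([0, 2, 10, 35, 70, 92, 100, 100]))
```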
  • a direction, e.g. the horizontal direction or the vertical direction
  • a boundary (shaded band) is shown to be inclined at a slant angle φ with respect to the vertical dashed line, and a width a is shown to be measured in the perpendicular direction (i.e. horizontal direction).
  • a width b (as indicated in the drawing) measured in a direction perpendicular to the direction of the boundary (also the direction of an edge that forms a part of the boundary) is more appropriate as the width of the boundary (and also of the edge) than width a.
  • the edge widths measured in one or the other of those prescribed directions are to be corrected by reducing them down to widths in directions perpendicular to the directions of the respective edges.
  • Figure 6A, 6B illustrate a correction calculation for an edge width measured in the horizontal direction for a boundary (and hence edges that form the boundary) that has a slant from the vertical line.
  • a slant angle φ is found. For each vertical edge, at step 502, locate the column position where the horizontal gradient magnitude peaks, and find the horizontal gradient x. At step 504, find where the vertical gradient magnitude peaks along the column position and within two pixels away, and find the vertical gradient y.
  • the slant angle may be found by consulting a lookup table.
  • at step 508, scale down the edge width by multiplying with cos(φ), or with an approximation thereto, as one skilled in the art usually does in practice.
  • a first modification of the process shown in Figure 7 is to substitute for step 506 and part of step 508 by providing a lookup table that has entries for various combinations of input values of x and y. For each combination of x and y, the lookup table returns an edge width correction factor.
  • the edge width correction factor output by the lookup table may be an approximation to cos(tan⁻¹(y/x)) to within 20%, preferably within 5%.
  • the edge width is then multiplied with this correction factor to produce a slant-corrected edge width.
  • a second modification is to calculate a quotient y/x between a vertical gradient y and a horizontal gradient x to produce a quotient q, then use q as input to a lookup table that has entries for various values of q. For each value of q, the lookup table returns an edge width correction factor.
  • the edge width correction factor may be an approximation to cos(tan⁻¹(q)) to within 20%, preferably within 5%.
  • the values of x and y may be obtained in steps 502 to 506, but other methods may be employed instead.
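  • As a hedged sketch (not part of the patent text), the Python snippet below computes the slant correction factor cos(tan⁻¹(y/x)) directly and through a coarse lookup table over the quotient q = y/x, and applies it to an edge width; the table step, the capping of q, and the function names are assumptions made for this illustration.
```python
import math

def slant_correction_factor(x, y):
    """Exact correction factor cos(arctan(y/x)) for a vertical edge with
    horizontal gradient x and vertical gradient y at the peak."""
    return math.cos(math.atan2(abs(y), abs(x)))

# Coarse lookup table over q = y/x in steps of 0.1 (illustrative quantization).
_Q_STEP = 0.1
_Q_TABLE = [math.cos(math.atan(i * _Q_STEP)) for i in range(11)]   # q in [0, 1]

def slant_correction_factor_lut(x, y):
    q = min(abs(y) / abs(x), 1.0) if x != 0 else 1.0
    return _Q_TABLE[int(round(q / _Q_STEP))]

def corrected_edge_width(width, x, y, use_lut=False):
    """Scale a horizontally measured width down to the width perpendicular
    to the slanted edge."""
    f = slant_correction_factor_lut(x, y) if use_lut else slant_correction_factor(x, y)
    return width * f

if __name__ == "__main__":
    x, y = 1.0, math.tan(math.radians(30))        # edge slanted ~30 degrees from vertical
    print(corrected_edge_width(4.0, x, y))                 # ~3.46
    print(corrected_edge_width(4.0, x, y, use_lut=True))   # close to the exact value
```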
  • Adjacent edges may be prevented altogether from contributing to a focus signal, or have their contributions attenuated.
  • Figures 9A, 9B, and 9C illustrate a problem that is being addressed.
  • Figure 9A illustrates three vertical white bars separated by two narrow black spaces each 2 pixels wide.
  • the middle white bar is a narrow bar 2 pixels wide.
  • Figure 9B shows an image signal plotted horizontally across the image in Figure 9A for each of a sharp image and a blurred image.
  • Figure 9C plots Sobel-x gradients of Figure 9B for the sharp image and blurred image.
  • the first edge (pixels 2-5) for the blurred image is wider than that for the sharp image, whereas the two narrowest edges (pixels 9 & 10, and pixels 11 & 12) have widths of two in both images.
  • the corresponding slopes at pixels 9 & 10, and pixels 11 & 12 each takes two pixels to complete a transition.
  • the blurred image has a
  • the minimum edge gap is in terms of a number of pixels, e.g. 1, or 2, or in between.
  • edges may have been eliminated due to having a peak gradient less than the elimination threshold, or due to two successive edges having an identical gradient polarity and being spaced no more than two times the minimum edge gap plus a sharp_edge_width apart (sharp_edge_width is a number assigned to designate an edge width of a sharp edge)
  • an edge may be eliminated unless one of the following conditions is true: (a) the screen flag is off for this edge, (b) a peak gradient magnitude of the edge is not smaller than the screen threshold for this edge, or
  • (c) the edge width is not less than sharp_edge_width + 1, where a number has been assigned for sharp_edge_width to designate an edge width of a sharp edge, and where the "+1" may be varied to set a range of edge widths above the sharp_edge_width within which edges may be eliminated if they fail (a) and (b).
  • sharp_edge_width may be 2.
  • Figure 10 is a flowchart to determine a screen threshold and a screen flag for each edge. For vertical edges, assume scanning from left to right along a row, though this is not required. (For horizontal edges, assume scanning from top to bottom along a column, though this is not required.) A number is assigned for sharp_edge_width.
  • check whether the edge width is not less than sharp_edge_width plus one, the value of one being the minimum edge gap value used for this illustration, but a different value may be used, such as between 0.5 and 2.0. If yes, the edge is a wider edge, and step 706 follows to set the screen threshold for the immediate next edge that has an opposite polarity to beta times a peak gradient magnitude of the edge, beta being from 0.3 to 0.7, preferably 0.55; then step 708 follows to turn on the screen flag for the next edge, then proceed to the next edge.
  • step 730 follows to check whether the spacing from the prior edge of the same gradient polarity is greater than two times the minimum edge gap (or a different predetermined number) plus sharp_edge_width, and whether the immediate prior edge of an opposite polarity, if any, is more than the minimum edge gap away. If yes, step 710 follows to turn off the screen flag for the next edge. If no, keep the screen flag and the screen threshold for the next edge and proceed to the next edge.
  • Beta may be a predetermined fraction, or it may be a fraction calculated following a predetermined formula, such as a function of an edge width. In the latter case, beta may vary from one part of the image to another part.
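  • A much-simplified sketch of this screening, written in Python for illustration only: it propagates a screen threshold and flag from each wider edge to the next edge of opposite polarity (ignoring the spacing-based resets of steps 710/730) and then keeps an edge only if it satisfies one of conditions (a), (b) or (c) above. The edge record, the default parameter values and the simplifications are assumptions, not the patent's exact procedure.
```python
from dataclasses import dataclass

@dataclass
class Edge:
    position: float        # pixel position of the edge
    polarity: int          # +1 or -1 gradient polarity
    peak_grad: float       # peak gradient magnitude
    width: float           # edge width
    screen_flag: bool = False
    screen_threshold: float = 0.0

def screen_edges(edges, sharp_edge_width=2.0, min_edge_gap=1.0, beta=0.55):
    # Pass 1: a wider edge sets the screen threshold/flag for the next
    # edge of opposite polarity.
    pending = {}                        # polarity -> (threshold, flag)
    for e in edges:
        if e.polarity in pending:
            e.screen_threshold, e.screen_flag = pending.pop(e.polarity)
        if e.width >= sharp_edge_width + min_edge_gap:        # a "wider" edge
            pending[-e.polarity] = (beta * e.peak_grad, True)
    # Pass 2: keep an edge unless it fails (a), (b) and (c).
    return [e for e in edges
            if (not e.screen_flag                             # (a)
                or e.peak_grad >= e.screen_threshold          # (b)
                or e.width >= sharp_edge_width + 1)]          # (c)

if __name__ == "__main__":
    edges = [Edge(10, +1, 50, 4.0), Edge(13, -1, 12, 2.0), Edge(20, +1, 40, 2.2)]
    print([e.position for e in screen_edges(edges)])          # the weak -1 edge is dropped
```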
  • the image input by the focus signal generator 120 may have pixels laid out in a rectangular grid ("pixel grid") rotated at 45 degrees with respect to a rectangular frame of the image.
  • the X- and Y-directions of the edge detection operations and width measurement operations may be rotated likewise.
  • an edge-sharpness measure that is independent of, or essentially independent of, a 20% scaling down of the image data may be used in place of the edge width
  • any edge-sharpness measure that has the above characteristic of being independent of, or essentially independent of, a 20% scaling down of the image data is a good alternative to the width measured from one gradient or interpolated gradient to another gradient or interpolated gradient of the same gradient value.
  • the alternative edge-sharpness measure preferably has a unit that does not include a unit of energy.
  • the unit of the edge-sharpness measure is determined on the basis of two points: (a) each sample of the image data on which the first-order edge-detection operator operates has a unit of energy, (b) the distance between samples has a unit of length. On the basis of points (a) and (b), a gradient value has a unit of energy divided by a unit of length. Likewise, contrast across the edge or across any undivided portion of the edge has a unit of energy.
  • the contrast is not a good edge-sharpness measure, as the unit reveals that it is affected by illumination of the scene and reflectivity of the object. Neither is the peak gradient of the edge, because the unit of the peak gradient has a unit of energy in it, indicating that it too is responsive to a change in illumination of the scene.
  • peak gradient of the edge divided by a contrast of the edge is a good edge-sharpness measure, as it has a unit of the reciprocal of a unit of length.
  • a count of the gradients of the edge that exceed a predetermined fraction of the peak gradient is a good edge-sharpness measure, as the count is simply a measure of distance quantized to the size of the spacing between contiguous gradients, hence having a unit of length.
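  • As a brief illustration (not from the patent text), the Python snippet below evaluates two of the measures just called good: peak gradient divided by contrast (unit: reciprocal of length) and the count of gradients exceeding a fraction of the peak (unit: length, quantized to the gradient spacing); the sample values and the 0.5 fraction are assumptions.
```python
import numpy as np

def peak_gradient_over_contrast(signal_across_edge):
    """Peak |gradient| divided by the contrast across the edge (unit: 1/length)."""
    s = np.asarray(signal_across_edge, dtype=float)
    grad = np.diff(s)                      # two-tap [-1, +1] gradients
    return np.max(np.abs(grad)) / abs(s[-1] - s[0])

def gradient_count_above_fraction(signal_across_edge, fraction=0.5):
    """Number of gradients whose magnitude is at least `fraction` of the peak
    gradient (unit: length, quantized to the spacing between gradients)."""
    grad = np.abs(np.diff(np.asarray(signal_across_edge, dtype=float)))
    return int(np.count_nonzero(grad >= fraction * grad.max()))

if __name__ == "__main__":
    sharp   = [0, 0, 0, 100, 100, 100]
    blurred = [0, 10, 35, 65, 90, 100]
    for s in (sharp, blurred):
        # Both measures are unchanged if the samples are scaled, e.g. by 20%.
        print(peak_gradient_over_contrast(s), gradient_count_above_fraction(s))
```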
  • a gradient may be generated from a first-order edge detection operator used to detect the edge, or may be generated from a different first-derivative operator (i.e. gradient operator).
  • the Sobel operator or even a second-order edge detection operator, such as a Laplacian operator
  • the Roberts operator whose kernels are simply [-1, +1] and its transpose, which is simply subtracting one sample of the image data from the next sample in the orientation of the gradient operator, with the resulting gradient located midway between the two samples.
  • Edges may be detected with an edge detection operator of higher order than first order, independently of the one or more derivative operators used in generating the edge-sharpness measure or any of the shape measures described in the next section.
  • the edge-sharpness measure should have a unit of a power of a unit of length, for example a square of a unit of length, a reciprocal of a unit of length, the unit of length itself, or a square root of a unit of length. Any such alternative edge-sharpness measure can replace the edge width in the focus signal generator 120.
  • the correction factor as described above with reference to Figures 6A-6D and Figure 7 should be converted to adopt the same power.
  • the edge-sharpness measure is peak gradient divided by a contrast, which gives it a unit of the reciprocal of a unit of length
  • the appropriate correction factor for the edge-sharpness measure is the reciprocal of the correction factor described with reference to Figures 6A- 6D and Figure 7 above.
  • where the edge-sharpness measure has a unit of a square of a unit of length, the slant correction factor for the edge-sharpness measure should be a square of the correction factor for the width described above.
  • FIG. 27 illustrates a sequence of gradients across an edge plotted against distance in multiples of a spacing between successive gradients, and an area A3 of a shaded region under the plotted sequence of gradients.
  • the region is defined between two gradient levels L1 and L2, which may be defined with respect to an interpolated peak gradient value (alternatively, the peak gradient value) of the sequence of gradients as, for example, predetermined portions of the interpolated peak gradient value.
  • the shaded region has four corners of interpolated gradients.
  • the area divided by the interpolated peak gradient value is a good edge-sharpness measure, as it has a unit of length. It is noted that alternative definitions of the region are possible. For example, the region may be bounded from above not by the gradient level L1 but by the sequence of gradients.
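  • A hedged sketch of this FIG. 27 idea in Python (not from the patent text): the region between a lower and an upper gradient level is approximated as a quadrilateral whose corners are the four linearly interpolated crossings, and its area is divided by the peak gradient to obtain a length-unit measure; the level fractions of 0.2 and 0.75 and the linear interpolation are assumptions.
```python
import numpy as np

def _crossings(g, level):
    """Leftmost rising and rightmost falling positions where the sampled
    gradient profile g crosses `level` (linear interpolation)."""
    idx = np.nonzero(g >= level)[0]
    first, last = idx[0], idx[-1]
    left = ((first - 1) + (level - g[first - 1]) / (g[first] - g[first - 1])
            if first > 0 else float(first))
    right = (last + (g[last] - level) / (g[last] - g[last + 1])
             if last < len(g) - 1 else float(last))
    return left, right

def band_area_measure(gradients, lower_frac=0.2, upper_frac=0.75):
    """Area of the band between two gradient levels (fractions of the peak),
    divided by the peak gradient -> a measure with a unit of length."""
    g = np.abs(np.asarray(gradients, dtype=float))
    peak = g.max()
    lo, hi = lower_frac * peak, upper_frac * peak
    l_lo, r_lo = _crossings(g, lo)
    l_hi, r_hi = _crossings(g, hi)
    area = 0.5 * ((r_lo - l_lo) + (r_hi - l_hi)) * (hi - lo)   # trapezoid
    return area / peak

if __name__ == "__main__":
    print(band_area_measure([0, 2, 8, 20, 30, 22, 9, 3, 0]))   # broader profile -> larger
    print(band_area_measure([0, 1, 4, 30, 4, 1, 0]))           # narrower profile -> smaller
```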
  • FIG. 28 illustrates a sequence of gradients of samples of the image data across an edge plotted against distance in multiples of a spacing between successive gradients, a center of gravity 3401 (i.e. center of moment), and distances u2, u3, u4, u5 and u6 of the gradients (having gradient values g2, g3, g4, g5 and g6) from the center of gravity.
  • a good edge-sharpness measure is a k-th central moment of the gradients about the center of gravity, namely a weighted average of the k-th power of the distances of the gradients from the center of gravity, with the weights being magnitudes of the respective gradients, k being an even integer.
  • k can be 2, which makes the edge-sharpness measure a variance as if the sequence of gradients were a probability distribution.
  • the edge-sharpness measure has a unit of a square of a unit of length. More generally, the edge-sharpness measure may be a function of distances of a plurality of gradients of the edge from a predefined position.
  • the predefined position may be an interpolated peak position for the sequence of gradients.
  • a proper subset of the gradients of the edge may be chosen according to a predefined criterion to participate in this calculation.
  • the gradients may be required to have gradient values at least a predetermined fraction of the peak gradient or gradient value of an interpolated peak of the sequence of gradients.
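  • A minimal sketch of this central-moment measure in Python (not from the patent text); the participation criterion (gradients at least 10% of the peak) and the sample values are illustrative assumptions.
```python
import numpy as np

def central_moment_measure(gradients, k=2, frac=0.1):
    """Weighted average of the k-th power of the distances of the gradients
    from their center of gravity, weights being the gradient magnitudes.
    k=2 gives a variance-like measure (unit: length squared)."""
    g = np.abs(np.asarray(gradients, dtype=float))
    x = np.arange(len(g), dtype=float)          # positions in gradient spacings
    keep = g >= frac * g.max()                  # proper subset per a predefined criterion
    g, x = g[keep], x[keep]
    center = np.sum(x * g) / np.sum(g)          # center of gravity
    return np.sum(g * (x - center) ** k) / np.sum(g)

if __name__ == "__main__":
    print(central_moment_measure([0, 1, 4, 30, 4, 1, 0]))        # sharp -> small
    print(central_moment_measure([0, 3, 10, 16, 20, 15, 9, 3]))  # blurred -> large
```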
  • FIG. 25 illustrates a sequence of second derivatives of a sequence of samples of image data across an edge plotted against distance in multiples of a spacing between successive second derivatives, showing (a) a width Ws between a pair of positive and negative peaks, (b) a width W1 between a pair of outermost interpolated second derivatives that have a given magnitude h1, (c) a width W2 between an inner pair of interpolated second derivatives that have the given magnitude h1, and (d) a distance D1 from a zero-crossing (between the pair of positive and negative peaks) to an outermost interpolated second derivative.
  • any one of the three widths Ws, W1 and W2 may be used as the edge-sharpness measure.
  • the edge-sharpness measure may be a weighted sum of distances from the zero-crossing (between the pair of positive and negative peaks, and possibly interpolated) of the second derivatives, with the weights being magnitudes of the respective second derivatives.
  • the predefined position may be the midway point between the pair of positive and negative gradients.
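  • As a small illustration (not from the patent text), the Python snippet below computes one of the FIG. 25 style measures, the separation between the positive and negative peaks of the second derivative across an edge, without the interpolation that the text also allows; the sample edges are assumptions.
```python
import numpy as np

def second_derivative_peak_separation(signal_across_edge):
    """Distance (in sample spacings) between the positive and negative peaks
    of the second derivative of the image signal across an edge."""
    s = np.asarray(signal_across_edge, dtype=float)
    d2 = np.diff(s, n=2)                 # second derivative at interior samples
    return abs(int(np.argmax(d2)) - int(np.argmin(d2)))

if __name__ == "__main__":
    print(second_derivative_peak_separation([0, 0, 0, 100, 100, 100]))      # sharp edge
    print(second_derivative_peak_separation([0, 5, 25, 55, 80, 95, 100]))   # blurred edge
```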
  • -A < X-Y < B, where A and B are positive numbers, such that values of X-Y greater than -A and less than B do not result in a determination of asymmetry, whereas values of X-Y either more positive than B or more negative than -A will result in determination of excessive lack of symmetry.
  • the range of values of X-Y less negative than -A and less positive than B is referred to hereinafter as tolerance region, and the limits of the tolerance region are the asymmetry thresholds.
  • -A and B are both asymmetry thresholds that delimit the tolerance region for the asymmetry that X and Y measure.
  • exceeding the (relevant) asymmetry threshold means, for example, that (X-Y) is more positive than B or more negative than -A.
  • This interpolated peak position may be used to calculate the distances to the left and to the right as described below.
  • the interpolated peak gradient may also be used to calculate the gradient level at which those distances are measured or above/below which pixels are counted. For example, in Figure 24A, a vertical dash-dot line is drawn under an interpolated peak 3270, a horizontal dotted line 3275 is drawn across the
  • a modification to the above method is to determine the distances to the left and the right, respectively, from the peak gradient to where the gradient profile is interpolated to cross a certain gradient level that is a fraction (preferably between 10% and 90%, more preferably between 20% and 80%) of the gradient value of the peak gradient 3212 (alternatively, the interpolated peak 3270) (the "crossings"), and find a lack of symmetry if the larger distance exceeds the smaller distance by a certain width asymmetry threshold or more. In other words, if one distance minus the other distance is more negative than -(width asymmetry threshold) or more positive than the width asymmetry threshold, a lack of symmetry is determined.
  • the tolerance region thus occupies an interval of numbers symmetrical about zero.
  • Figure 24A also illustrates this asymmetry detection method.
  • the distances may be measured from the peak gradient 3212 (at position 6), or alternatively from the interpolated peak 3270 (at approximately position 5.8).
  • the distances WL and WR are measured from the interpolated peak 3270, giving approximately 2.5 and 1.3, respectively.
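  • A hedged Python sketch of this width-asymmetry check (not from the patent text): the distances WL and WR from the peak to the interpolated crossings of a gradient level that is a fraction of the peak are compared against a width asymmetry threshold; the fraction of 0.3, the threshold of 0.5, and the use of the non-interpolated peak are assumptions.
```python
import numpy as np

def width_asymmetry(gradients, level_frac=0.3, threshold=0.5):
    """Return (is_asymmetric, w_left, w_right) for a gradient profile."""
    g = np.abs(np.asarray(gradients, dtype=float))
    p = int(np.argmax(g))
    level = level_frac * g[p]
    i = p                                    # walk left to the crossing
    while i > 0 and g[i - 1] >= level:
        i -= 1
    left = ((i - 1) + (level - g[i - 1]) / (g[i] - g[i - 1])) if i > 0 else 0.0
    j = p                                    # walk right to the crossing
    while j < len(g) - 1 and g[j + 1] >= level:
        j += 1
    right = (j + (g[j] - level) / (g[j] - g[j + 1])) if j < len(g) - 1 else float(len(g) - 1)
    w_l, w_r = p - left, right - p
    return abs(w_l - w_r) > threshold, w_l, w_r

if __name__ == "__main__":
    print(width_asymmetry([0, 1, 2, 4, 7, 12, 30, 6, 1, 0]))  # slow left rise -> asymmetric
    print(width_asymmetry([0, 2, 9, 30, 9, 2, 0]))            # symmetric
```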
  • Each of the two areas may be bounded on one side by a vertical line below the peak gradient (or the interpolated peak), on the other side by the interpolated gradient (in solid curve) (or, alternatively, straight lines connecting consecutive gradients), and from the top and bottom by an upper gradient level and a lower gradient level, each at a different predetermined fraction of the peak gradient level (or, alternatively, the interpolated peak gradient level, i.e. the gradient level of the interpolated peak) (alternatively, no upper gradient level limits the area but just the gradients or interpolated gradient profile).
  • an upper gradient level 3276 is drawn at 0.75 and a lower gradient level 3274 at 0.2.
  • a region 3277 (with area AL) (left of the positive interpolated peak 3270) is bounded from above by the upper gradient level 3276, from below by the lower gradient level 3274, from the right by the vertical dash-dot line under the interpolated peak, and from the left by the interpolated gradient profile (solid curve).
  • a region 3278 (having area AR) (right of the same peak 3270) is similarly bounded from above and below, and is bounded from the right by the interpolated gradient profile and from the left by the vertical dash-dot line. A lack of symmetry is detected when the areas AL and AR differ beyond a predetermined limit according to a prescribed criterion.
  • the asymmetry may be detected when the larger area exceeds the smaller area by an area asymmetry threshold or more.
  • the area asymmetry threshold may be expressed in one of various different ways. It may be expressed in terms of a percentage (of the lesser area), which may be a fixed number for the image or, alternatively, a function of the edge width of the associated edge. Alternatively, it may be expressed in terms of an area difference for the normalized gradient profile. Other reasonable dependencies based on how the image signal (from which the gradients in the gradient profile are generated) and/or how the gradients in the gradient profile are generated are acceptable for determining the area asymmetry threshold.
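  • A hedged sketch of this area comparison in Python (not from the patent text): for a single-peaked profile, integrating the profile clipped between the lower and upper gradient levels on each side of the peak gives the same areas AL and AR as the regions described above; the level fractions of 0.2 and 0.75, the 20% threshold, and the dense linear interpolation are assumptions.
```python
import numpy as np

def area_asymmetry(gradients, lower_frac=0.2, upper_frac=0.75, threshold_pct=20.0):
    """Return (is_asymmetric, area_left, area_right) for a gradient profile."""
    g = np.abs(np.asarray(gradients, dtype=float))
    p = int(np.argmax(g))
    lo, hi = lower_frac * g[p], upper_frac * g[p]
    # Dense linear interpolation of the profile for a simple numeric integral.
    x = np.arange(len(g), dtype=float)
    xs = np.linspace(0.0, len(g) - 1.0, 100 * (len(g) - 1) + 1)
    band = np.clip(np.interp(xs, x, g), lo, hi) - lo      # band height at each xs
    dx = xs[1] - xs[0]
    a_l = band[xs <= p].sum() * dx
    a_r = band[xs >= p].sum() * dx
    larger, smaller = max(a_l, a_r), min(a_l, a_r)
    return (larger - smaller) > smaller * threshold_pct / 100.0, a_l, a_r

if __name__ == "__main__":
    print(area_asymmetry([0, 2, 5, 9, 14, 20, 30, 7, 1, 0]))   # asymmetric
    print(area_asymmetry([0, 2, 9, 30, 9, 2, 0]))              # symmetric
```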
  • a common distance W0 is measured from the interpolated peak 3270 (or, alternatively, the peak gradient 3212) to the left and right sides of the gradient profile.
  • interpolated gradients are calculated (or gradients are found) such that their distances from the vertical dash-dot line under the interpolated peak 3270 (or peak gradient 3212) are both W0.
  • both interpolated gradients would be at a common gradient level.
  • the interpolated gradients lie on different gradient levels Gl 3252 and Gh 3253.
  • the common W0 may be selected to be a predetermined fraction of the edge width, such as a fraction between 0.1 and 0.5, preferably between 0.2 and 0.4.
  • W0 may be selected as the lesser of the two distances from the interpolated peak 3270 (or, alternatively, the peak gradient 3212) to a pair of interpolated gradients or gradients at a given gradient level that is a predetermined fraction of the peak gradient value.
  • Gh alone can be the parameter to indicate a degree of asymmetry.
  • a gradient asymmetry threshold then may be set such that when Gh exceeds the threshold the lack of symmetry is detected.
  • a modification of the immediately above method is to compare the first or second derivatives at those two interpolated gradients at gradient levels Gl and Gh, respectively.
  • both interpolated gradients would have first derivatives that are opposite in their signs but equal in magnitude.
  • the interpolated gradients usually differ in first and second derivatives.
  • a lack of symmetry is detected when the magnitude of the first derivative differs between the two interpolated gradients (or possibly gradients) beyond a predetermined limit according to a prescribed criterion.
  • the asymmetry may be detected when the larger first-derivative magnitude exceeds the smaller one by a gradient asymmetry threshold or more.
  • the asymmetry threshold may be expressed in one of various different ways. It may be expressed in terms of a fixed number for the image or, alternatively, a function of the edge width of the associated edge; either is acceptable for determining the gradient asymmetry threshold.
  • a vertical line can be drawn from a midpoint between the pair of intersections between the upper gradient level
  • the interpolated gradient curve has a segment on the left (having a length LL) between normalized gradient levels of 0.25 and 0.75, longer than a segment on the right, whose length LR is clearly shorter, indicating a lack of symmetry.
  • a lack of symmetry is detected when the lengths LL and LR differ beyond a predetermined limit according to a prescribed criterion.
  • the asymmetry may be detected when the longer length exceeds the shorter length by a length asymmetry threshold or more.
  • the length asymmetry threshold may be expressed in one of various different ways. It may be expressed in terms of a percentage (of the shorter length), which may be a fixed number for the image or, alternatively, a function of the edge width of the associated edge.
  • the length method described immediately above and illustrated using Figure 24C may be modified in yet another way.
  • the distance on the left (WBL) between where the interpolated gradient curve intersects the upper and lower gradient levels, respectively, is compared with that on the right (WBR).
  • a lack of symmetry is found if WBL and WBR differ too much according to a prescribed criterion.
  • an inter-midpoint distance is a distance between the first 3281 and second 3280 midpoints
  • a lack of symmetry is detected when the inter-midpoint distance exceeds a certain inter-midpoint-distance asymmetry threshold.
  • |WBL - WBR| is twice the inter-midpoint distance. In a variation on this method, only one gradient level 3274 is used and only the
  • for a perfectly symmetrical gradient profile, the first derivatives on both sides would only differ in sign but be identical in magnitude.
  • the first derivatives may be calculated approximately by an
  • Still another method is to find second derivatives of the gradient profile on two sides of the peak gradient 3212 (alternatively, the interpolated peak 3270) and compare the second derivatives under a prescribed criterion.
  • the asymmetry can be detected in the image sample values across the edge. Referring to Figure 26, for the two narrowest undivided portions of the edge that have contrasts C1 and C2 respectively, their centers will match if there is perfect symmetry in the gradients.
  • a first method is to eliminate edges that belong to vertical/horizontal concatenated edges having lengths lesser than a concatenated length threshold.
  • the concatenated length threshold may be larger when the region of interest is dimmer. For example, the
  • Figure 8 illustrates a vertical concatenated edge and its length.
  • cells R2C3 and R2C4 form a first vertical edge
  • cells R3C3, R3C4, and R3C5 together form a second vertical edge
  • cells R4C4 and R4C5 together form a third vertical edge.
  • the first and the third vertical edges each touches only one other vertical edge
  • the second vertical edge touches two other vertical edges.
  • the first, second and third vertical edges together form a vertical concatenated edge having a length of 3.
  • where a vertical (horizontal) concatenated edge has two or more branches, i.e. has two edges in a row (column), the length may be defined as the total number of edges within the concatenated edge.
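  • A hedged Python sketch of counting concatenated-edge lengths in the manner of Figure 8 (not from the patent text): each edge is represented by its row and column span, edges in adjacent rows whose spans touch or overlap are grouped together, and the length of a group is the total number of edges in it; the edge representation and the touch rule are assumptions.
```python
from collections import defaultdict

def concatenated_lengths(edges):
    """edges: list of (row, col_start, col_end). Returns group sizes, largest first."""
    parent = list(range(len(edges)))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    by_row = defaultdict(list)
    for idx, (row, _, _) in enumerate(edges):
        by_row[row].append(idx)
    for idx, (row, c0, c1) in enumerate(edges):
        for jdx in by_row.get(row + 1, []):
            _, d0, d1 = edges[jdx]
            if c0 <= d1 and d0 <= c1:             # column spans touch or overlap
                parent[find(idx)] = find(jdx)     # merge the two groups

    counts = defaultdict(int)
    for idx in range(len(edges)):
        counts[find(idx)] += 1
    return sorted(counts.values(), reverse=True)

if __name__ == "__main__":
    # Figure 8 example: R2C3-R2C4, R3C3-R3C5, R4C4-R4C5 form one chain of length 3.
    print(concatenated_lengths([(2, 3, 4), (3, 3, 5), (4, 4, 5)]))   # [3]
```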
  • the fine switch 220 may be removed so that the focus signal calculation unit 210 receives a first set of data not filtered by the width filter 209 and a second set that is so filtered, and calculates a different focus signal for each, a gross focus signal for the former and a fine focus signal for the latter, and outputs both to the processor 112, 112'.
  • a focus control system may use narrow- edge count to trigger a change from a search mode to a tracking mode.
  • the focus control system uses the fine focus signal to "lock" the object.
  • the focus control system may use the gross focus signal to identify the direction to move and regulate the speed of movement of the lens.
  • narrow-edge count peaks sharply.
  • the processor 112, 112', 112" may switch into the tracking mode and use the fine focus signal for focus position control upon detection of a sharp rise in the narrow-edge count or a peaking or both.
  • a threshold, which may be different for each different sharp focus position, may be assigned to each group of objects found from an end-to-end focus position "scan", and subsequently when the narrow-edge count surpasses this threshold the corresponding group of objects is detected.
  • an end-to-end focus position scan can return a list of maximum counts, one maximum count for each peaking of the narrow-edge count.
  • a list of thresholds may be generated from the list of maximum counts, for example by taking 50% of the maximum counts.
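  • A brief sketch of this control logic in Python (not from the patent text): thresholds are derived from the maximum counts of an end-to-end scan, and the narrow-edge count selects between the gross focus signal (search mode) and the fine focus signal (tracking mode); the function names and return convention are assumptions.
```python
def thresholds_from_scan(max_counts, fraction=0.5):
    """One threshold per peaking of the narrow-edge count, e.g. 50% of each maximum."""
    return [fraction * m for m in max_counts]

def choose_focus_signal(narrow_edge_count, threshold, gross_signal, fine_signal):
    """While the count is below threshold, the gross signal drives the lens search;
    once the count surpasses it, switch to the fine signal to lock and track."""
    if narrow_edge_count >= threshold:
        return "tracking", fine_signal
    return "search", gross_signal

if __name__ == "__main__":
    print(thresholds_from_scan([120, 45, 300]))                        # [60.0, 22.5, 150.0]
    print(choose_focus_signal(80, 60.0, gross_signal=5.1, fine_signal=2.3))
```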
  • the second split beam may be reflected by the full mirror 2852 before finally reaching the auxiliary pixel array 108", which corresponds to the pixel array 108 in system 102 shown in Figure 1.
  • the ratio of light intensity of the first beam to the second beam may be 1-to-l or greater than 1-to-l.
  • the ratio may be 4-to-l.
  • An array of photodetectors in the auxiliary pixel array 108" may have a pixel width ("auxiliary pixel width") that is smaller than a pixel width of the main pixel array 2808 ("main pixel width").
  • the auxiliary pixel width may be as small as half of the main pixel width. If an auxiliary pixel is covered by a color filter and the auxiliary pixel width is less than 1.3 times the smallest spot of visible light without optical lowpass filtering, a second optical lowpass filter may be inserted in front of the auxiliary array 108" to increase the smallest diameter on the auxiliary pixel array 108" ("smallest auxiliary diameter") to between 1.3 and 2 times as large but still smaller than the smallest main diameter.
  • One or more parameters for use in the system may be stored in a non-volatile memory in a device within the system.
  • the device may be a flash memory device, the processor, or the image sensor, or the focus signal generator as a separate device from those.
  • One or more formulae for use in the system, for example for calculating the sharp_edge_width, may likewise be stored in such a non-volatile memory.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Studio Devices (AREA)
  • Automatic Focus Adjustment (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Focusing (AREA)
EP11748716.5A 2010-12-07 2011-06-09 Auto-focus image system Ceased EP2649788A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
PCT/IB2010/055641 WO2011070513A1 (en) 2009-12-07 2010-12-07 Auto-focus image system
PCT/IB2011/052529 WO2012076993A1 (en) 2010-12-07 2011-06-09 Auto-focus image system

Publications (1)

Publication Number Publication Date
EP2649788A1 true EP2649788A1 (en) 2013-10-16

Family

ID=44511103

Family Applications (1)

Application Number Title Priority Date Filing Date
EP11748716.5A Ceased EP2649788A1 (en) 2010-12-07 2011-06-09 Auto-focus image system

Country Status (8)

Country Link
EP (1) EP2649788A1 (es)
JP (1) JP6057086B2 (es)
AU (1) AU2011340208A1 (es)
CA (1) CA2820847A1 (es)
DE (1) DE112011104233T5 (es)
MX (1) MX2013006516A (es)
SG (1) SG190755A1 (es)
WO (1) WO2012076993A1 (es)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111769904B (zh) * 2020-06-23 2021-08-17 电子科技大学 Detection method for parallel transmission of multiple backscatter devices in a backscatter communication system
CN112529876B (zh) * 2020-12-15 2023-03-14 天津大学 Method for detecting edge defects of contact lenses

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH066661A (ja) * 1992-06-19 1994-01-14 Canon Inc Focus detection device
JP2002189164A (ja) * 2000-12-21 2002-07-05 Minolta Co Ltd Optical system control device, optical system control method, and recording medium
JP4334179B2 (ja) * 2002-03-07 2009-09-30 シャープ株式会社 Electronic camera
JP4493416B2 (ja) * 2003-11-26 2010-06-30 富士フイルム株式会社 Image processing method, apparatus, and program
JP2006024193A (ja) * 2004-06-07 2006-01-26 Fuji Photo Film Co Ltd Image correction apparatus, image correction program, image correction method, and image correction system
WO2010061250A1 (en) * 2008-11-26 2010-06-03 Hiok-Nam Tay Auto-focus image system
GB2488482A (en) 2009-12-07 2012-08-29 Hiok-Nam Tay Auto-focus image system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
None *
See also references of WO2012076993A1 *

Also Published As

Publication number Publication date
AU2011340208A1 (en) 2013-07-18
JP2014513809A (ja) 2014-06-05
DE112011104233T5 (de) 2013-12-12
WO2012076993A1 (en) 2012-06-14
SG190755A1 (en) 2013-07-31
CA2820847A1 (en) 2012-06-14
JP6057086B2 (ja) 2017-01-11
MX2013006516A (es) 2013-12-12

Similar Documents

Publication Publication Date Title
US9734562B2 (en) Auto-focus image system
US8630504B2 (en) Auto-focus image system
US9065999B2 (en) Method and apparatus for evaluating sharpness of image
US20140022443A1 (en) Auto-focus image system
EP2719162B1 (en) Auto-focus image system
WO2012076992A1 (en) Auto-focus image system
WO2012076993A1 (en) Auto-focus image system

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20130708

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20150130

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20171229