US20180189934A1 - System and Method of Reducing Noise Using Phase Retrieval - Google Patents

System and Method of Reducing Noise Using Phase Retrieval

Info

Publication number
US20180189934A1
Authority
US
United States
Prior art keywords
image
magnitude
coherence
initial
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/738,306
Inventor
David C. Hyland
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US15/738,306
Publication of US20180189934A1
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T5/002
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/97 Determining parameters from multiple pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/21 Indexing scheme for image data processing or generation, in general involving computational photography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20182 Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A system and method for producing an image is provided in which magnitude data related to photo data received from multiple apertures configured with photo detectors is manipulated to conform to an assumed image; phase data related to the photo data is multiplied by the manipulated magnitude, resulting in an image function; imaging constraints are applied to the Fourier transform of the image function to create a desired image; and the desired image is tested to determine whether additional iterations of the method are necessary, with the desired image becoming the assumed image in subsequent iterations, until the desired image is no longer substantially different from the assumed image.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • The present application claims the benefit of U.S. provisional application No. 62/184,557 filed Jun. 25, 2015 which is incorporated herein by reference.
  • FIELD OF THE INVENTION
  • The present invention relates to image production, in particular, constructing high quality images despite large amounts of noise in the coherence magnitude measurement data.
  • BACKGROUND INFORMATION
  • The present invention relates to producing images by solving the problem of loss of phase information when solely utilizing photo data, known as phase retrieval. Phase retrieval is the nonlinear estimation problem in which the magnitude of the Fourier transform of the quantity of interest is known or measured and the phase is unknown and must be recovered by exploiting certain constraints on the object of interest. Prior attempts at solving the phase retrieval problem have included error-reduction through the Gerchberg-Saxton algorithm using a four-step iterative process, hybrid input-output where the fourth step of error-reduction is replaced with another function that reduces the probability of stagnation, and the shrinkwrap solution. As applied to astronomy, it is the image of an object in the midst of a dark background that must be determined. In some cases, as in flux collector astronomy, only the magnitude of the optical coherence is measured at various points; and this is the magnitude of the Fourier transform of the image by virtue of the Van Cittert-Zernike theorem.
  • Presently, applications to flux collector astronomy use a plethora of large, cheap, "light bucket" apertures implementing Intensity Correlation Imaging (ICI). Based upon the Hanbury Brown-Twiss effect, ICI involves only intensity fluctuation measurements at each telescope. The time averaged cross-correlation of these measurements produces estimates of the coherence magnitudes from which the image is computed via known phase retrieval algorithms. In contrast to amplitude interferometry, no combiner units are required, and the sensitivity to phase and intensity scintillations due to atmospheric conditions is negligible. Thus, ICI has the potential to enormously reduce hardware costs and complexity. However, the multiplier between the intensity fluctuation cross-correlation and the coherence magnitude is very small, so adequate signal-to-noise ratio in the coherence magnitude estimates requires long integration times. The crux of the problem seems to be that, heretofore, the measurement of coherence magnitude values and determination of the image via phase retrieval are conceived to be two separate steps.
  • SUMMARY OF THE INVENTION
  • In general, in one aspect, the invention relates to a system and method of image construction from light collecting apertures. The method comprises receiving, by a plurality of light collecting apertures, photo data from a plurality of photo sensors; evaluating the photo data to determine the absolute magnitude of optical coherence; sending the photo data to an image assessment module; reducing the noise of the photo data by manipulating the absolute magnitude of the optical coherence to conform to an assumed initial image having an initial image magnitude and initial image phase, resulting in an estimated magnitude; taking the Fourier transform of the product of the estimated magnitude and the initial image phase to determine an estimated image; applying constraints to the estimated image to determine a desired image; and testing the desired image for convergence and whether it is different from the assumed initial image. If the desired image fails the tests, it becomes the assumed initial image and the process reiterates.
  • The present invention is a method that, from very noisy coherence magnitude data, simultaneously estimates the true coherence magnitudes and constructs the image in a novel way by doing so in one step. It is shown that because of the numerous constraints on both the image and the coherence magnitudes, a substantial portion of the measurement noise can be suppressed.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • These and other features and advantages of the present invention will be better understood by reading the following Detailed Description, taken together with the Drawings wherein:
  • FIG. 1 shows a flow chart showing the method and system of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency. In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details, which are omitted here to avoid unnecessarily complicating the description.
  • In general, embodiments of the invention provide a system and method for constructing a high-quality image. Photo data is received from light-collecting apertures distributed on a surface and sent to a central collection point where it can be evaluated. The photo data includes magnitude measurements of an optical coherence. An image assessment module receives this photo data and constructs a high-quality image based on the data.
  • The embodiment of FIG. 1 provides a flow chart illustrating the present invention. Referring to FIG. 1, in box 20, a plurality of light collecting apertures measures photo data, including the magnitude of an optical coherence at many relative locations. Each light collecting aperture comprises a sensor to receive photons and record an output from which the average light intensity is subtracted, resulting in the absolute magnitude of the optical coherence. In an embodiment of the invention the photo data is sent to all other light collecting apertures and at least one aperture is further configured to evaluate the absolute magnitude of the optical coherence from the photo data. It is assumed that an image produced from this photo data would have substantial amounts of noise, such that SNR_Ĝ² = 10⁻⁸ (σ ≅ 10⁴/√2).
  • The system then proceeds to box 22, where this absolute magnitude data is sent to a central collection point having an image assessment module. The image assessment module is configured to reduce the noise of an image by applying an iterative phase retrieval algorithm as described herein and in boxes 24-34. The input of the phase retrieval algorithm includes the absolute magnitude of the optical coherence, where the number of coherence measurements equals the number of pixels in a desired optical image.
  • In an embodiment of the invention, the phase retrieval algorithm takes an initial image. During the first iteration, the initial image is an assumed image pixelated into a grid, N pixels on each side. It is further assumed that at the outset, the foreground of the initial image can be bounded by a simple boundary. As is typical, there may additionally be many "background" or zero intensity pixels within the rectangle as well. In the present embodiment of the invention, the initial image is a square with the same number of pixels on each side; however, one having knowledge in the art understands that the initial image may comprise any geometric shape. Likewise, the optical coherence magnitudes comprise a nonnegative matrix of the same dimensions. It is convenient to consider both image and coherence as N²-dimensional vectors.
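  • The following is a minimal sketch, not taken from the patent, of the grid setup just described, assuming an N x N pixel grid, a rectangular foreground boundary, and a background-projection mask τ stored as a 0/1 array; all sizes and names are illustrative.

```python
import numpy as np

N = 32                                   # pixels per side of the assumed square grid

# Simple boundary assumed to enclose the foreground at the outset (a rectangle here).
foreground = np.zeros((N, N), dtype=bool)
foreground[10:22, 8:24] = True

# tau projects onto the pixels constrained to have zero intensity (the background):
# 1 where a pixel is background, 0 where it is foreground.
tau = (~foreground).astype(float)

# The assumed initial image (box 24): small random values on the pixel grid.
rng = np.random.default_rng(0)
initial_image = rng.random((N, N)) * 0.1

# Both the image and the coherence magnitudes can be handled as N^2-dimensional
# vectors when convenient.
g_vec = initial_image.ravel()
print(g_vec.shape, int(tau.sum()), "background pixels (M)")
```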
  • The algorithm is mathematically described below, with steps A-F corresponding to boxes 24-34 in FIG. 1.

  • A: G′ = ℑg

  • B: Ĝ ⇐ (1 − ε)Ĝ + ε|G′|,  ε ≅ 0.002

  • C: G_p = Ĝ·G′/|G′|  (element-wise)

  • D: g_p = ℑ^H G_p

  • E: g_pp = τ g_p + [I − τ] max{0, Re(g_p)}

  • F: g = (I − τ) g_pp + τ(g − β g_pp),  β ≅ 0.7
  • The notation is defined by:
  • g ∈ C^(N²) = Current value of the estimated image (pixelated)
  • ḡ ∈ R^(N²) = The true image
  • ℑ ∈ C^(N²×N²) = Discrete Fourier transform (unitary matrix, ℑ⁻¹ = ℑ^H)
  • Ḡ = ℑḡ = The true coherence
  • τ ∈ R^(N²×N²) = Projection onto the image pixels constrained to have zero intensity (all elements zero or unity)
  • Ĝ ∈ R^(N²) = Measured coherence magnitude
  • Where the optical coherence magnitude data is represented by:

  • Ĝ = |Ḡ + G̃|

  • G̃_k = Ḡ_k σ(N_{1,k} + i N_{2,k}),  k = 1, …, N²
  • Where σ is a positive, real number and N_{1,k} and N_{2,k} are all mutually independent Gaussian random variables of zero mean and unit variance. The algorithm recognizes that much of the noise in the averaged data is inconsistent with the image-domain constraints, and can be rendered harmless if both the Fourier-domain and image-domain constraints can be made to intersect. Here, the usual image-domain constraints (the background pixels are zero) are augmented by the requirement that the foreground pixels be real-valued and positive. The algorithm accepts the noisy coherence magnitude data and uses a relaxation technique to project this data onto a subspace wherein the image-domain constraints can be satisfied. Run to completion for a single set of coherence magnitude data, we have shown by example that the impact of much of the noise is eliminated, even for extremely large amounts of noise. By running the algorithm for multiple, independent data sets and averaging the results, one can achieve further substantial improvement in image quality.
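  • Below is a hedged sketch of this measurement model as reconstructed here: synthetic data Ĝ = |Ḡ + G̃| with the complex noise scaled by σ and the true coherence, using a unitary 2-D FFT in place of the matrix ℑ; the test image and all names are illustrative.

```python
import numpy as np

N = 32
rng = np.random.default_rng(0)
true_image = np.zeros((N, N))
true_image[10:22, 8:24] = rng.random((12, 16))       # stand-in foreground object

# Unitary 2-D DFT: norm="ortho" makes the inverse equal to the conjugate transpose,
# matching the notation above.
G_true = np.fft.fft2(true_image, norm="ortho")

sigma = 1e4 / np.sqrt(2)                             # corresponds to SNR_Ĝ² of about 1e-8
noise = G_true * sigma * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
G_hat = np.abs(G_true + noise)                       # the data supplied to the algorithm
```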
  • In box 24, an initial image is assumed where each pixel represents a random number between 0 and 0.1. In an embodiment of the invention, during the first iteration, all pixels will be zero. In an alternate embodiment of the invention, it does not matter what number each pixel represents. The system then proceeds to box 26, where the measured magnitude of the optical coherence from the photo data is relaxed slightly toward the coherence magnitude of the initial image, resulting in an estimate of the magnitude of the coherence. From the outset, the magnitude of Ĝ obtained in step B and in box 26 is very large owing to the noise component, and likewise the magnitudes of g_p and g are similarly large. However, the average intensity of the image is immaterial to image interpretation, so we often normalize each image result by its infinity norm, that is: g → g/‖g‖_∞, where ‖g‖_∞ = max_k |g_k|.
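  • A small sketch of the infinity-norm normalization quoted above; g is any current (complex) image estimate.

```python
import numpy as np

def normalize_inf(g):
    """Scale the image so its largest pixel magnitude is one: g -> g / ||g||_inf."""
    return g / np.max(np.abs(g))
```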
  • At box 28, the estimate of the magnitude of the coherence calculated at box 26 is multiplied by the phase of the initial image.
  • The system continues to box 30, where the Fourier transform of the result of box 28 is taken to determine an estimated image; at box 32, image conditions are imposed on the estimated image; and at box 34, the image is assessed to determine whether it converges or not. In an embodiment of the invention, the image conditions include removal of imaginary parts, forcing any complex values to be real numbers. In an embodiment of the invention, the image conditions include requiring all values to be non-negative. In an embodiment of the invention, the value of τ is set to 0 if the pixel is in the foreground and 1 if it is in the background. In an embodiment of the invention, each pixel in the initial image is examined when the image domain constraint violation, ‖τg‖₂², is at a minimum, to test whether its value is less than 0.01‖g‖_∞; if so, the value of τ for that pixel is set to unity.
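  • The sketch below strings steps A-F (boxes 24-34) together into a single iteration, as reconstructed above. It assumes a unitary 2-D FFT for ℑ, a 0/1 background array for τ, and placeholder data; ε and β are the constants given with steps B and F, and everything else is illustrative rather than the patent's own implementation.

```python
import numpy as np

def iterate_once(g, G_est, tau, eps=0.002, beta=0.7):
    """One pass through steps A-F.

    g     : current complex image estimate (N x N)
    G_est : current (relaxed) coherence-magnitude estimate, initialized from the
            measured data (N x N, real)
    tau   : 1 on background (zero-intensity) pixels, 0 on the foreground
    """
    fg = 1.0 - tau                                    # foreground indicator, I - tau

    # Step A: coherence of the current image estimate.
    G_prime = np.fft.fft2(g, norm="ortho")

    # Step B: relax the magnitude data slightly toward |G'|.
    G_est = (1.0 - eps) * G_est + eps * np.abs(G_prime)

    # Step C: impose the relaxed magnitudes while keeping the current phase.
    phase = G_prime / np.maximum(np.abs(G_prime), 1e-30)
    G_p = G_est * phase

    # Step D: back to the image domain (boxes 28-30).
    g_p = np.fft.ifft2(G_p, norm="ortho")

    # Step E: background passes through; foreground pixels are forced toward
    # real, non-negative values (box 32).
    g_pp = tau * g_p + fg * np.maximum(0.0, g_p.real)

    # Step F: feedback on the background pixels.
    g_new = fg * g_pp + tau * (g - beta * g_pp)
    return g_new, G_est, g_p

# Illustrative usage with placeholder data; G_hat would come from the measurements.
N = 32
rng = np.random.default_rng(1)
tau = np.ones((N, N))
tau[10:22, 8:24] = 0.0
G_hat = np.abs(np.fft.fft2(rng.random((N, N)), norm="ortho"))
g = rng.random((N, N)) * 0.1                          # box 24: assumed initial image
G_est = G_hat.copy()
g, G_est, g_p = iterate_once(g, G_est, tau)
```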
  • Steps D, E, and F and corresponding boxes 30-34 imply that when the algorithm converges, the following constraints are satisfied:

  • τ ℑ^H G̃ = 0

  • [I − τ] Im(ℑ^H G̃) = 0

  • [I − τ] { |Re(ℑ^H(Ḡ + G̃))| − Re(ℑ^H(Ḡ + G̃)) } = 0
  • The rank of τ is the number of pixels in the background, M. Hence the first equation amounts to 2M constraints on the noise component, G̃. The second condition supplies N² − M constraints. Since |Ḡ| ≪ |G̃|, and the phases of the elements of G̃ are uniformly distributed, one half of the N² − M constraints are operative in the early iterations of the algorithm, and these constrain G̃. Thus, initially, the algorithm drives to impose ¾N² + ½M constraints on G̃, while 2N² independent conditions would be needed to determine G̃ uniquely. Thus, it is not surprising that steps D, E, and F, and especially E, strongly drive the algorithm to increase the effective SNR of the computed image.
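  • A hedged sketch of checking these three conditions numerically, using the same array conventions as the iteration sketch above (g_p is the image-domain result of step D); the residual definitions are illustrative.

```python
import numpy as np

def constraint_violations(g_p, tau):
    """Residuals of the three convergence conditions; all should approach zero."""
    fg = 1.0 - tau
    bg_residual   = np.linalg.norm(tau * g_p)           # background pixels zero
    imag_residual = np.linalg.norm(fg * g_p.imag)       # foreground imaginary parts zero
    re = fg * g_p.real
    pos_residual  = np.linalg.norm(np.abs(re) - re)     # foreground real parts non-negative
    return bg_residual, imag_residual, pos_residual
```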
  • By relaxing the imposition of zero intensity conditions on the background portion of the image, this method greatly reduces the incidence of stagnation. However, if there is excessive noise in the coherence magnitude data, the algorithm can still fail to converge. If the Fourier domain constraints consist of noisy coherence magnitude values, it is generally impossible to satisfy both image domain and Fourier domain constraints, leading to oscillation and stalled convergence. This issue can be addressed by a formula of the type of step B above, which relaxes the Fourier domain constraint in a manner that harmonizes the two classes of constraint, achieving intersection between them. A significant difference from prior approaches is that the relaxation parameter is chosen to be a positive constant much less than unity. Another point of difference is step E, which demands that intensity values within the image foreground be real and positive. The present approach can suppress substantial amounts of noise in the computed image when used on multiple sets of coherence magnitude data. Specifically, very large magnitudes of noise (very small data SNR) can be successfully handled.
  • In an embodiment of the invention, the phase retrieval algorithm is applied to each measurement for at least one iteration and the Fourier transform is applied to construct the desired optical image as shown in box 34. The system proceeds to box 36, where the desired optical image is evaluated to determine how much it has changed. Multiple iterations may be applied by returning to box 26 until the change in the image falls within a tolerance level and essentially ceases to change, as determined in box 38.
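  • A hedged sketch of this outer loop (boxes 26-38): the iteration is repeated until the normalized change in the image falls within a tolerance. It reuses iterate_once from the sketch above; the iteration cap and tolerance are illustrative.

```python
import numpy as np

def run_phase_retrieval(G_hat, tau, n_max=4000, tol=1e-6, seed=0):
    """Iterate boxes 26-34 until the image essentially ceases to change (boxes 36-38)."""
    rng = np.random.default_rng(seed)
    g = rng.random(G_hat.shape) * 0.1            # box 24: assumed initial image
    G_est = G_hat.copy()                         # relaxed coherence magnitudes, seeded by the data
    for _ in range(n_max):
        g_new, G_est, _ = iterate_once(g, G_est, tau)
        change = np.max(np.abs(g_new - g)) / max(np.max(np.abs(g_new)), 1e-30)
        g = g_new
        if change < tol:                         # the change has fallen within the tolerance
            break
    return g / np.max(np.abs(g))                 # report the infinity-norm-normalized image
```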
  • The following example illustrates various aspects of the invention and is not intended to limit the scope of the invention.
  • EXAMPLE
  • To illustrate results, we use a fictitious satellite image introduced and shown here in Illustration 1 below.
  • Our example also involves a huge amount of noise, e.g. SNR_Ĝ² = 10⁻⁸ (σ ≅ 10⁴/√2). It is assumed at the outset that the foreground of the image can be bounded by a simple boundary (a rectangle in this case), as illustrated by the dashed red line in Illustration 1. As is typical, the example has many "background" (zero intensity) pixels within the rectangle as well. The above algorithm can be complemented by one of several existing methods of incorporating all the background pixels within the projection τ; i.e., filling in the empty spaces in the rectangle in FIG. 1. To illustrate how this can be done in the initial stages of the algorithm, we examine each pixel in the initial rectangle when the image domain constraint violation, ‖τg‖₂², is at a minimum, and test to see whether it is less than 0.01‖g‖_∞. If so, the value of τ for that pixel is set to unity.
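  • A hedged sketch of this support-refinement step: when the image-domain constraint violation ‖τg‖₂² reaches a (local) minimum, pixels whose magnitude falls below 0.01‖g‖_∞ are moved into the background by setting their τ value to unity. The minimum-detection bookkeeping here is simplified and illustrative.

```python
import numpy as np

def refine_tau(g, tau, history, frac=0.01):
    """Tighten the background projection tau at a constraint-violation minimum."""
    history.append(np.linalg.norm(tau * g) ** 2)         # ||tau g||_2^2 for this iteration
    at_minimum = (len(history) >= 3 and
                  history[-2] < history[-3] and history[-2] < history[-1])
    if at_minimum:
        threshold = frac * np.max(np.abs(g))             # 0.01 * ||g||_inf
        tau = np.where(np.abs(g) < threshold, 1.0, tau)  # move faint pixels into the background
    return tau, history
```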
  • Graph 1 shows the constraint violation as a function of iteration, along with the image and τ values corresponding to various stages of development. It is evident that τ rapidly evolves into a tight boundary demarcating the background pixels. The complete projection can be found in this way during the processing of the first set of coherence magnitude data, then in the processing of subsequent data sets, the projection can be held constant. Graph 2 shows the evolution of the constraint violation over a longer period. After the first, brief oscillation, associated with refining τ, the constraint violation steadily decreases by over three orders of magnitude in 4000 iterations. In the following, we assume the refined value of τ and focus on the noise reduction characteristics of the algorithm.
  • From the outset, the magnitude of Ĝ obtained in step B is very large owing to the noise component, and likewise the magnitudes of g_p and g are similarly large. However, the average intensity of the image is immaterial to image interpretation, so we often normalize each image result by its infinity norm, that is: g → g/‖g‖_∞, where ‖g‖_∞ = max_k |g_k|.
  • The rank of τ is the number of pixels in the background, M. Hence the first equation amounts to 2M constraints on the noise component, G̃. The second condition supplies N² − M constraints. Since |Ḡ| ≪ |G̃|, and the phases of the elements of G̃ are uniformly distributed, one half of the N² − M constraints are operative in the early iterations of the algorithm, and these constrain G̃. Thus, initially, the algorithm drives to impose ¾N² + ½M constraints on G̃, while 2N² independent conditions would be needed to determine G̃ uniquely. Thus, it is not surprising that steps D, E, and F, and especially E, strongly drive the algorithm to increase the effective SNR of the computed image. One may illustrate this by following the trajectory of a typical pixel value in the complex plane as a function of the iteration number.
  • For our example, we choose to follow the evolution of matrix element (gp)k,j; k=j=15 which is arrived at in step D. gp is computed before the positivity of its real part is imposed in step E, and thus displays the full extent to which the current image estimate fails to satisfy the above constraints. Pixel (15,15) is located on the main body of the spacecraft in Illustration 1.
  • During the first hundred iterations (Illustration 2) (gp)15,15 makes very large excursions, starting with a substantial region in the left half plane. However, one immediately sees the influence of step E, because there is a constant drift to the right as shown in Illustration 3, for iterations 100 to 200. Further, the extent of variation in the real and imaginary parts remains relatively constant. The trend continues until (iterations 200-300, Illustration 4) the real part of (gp)15,15 remains entirely positive. At this point, the positivity constraint listed above becomes inoperative, and step E has no effect. Moreover, in the evolution in Illustration 4, the range of variation contains the value of the real part of (gp)15,15 that the algorithm will ultimately converge to. Hence at this stage, which occurs early in the convergence process, the variability of (gp)15,15 is comparable to the "signal" that will be converged to. One can say that the signal-to-noise ratio is approximately one or greater. This is attained even though the supplied coherence magnitude data has an SNR of one in one hundred million. Note that the algorithm (principally steps B and E) increases the SNR not by reducing the noise component of the image, but by increasing the signal component. These statements hold for all of the foreground pixels. Thus the algorithm quickly reaches a stage where the overall SNR is of order unity, a regime in which the constraints have been shown to effect further reduction of noise.
  • Beyond the situation shown in Illustration 4, the only image domain constraints that remain operative are the zeroing out of the background pixels and the imaginary parts of the image in the foreground. Illustration 5 shows the resulting development. The range of variation along the real axis remains centered at the eventually determined value, and continually decreases, while the imaginary part of (gp)15,15 converges to zero. At roughly 1500 iterations, (gp)15,15 converges to a real-valued and positive value.
  • Thus, at the very start of the algorithm the foreground image values are widely dispersed, with numerous pixels in the left-half plane. Steps B and D, however, work to shift all pixels to the right, until all their real parts are positive. After this stage, the real parts of g do not change appreciably; rather, it is the imaginary parts of g that are diminished. The pixel values move upward until they come to rest on the real axis. Note that steps B and D work to increase the estimated "true" coherence magnitudes, and correspondingly the estimated image, until the variability of the real parts of the image values increases beyond the noise levels of the measured coherence data. In effect, the numerous constraints on the problem allow us to estimate and suppress much of the noise until the effective SNR of the image estimate is greater than one.
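  • A small sketch of following one pixel of g_p in the complex plane across iterations, as in the discussion above; it assumes g, G_est, and tau from the iteration sketch earlier are in scope, and pixel (15, 15) is the one used in the example.

```python
track = []
for _ in range(300):
    g, G_est, g_p = iterate_once(g, G_est, tau)
    track.append(g_p[15, 15])                  # recorded before the positivity step acts

real_parts = [z.real for z in track]           # drifts right, then settles near the converged value
imag_parts = [z.imag for z in track]           # eventually shrinks toward zero
```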
  • Now we examine the statistics of the performance of the algorithm when it is used to process several independent magnitude measurements, each a realization of the statistical ensemble given by (2.b). To illustrate results, we again consider the value of (gp)15,15 as in the previous discussion. In this case we are concerned with the values obtained with each independent set of measurements once the algorithm is run to a high degree of convergence. Clearly, since there is random variation of the noise components of the different sets of coherence magnitude measurements, there should also be statistical variation in the converged values of (gp)15,15. To explore this, 500 different realizations of the noisy coherence magnitude measurements were created, and for each, the algorithm was run to convergence (to a high degree of approximation, using 2000 iterations in each case). The histogram of the resulting real, positive values of (gp)15,15 is shown in Graph 3. The results suggest that the probability density of (gp)15,15 is unimodal (indeed, approximately Gaussian) with an average value of 3.06×10⁵ and a standard deviation of 0.75×10⁵. This is an SNR of approximately four, despite the extremely noisy data, e.g. SNR_Ĝ² = 10⁻⁸.
  • Similar results are observed for all foreground pixels. This indicates that the converged algorithm creates a projection, call it P, that removes from Ḡ + G̃ the portion which is inconsistent with the satisfaction of the image domain constraints. Since Ḡ is the coherence magnitude of the actual image, P[Ḡ] is equal to some real, positive multiple, call it μ, of Ḡ: P[Ḡ] = μḠ.
  • Any two coherence magnitude data sets of the same object have a common Ḡ but different noise components, say G̃^(k) and G̃^(j). It is observed that G̃^(k) and G̃^(j), when operated on by P, are uncorrelated. Moreover, the standard deviation of each ‖G̃^(k)‖ is of order μ‖Ḡ‖. Since the Fourier operator is unitary, the same properties hold for the computed image, ḡ + g̃^(k), where ḡ is the actual image and g̃^(k) is the kth realization of its noisy component. Letting P_g = P[ℑ[ . . . ]], we have:
  • P_g[ḡ] = ν ḡ,   E[ P_g[g̃^(k)]ᵀ P_g[g̃^(j)] ] ≈ ν² ‖ḡ‖² δ_kj
  • where ν is a real, positive constant. These relations confirm that a refined image estimate can be found by averaging the results of the algorithm for each of several independent coherence magnitude data sets. Suppose there are L such sets, then:
  • g_L = (1/L) Σ_{m=1}^{L} g_m = ḡ + R,   s.d.[R] ∝ ‖ḡ‖/√L
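  • A hedged sketch of this averaging step: run the algorithm to convergence on each of L independent coherence-magnitude data sets and average the results. It reuses run_phase_retrieval from the earlier sketch, and make_noisy_data is a hypothetical callable standing in for the measurement model sketched above.

```python
import numpy as np

def averaged_image(make_noisy_data, tau, L=30):
    """Average the converged reconstructions from L independent data sets."""
    total = None
    for m in range(L):
        g_m = run_phase_retrieval(make_noisy_data(seed=m), tau, seed=m)
        total = g_m if total is None else total + g_m
    g_L = total / L
    return g_L / np.max(np.abs(g_L))           # g_L / ||g_L||_inf
```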
  • The improved convergence to the true image is illustrated for our example case for increasing values of L in Illustrations 6(a)-(d), where we display g_L/‖g_L‖_∞.
  • Comparing with Illustration 1, even with only ten measurements (Illustration 6.b) we see that there is considerable clarity to the image. The relatively small contrasts between different components of the image are mostly evident. Later images (Illustrations 6.b,c) show increasing fidelity, and using the color bar, one notes errors in the 10% to 1% range. More quantitatively, consider the standard deviation of the image error within the foreground pixels, defined by:

  • E_g = ‖[I − τ](ĝ_L − ĝ)‖₂ / rank[I − τ]

  • ĝ_L = g_L/‖g_L‖_∞

  • ĝ = ḡ/‖ḡ‖_∞
  • Note that each element of both ĝ_L and ĝ is contained in (0,1]. Graph 4 shows E_g as a function of the number of data sets used in the average. As anticipated, the function is approximately proportional to 1/√L.
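  • A hedged sketch of the error measure E_g defined above, with both images normalized by their infinity norms; the division by the foreground pixel count follows the formula as rendered here.

```python
import numpy as np

def foreground_error(g_L, g_true, tau):
    """E_g: error over the foreground pixels between normalized average and true images."""
    fg = 1.0 - tau                                       # I - tau selects the foreground
    g_hat_L = np.abs(g_L) / np.max(np.abs(g_L))          # g_L / ||g_L||_inf
    g_hat   = np.abs(g_true) / np.max(np.abs(g_true))    # true image, same normalization
    return np.linalg.norm(fg * (g_hat_L - g_hat)) / fg.sum()
```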
  • The above results pertain to the coherence magnitude squared signal-to-noise ratio, SNR_Ĝ², equal to 10⁻⁸. Now we consider the effects of various values of SNR_Ĝ². Graph 5 shows E_g versus L for SNR_Ĝ² ranging from 10⁻² to 10⁻¹⁰. Clearly, the larger SNR_Ĝ², the smaller the initial error. For very small values, we notice a "bottoming out" or apparent lower bound to the image error for the larger values of L. This appears to be the result of the formidable extent of computation and resultant round-off error and numerical conditioning. Pending further numerical refinements, SNR_Ĝ² no smaller than 10⁻¹⁰ to 10⁻¹¹ seems to be the limit to performance.
  • Implications for ICI Signal Processing
  • The fundamental data to be collected for ICI consists of recording the intensity fluctuations observed at each of a pair of apertures (separated by some position vector that is in proportion to the relative position in the Fourier, or "u-v", plane), using appropriate photodetectors. The two data streams are multiplied and time averaged. The basic discovery in [1] was that this (ensemble) averaged intensity fluctuation cross correlation is proportional to the square of the magnitude of the mutual coherence. Of course, only the time average, not the ensemble average, can be measured, so the basic data consists of the square of the modulus of the "true" coherence plus noise, as in the model for Ĝ = |Ḡ + G̃| above. Thus the SNR of the time averaged intensity fluctuation cross correlation is identical to the SNR of Ĝ². Retaining only the dominant terms, this takes the simple form:
  • SNR_Ĝ² ≈ β̄ √(ν_d ΔT) |γ|²
    β̄ = Average number of photodetections per sec, per Hz (for one aperture)
    ν_d = Photodetector frequency bandwidth
    ΔT = Averaging time period
    |γ| = Normalized coherence magnitude = |Ḡ|/‖Ḡ‖_∞ ∈ [0,1)
  • Further, assuming the apertures are identical and circular, β̄ is given by:
  • β̄ = (π/4) D² η n̄
  • D = Aperture diameter
    η = Detector quantum efficiency
    n̄ = Photons per second, per Hz, per unit area (spectral irradiance)
  • Now, the conventional approach is to let the averaging time increase until SNR_Ĝ² becomes sufficiently large that the time average well approximates the ensemble average, |Ḡ|². Then the resulting time averaged data for sufficiently many points in the u-v plane is input to a phase retrieval algorithm and the image reconstructed. Let us explore how long this might take using relatively inexpensive hardware on a fairly dim object.
  • For the above purpose, we take a 14th magnitude G-class star. Using a black body model and assuming ~50% attenuation through the atmosphere, we estimate: n̄ ≅ 7.5×10⁻¹¹.
  • Also, suppose 0.5 m apertures and a modest detector efficiency of 20%. Then β̄ ≅ 3×10⁻¹². Further, assume an inexpensive detector with bandwidth of only 10 MHz. To obtain reasonable image detail one must be able to detect |γ| of order 0.1, and a minimal required SNR is ~10. Evaluating
  • ΔT = (1/ν_d) (2 SNR_Ĝ² / (β̄ |γ|²))²,
  • we obtain the necessary averaging time:

  • ΔT = 1.4×10¹⁷ years.
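  • A small sketch checking the photodetection-rate estimate quoted above for the 14th magnitude example (0.5 m apertures, 20% quantum efficiency, n̄ ≅ 7.5×10⁻¹¹); the variable names are illustrative.

```python
import math

D, eta, n_bar = 0.5, 0.20, 7.5e-11            # aperture diameter (m), quantum efficiency, spectral irradiance
beta_bar = (math.pi / 4.0) * D ** 2 * eta * n_bar
print(f"beta_bar = {beta_bar:.2e}")           # about 3e-12, as quoted in the text
```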
  • Next, we consider the present algorithm, which views data collection and image reconstruction as a unified process. We take L time averages of the intensity fluctuation cross-correlations, each of duration ΔT_L, over non-overlapping time intervals. We accept the noisy data and run the algorithm to completion for each data set. Then we average the images resulting from all L data sets to obtain the normalized image error illustrated in Graphs 4 and 5. Consider: How long should ΔT_L be? How many data sets, L, are required for our 14th magnitude example?
  • To address the first question, we set SNR_Ĝ² to the value of the SNR we are prepared to process for each data set. At this time, as noted under Graph 5, an SNR in the range 10⁻¹⁰ to 10⁻¹¹ seems to result in the limiting performance. Using the same parameter values as above and setting SNR_Ĝ² = 6.4×10⁻¹¹, the equation above implies that roughly ΔT_L ~ 100 s.
  • Graph 5 shows that the limits to accuracy are achieved after about 30 data sets. Therefore, the total integration time = L·ΔT_L = 3000 s. Note from Graph 5 that the normalized average error is of order 0.005, implying a final image SNR that is well above 100.
  • While the principles of the invention have been described herein, it is to be understood by those skilled in the art that this description is made only by way of example and not as a limitation as to the scope of the invention. Further embodiments are contemplated within the scope of the present invention in addition to the exemplary embodiments shown and described herein. Modifications and substitutions by one of ordinary skill in the art are considered to be within the scope of the present invention, which is not to be limited except by the following claims.

Claims (1)

What is claimed is:
1. A method for producing an image, comprising:
receiving photo data by a plurality of apertures disposed on a surface, said photo data including a magnitude of an optical coherence;
communicating said photo data to an image assessment module configured to reduce noise within said photo data, wherein said image assessment module performs a phase retrieval process, said phase retrieval process comprising the steps of:
(a) assuming an initial image comprising initial image magnitude and initial image phase;
(b) manipulating said magnitude of said optical coherence to conform to said initial image magnitude resulting in an estimated magnitude;
(c) obtaining an image function from said estimated magnitude and said initial image phase;
(d) transforming said image function into an estimated image;
(e) applying imaging constraints to said estimated image to create an end image;
(f) determining an image differential between said initial image and said end image;
(g) evaluating whether said image differential has reached a predetermined no-change threshold; and
(h) reiterating steps (b)-(g) to attain the predetermined no-change threshold between said initial image and said end image, wherein said end image becomes said initial image.
US15/738,306 2015-06-25 2016-06-27 System and Method of Reducing Noise Using Phase Retrieval Abandoned US20180189934A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/738,306 US20180189934A1 (en) 2015-06-25 2016-06-27 System and Method of Reducing Noise Using Phase Retrieval

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201562184557P 2015-06-25 2015-06-25
PCT/US2016/039658 WO2016210443A1 (en) 2015-06-25 2016-06-27 System and method of reducing noise using phase retrieval
US15/738,306 US20180189934A1 (en) 2015-06-25 2016-06-27 System and Method of Reducing Noise Using Phase Retrieval

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/039658 A-371-Of-International WO2016210443A1 (en) 2015-06-25 2016-06-27 System and method of reducing noise using phase retrieval

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/940,847 Division US11386526B2 (en) 2015-06-25 2020-07-28 System and method of reducing noise using phase retrieval

Publications (1)

Publication Number Publication Date
US20180189934A1 true US20180189934A1 (en) 2018-07-05

Family

ID=57586575

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/738,306 Abandoned US20180189934A1 (en) 2015-06-25 2016-06-27 System and Method of Reducing Noise Using Phase Retrieval
US16/940,847 Active 2036-09-27 US11386526B2 (en) 2015-06-25 2020-07-28 System and method of reducing noise using phase retrieval

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/940,847 Active 2036-09-27 US11386526B2 (en) 2015-06-25 2020-07-28 System and method of reducing noise using phase retrieval

Country Status (2)

Country Link
US (2) US20180189934A1 (en)
WO (1) WO2016210443A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109724534A (en) * 2019-02-01 2019-05-07 吉林大学 Threshold selection method and device for iterative correlation imaging

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS604323B2 (en) * 1977-05-28 1985-02-02 石倉 富子 Methods of constructing quays, etc. and their frame structures
US20040052426A1 (en) * 2002-09-12 2004-03-18 Lockheed Martin Corporation Non-iterative method and system for phase retrieval
US20140169524A1 (en) * 2012-12-19 2014-06-19 General Electric Company Image reconstruction method for differential phase contrast x-ray imaging
US20140253987A1 (en) * 2011-10-26 2014-09-11 Two Trees Photonics Limited Iterative Phase Retrieval With Parameter Inheritance
US20180217064A1 (en) * 2017-02-02 2018-08-02 Korea Basic Science Institute Apparatus and method for acquiring fluorescence image

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6873744B2 (en) * 2002-04-17 2005-03-29 Regents Of The University Of Minnesota Image restoration from transformed component data
GB0201496D0 (en) * 2002-01-23 2002-03-13 Seos Ltd Illumination apparatus
JP2005539255A (en) * 2002-09-12 2005-12-22 エヌライン、コーパレイシャン System and method for capturing and processing composite images
US8103045B2 (en) * 2005-01-04 2012-01-24 Stc.Unm Structure function monitor
EP2461131B1 (en) * 2005-03-17 2017-08-09 The Board of Trustees of The Leland Stanford Junior University Apparatus and method for frequency-domain optical coherence tomography
EP1866616B1 (en) * 2005-04-05 2013-01-16 The Board Of Trustees Of The Leland Stanford Junior University Optical image processing using minimum phase functions
JP6004323B2 (en) * 2011-04-25 2016-10-05 国立大学法人北海道大学 Fourier iteration phase recovery method
GB2501112B (en) * 2012-04-12 2014-04-16 Two Trees Photonics Ltd Phase retrieval
JP2016529676A (en) * 2013-08-28 2016-09-23 フィリップス ライティング ホールディング ビー ヴィ System for sensing user exposure to light

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS604323B2 (en) * 1977-05-28 1985-02-02 石倉 富子 Methods of constructing quays, etc. and their frame structures
US20040052426A1 (en) * 2002-09-12 2004-03-18 Lockheed Martin Corporation Non-iterative method and system for phase retrieval
US20140253987A1 (en) * 2011-10-26 2014-09-11 Two Trees Photonics Limited Iterative Phase Retrieval With Parameter Inheritance
US9857771B2 (en) * 2011-10-26 2018-01-02 Two Trees Photonics Limited Iterative phase retrieval with parameter inheritance
US20140169524A1 (en) * 2012-12-19 2014-06-19 General Electric Company Image reconstruction method for differential phase contrast x-ray imaging
US20180217064A1 (en) * 2017-02-02 2018-08-02 Korea Basic Science Institute Apparatus and method for acquiring fluorescence image

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109724534A (en) * 2019-02-01 2019-05-07 吉林大学 Threshold selection method and device for iterative correlation imaging

Also Published As

Publication number Publication date
US20200394761A1 (en) 2020-12-17
WO2016210443A1 (en) 2016-12-29
US11386526B2 (en) 2022-07-12

Similar Documents

Publication Publication Date Title
Garg et al. Comparison of various noise removals using Bayesian framework
Rostami et al. Image deblurring using derivative compressed sensing for optical imaging application
Cunnington et al. 21-cm foregrounds and polarization leakage: cleaning and mitigation strategies
Connor et al. Deep radio-interferometric imaging with POLISH: DSA-2000 and weak lensing
US9759995B2 (en) System and method for diffuse imaging with time-varying illumination intensity
US11386526B2 (en) System and method of reducing noise using phase retrieval
Donghua et al. A multiscale transform denoising method of the bionic polarized light compass for improving the unmanned aerial vehicle navigation accuracy
Burtscher et al. Observing faint targets with MIDI at the VLTI: the MIDI AGN large programme experience
Bos et al. Robustness of speckle-imaging techniques applied to horizontal imaging scenarios
US11348205B2 (en) Image processing apparatus, image processing method, and storage medium
Fischer et al. Signal processing efficiency of Doppler global velocimetry with laser frequency modulation
CN106991659A (en) A kind of multi-frame self-adaption optical image restoration methods for adapting to atmospheric turbulance change
Molnar et al. Spectral deconvolution with deep learning: removing the effects of spectral PSF broadening
Gladysz et al. Characterization of the Lick adaptive optics point spread function
Samed et al. An Improved star detection algorithm using a combination of statistical and morphological image processing techniques
Wang et al. Optical satellite image MTF compensation for remote-sensing data production
Malygin et al. Medium-band photometric reverberation mapping of AGNs at $0.1< z< 0.8$. Techniques and sample
Hadj-Youcef et al. Spatio-spectral multichannel reconstruction from few low-resolution multispectral data
Turrisi et al. Image deconvolution techniques for motion blur compensation in DIC measurements
Anconelli et al. Iterative methods for the reconstruction of astronomical images with high dynamic range
Jenkin Fast Prediction of Contrast Detection Probability
Chatellier et al. Assessment of the statistical relevance of TR-PIV datasets
WO2024111145A1 (en) Noise elimination method, noise elimination program, noise elimination system, and learning method
Woods et al. Spatial-frequency-based metric for image superresolution
Mercier et al. Design rules for Background Oriented Schlieren experiments with least-squares based displacement calculation

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION