WO2018173031A1 - System and method for diffraction compensation - Google Patents

System and method for diffraction compensation

Info

Publication number
WO2018173031A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image data
imaging
filter
linear filter
Prior art date
Application number
PCT/IL2017/050368
Other languages
English (en)
Inventor
Erel Granot
Shalom BLOCH
Shmuel Sternklar
Original Assignee
Ariel Scientific Innovations Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ariel Scientific Innovations Ltd. filed Critical Ariel Scientific Innovations Ltd.
Priority to PCT/IL2017/050368 priority Critical patent/WO2018173031A1/fr
Publication of WO2018173031A1 publication Critical patent/WO2018173031A1/fr


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/10Image enhancement or restoration using non-spatial domain filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image

Definitions

  • the present invention in some embodiments thereof, relates to image processing and, more particularly, but not exclusively, to a system and a method for compensating diffraction.
  • Wave diffraction is a known physical phenomenon, in which the propagation direction of a wave is redirected due to interaction between the wave and small size objects.
  • the diffraction may generate spatial distortion of the image information.
  • the distortion can reduce the sharpness or produce other undesired aberrations.
  • many imaging modalities utilize optical elements such as lenses for compensating diffraction.
  • the present Inventors discovered a technique that can be used for compensating diffraction by means of image processing.
  • a method of processing image data generated by capturing an image of an object through a grating selected to up-shift a spatial frequency of radiation received from the object comprises: applying a linear filter to the image data to provide filtered image data; down-shifting a spatial frequency of the filtered image data; and reconstructing an image based on the down-shifted image data, thereby processing the image data.
  • a method of imaging comprises: capturing an image of an object through a grating selected to up-shift a spatial frequency of radiation received from the object, thereby providing image data; applying a linear filter to the image data to provide filtered image data; down-shifting a spatial frequency of the filtered image data; and reconstructing an image based on the down-shifted image data, thereby processing the image data.
  • the method comprises applying a band pass filter to the image data prior to the application of the linear filter.
  • the linear filter is a spatial filter and the method is executed without transforming the image data to a spectral domain.
  • the image is an electron beam image. According to some embodiments of the invention the image is an X-ray image. According to some embodiments of the invention the image is an ultrasound image. According to some embodiments of the invention the image is a thermal image. According to some embodiments of the invention the image is an ultraviolet image. According to some embodiments of the invention the image is a visible image. According to some embodiments of the invention the image is an infra-red image. According to some embodiments of the invention the image is captured by an imaging system that is devoid of any diffraction compensating optical element.
  • the filter is a spectral filter.
  • the method comprises transforming the image data to a spectral domain prior to application of the linear filter.
  • the linear filter is characterized by a cutoff parameter, which is higher than 10% of a difference between a maximal intensity and an average intensity of the image.
  • the linear filter is characterized by a cutoff parameter, which is less than 90% of a difference between the maximal intensity and an average intensity of the image.
  • the cutoff parameter is less than 90% and above 10% of the difference between the maximal intensity and the average intensity of the image.
  • the cutoff parameter equals about one third of the difference between the maximal intensity and the average intensity of the image.
  • an imaging kit comprising a grating selected to up-shift a spatial frequency of radiation received from an object, an imaging system configured for imaging the object through the grating to provide image data, and an image processor configured to apply a linear filter to the image data to provide filtered image data, and to down-shift a spatial frequency of the filtered image data.
  • the imaging kit wherein the image processor is configured for applying a band pass filter to the up-shifted image data prior to the application of the linear filter.
  • the imaging system is an electron beam imaging system. According to some embodiments of the invention the imaging system is an X-ray imaging system. According to some embodiments of the invention the imaging system is an ultrasound imaging system. According to some embodiments of the invention the imaging system is a thermal imaging system. According to some embodiments of the invention the imaging system is an ultraviolet imaging system. According to some embodiments of the invention the imaging system is devoid of any diffraction compensating optical element.
  • a system for processing image data comprises an input for receiving image data, and an image processor configured to apply a linear filter to the image data to provide filtered image data, to down-shift a spatial frequency of the filtered image data, and to reconstruct an image based on the down-shifted image data.
  • the filter is a spatial filter.
  • the filter is a spectral filter.
  • the image processor is configured for transforming the image data to a spectral domain prior to application of the linear filter.
  • the linear filter is characterized by a cutoff parameter, which is higher than 10% of a difference between a maximal intensity and an average intensity of the image.
  • the linear filter is characterized by a cutoff parameter, which is less than 90% of a difference between the maximal intensity and an average intensity of the image.
  • the cutoff parameter is less than 90% and above 10% of the difference between the maximal intensity and the average intensity of the image.
  • the cutoff parameter equals about one third of the difference between the maximal intensity and the average intensity of the image.
  • Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
  • a data processor such as a computing platform for executing a plurality of instructions.
  • the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data.
  • a network connection is provided as well.
  • a display and/or a user input device such as a keyboard or mouse are optionally provided as well.
  • FIG. 1 is a flowchart diagram describing a method suitable for processing image data, according to some embodiments of the present invention
  • FIG. 2 is a schematic illustration of an imaging kit according to some embodiments of the present invention.
  • FIGs. 3A-C are images demonstrating diffraction compensation without up-shifting a spatial frequency of the image data, as obtained in experiments performed according to some embodiments of the present invention.
  • FIG. 4 is a graph comparing the performance of the technique of the present embodiments when up-shifting of the spatial frequency is employed with its performance when up-shifting of the spatial frequency is not employed;
  • FIGs. 5A-C are images comparing the diffraction compensation without spatial frequency up-shifting to diffraction compensation with spatial frequency up-shifting, as obtained in experiments performed according to some embodiments of the present invention
  • FIG. 6 is a graph showing a relation between a cutoff parameter and image contrast, as obtained in experiments performed according to some embodiments of the present invention.
  • FIG. 7 shows the difference between the maximal intensity and the average intensity of an image, as a function of the maximal variation of the absorption coefficient of an imaged object.
  • the present invention in some embodiments thereof, relates to image processing and, more particularly, but not exclusively, to a system and a method for compensating diffraction.
  • FIG. 1 is a flowchart diagram describing a method suitable for processing image data, according to some embodiments of the present invention.
  • the method begins at 10 and optionally and preferably continues to 11 at which an image of an object is captured through a grating to provide image data.
  • the image can be obtained from a different source (e.g., a computer readable medium, the internet, a cloud storage facility, etc.).
  • the capturing, when executed, is by an imaging system that receives radiation, optionally and preferably coherent radiation, from the object, through the grating, and outputs the image data.
  • the grating is preferably placed in proximity to or on the object. When the grating is placed in proximity to the object, the distance between the grating and the object is larger than the distance between the grating and the imaging system that captures the image.
  • the position and size of the grating is such that the entire field-of-view of the object, from the viewpoint of the imaging system, is behind the grating with respect to the imaging system.
  • the method of the present embodiments is particularly useful when at least a portion of the radiation from the object experiences diffraction before being recorded by the imaging system.
  • the imaging system records the diffracted portions of the radiation without correcting them by optical means (e.g., lenses and the like).
  • the imaging system optionally and preferably records these portions as well.
  • the imaging system typically employs a pixelated imager (e.g., a CMOS or a CCD imager) that resolves the spatial distribution of the received radiation.
  • the radiation can be of any type, including, without limitation, electromagnetic radiation, electron beam radiation and ultrasound radiation.
  • electromagnetic radiation it can be a visible light radiation, an infrared radiation, an ultraviolet radiation, an X-ray radiation, and the like.
  • the present embodiments can be used for processing images captured by many types of imaging techniques, including, without limitation, electron beam imaging, X-ray imaging, computerized tomography, ultrasound imaging, thermal imaging, ultraviolet imaging, infrared imaging, visible light imaging, and the like.
  • the image data is typically arranged gridwise in a plurality of picture-elements (e.g., pixels, arrangements of pixels) representing the image.
  • The term "pixel" is sometimes used herein as an abbreviation for a picture-element. However, this is not intended to limit the meaning of the term "picture-element", which refers to a unit of the composition of an image.
  • references to an "image" herein are, inter alia, references to values at picture-elements treated collectively as an array, typically a two-dimensional array.
  • image as used herein also encompasses a mathematical object which does not necessarily correspond to a physical object.
  • the original and processed images certainly do correspond to a physical object, namely the object from which the imaging data are acquired.
  • Each pixel in the image can be associated with a single digital intensity value, in which case the image is a grayscale image.
  • each pixel is associated with three or more digital intensity values sampling the amount of light at three or more different color channels (e.g., red, green and blue), in which case the image is a color image.
  • Also contemplated are images in which each pixel is associated with a mantissa for each color channel and a common exponent (e.g., the so-called RGBE format). Such images are known as "high dynamic range" images.
  • diffraction is a linear optical process that can be corrected by means of linear optics
  • the diffraction is not a linear process once captured by an imager since the electrical signal generated by the imager is indicative of the optical intensity, which is proportional to the square of the optical electromagnetic field.
  • the imager destroys the linearity by generating an electrical signal which is nonlinear with respect to the optical electromagnetic field.
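  • As a one-line check of this non-linearity (an illustrative example, not part of the patent): for two field contributions E1 and E2, the recorded intensity |E1 + E2|² differs from |E1|² + |E2|² by the interference term 2·Re(E1·conj(E2)).

```python
import numpy as np

# Illustrative check: intensity is quadratic in the field, so the imager's
# output for a sum of field contributions is not the sum of the outputs.
e1, e2 = 0.8 * np.exp(1j * 0.3), 0.5 * np.exp(-1j * 1.1)   # arbitrary complex field values
intensity_of_sum = np.abs(e1 + e2) ** 2
sum_of_intensities = np.abs(e1) ** 2 + np.abs(e2) ** 2
interference_term = 2 * np.real(e1 * np.conj(e2))
assert np.isclose(intensity_of_sum, sum_of_intensities + interference_term)
```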
  • the technique of the present embodiments successfully mitigates diffraction effects exhibited by the optical signal after image capture, even though the diffraction is non-linear with respect to the signal generated by the imager.
  • the technique of the present embodiments is particularly useful when the imaged object is characterized by sufficiently small variation (e.g., variation of less than 50% or less than 40% or less than 30% or less than 20% or less than 10% or less than 5%) of an absorption coefficient or a refraction index across a surface of the object from which the radiation is received.
  • the technique of the present embodiments is particularly useful when the image is characterized by sufficiently small intensity variations, e.g., intensity variation of less than 50% or less than 40% or less than 30% or less than 20% or less than 10% or less than 5% from the average intensity of the image.
  • the intensity of a picture-element of the image can be expressed, for example, in gray-level units.
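  • As a simple illustration of this small-variation condition (illustrative only, not part of the patent; the 10% threshold below is just one of the example values listed above):

```python
import numpy as np

def has_small_intensity_variation(img, max_relative_variation=0.10):
    """Return True if every pixel deviates from the average intensity of
    the image (in gray-level units) by less than `max_relative_variation`
    of that average."""
    mean_intensity = img.mean()
    return np.max(np.abs(img - mean_intensity)) < max_relative_variation * mean_intensity
```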
  • the grating is optionally and preferably characterized by a spatial frequency that is higher than the maximal spatial frequency that an image would have had, had the image been captured by the same imaging device but without the grating.
  • the spatial frequency DF of the grating optionally and preferably satisfies DF > X*FM, where FM is the maximal spatial frequency that the image would have had without the grating, and where X is at least 1.1 or at least 1.2 or at least 1.3 or at least 1.4 or at least 1.5.
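  • Expressed as a simple check (illustrative only; X = 1.1 is just one of the example margins above):

```python
def grating_frequency_sufficient(df, fm, x=1.1):
    """True if the grating spatial frequency DF exceeds the maximal
    grating-free image frequency FM by at least the margin factor X."""
    return df > x * fm
```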
  • the method optionally and preferably continues to 13 at which a spatial band pass filter is applied to the image data.
  • the cutoff frequencies of the band pass filter are optionally and preferably selected so as to remove frequencies below DF-FM and above DF+FM.
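  • A possible spectral-domain realization of this band pass (a sketch assuming the grating modulates along the horizontal image axis; numpy and the function name band_pass_mask are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def band_pass_mask(shape, df, fm):
    """Boolean mask that keeps spatial frequencies between DF - FM and
    DF + FM along the (assumed horizontal) grating direction and removes
    the rest, as described for the band pass filter above."""
    ny, nx = shape
    wx = np.abs(np.fft.fftfreq(nx))                 # spatial frequencies, cycles/pixel
    keep = (wx >= df - fm) & (wx <= df + fm)        # pass-band around the grating frequency
    return np.tile(keep, (ny, 1))                   # same pass-band applied to every row

# illustrative usage (df and fm values are hypothetical):
# filtered_spectrum = np.fft.fft2(img) * band_pass_mask(img.shape, df=0.25, fm=0.10)
```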
  • the method preferably continues to 14 at which a linear filter is applied to the image data, optionally and preferably the band-pass filtered image data.
  • the filter can be applied by an image processor, which can process the image data to provide output image data composed of a summation of successive data samples weighted by individual coefficients.
  • the data samples are optionally and preferably two-dimensional data samples, and the filter is optionally and preferably a two-dimensional filter.
  • the operation of the image processor when applying the filter is typically described by a mathematical filtering function, which optionally and preferably provides the individual coefficients and the number of samples. Representative examples of filtering functions suitable for the present embodiments are provided below. It was found by the present Inventors that such a combination of the shift in spatial frequencies and the application of the linear filter mitigates the diffraction effect in the image.
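  • The "summation of successive data samples weighted by individual coefficients" described above is a two-dimensional convolution. A generic sketch follows (the actual coefficients would come from the filtering functions discussed below; the use of scipy here is an assumption, not the patent's implementation):

```python
import numpy as np
from scipy.signal import convolve2d

def apply_linear_filter(img, coefficients):
    """Each output sample is a weighted sum of neighbouring input samples,
    with the weights given by the 2-D `coefficients` kernel."""
    return convolve2d(img, coefficients, mode="same", boundary="symm")
```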
  • the filtering function that describes the filter applied at 14 is characterized by a cutoff parameter.
  • the cutoff parameter is preferably dimensionless.
  • the cutoff parameter can correspond to a number of terms in a series expansion of the filter. For example, the number of terms can be about the reciprocal of the cutoff parameter, e.g., the nearest integer, the floor, or the ceiling of that reciprocal.
  • the cutoff parameter ⁇ is selected based on the maximal variation of the absorption coefficient or refraction index across a surface of the object. As shown in the Examples section that follows, the maximal variation of the absorption coefficient or refraction index can be approximated using the difference between the maximal intensity and the average intensity of the image.
  • the cutoff parameter can be about Y*m, where m is the difference between the maximal intensity and the average intensity of the image, and where Y is a positive number which is preferably less than 1, e.g., from about 0.1 to about 0.9, or from about 0.1 to about 0.8, or from about 0.1 to about 0.7, or from about 0.1 to about 0.6, or from about 0.1 to about 0.5, or from about 0.1 to about 0.4, or from about 0.2 to about 0.4, e.g., about 0.3.
  • Other values of Y are also contemplated.
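  • A small sketch of this choice of cutoff parameter (assuming image intensities are normalized so that the parameter comes out dimensionless; the function names are illustrative assumptions):

```python
import numpy as np

def cutoff_parameter(img, y=1.0 / 3.0):
    """Cutoff parameter of about Y * m, where m is the difference between
    the maximal intensity and the average intensity of the image and Y is
    about one third, as in the embodiment described above."""
    m = img.max() - img.mean()
    return y * m

def number_of_series_terms(cutoff):
    """Number of terms in the filter's series expansion, about the
    reciprocal of the cutoff parameter (rounded to the nearest integer)."""
    return int(round(1.0 / cutoff))
```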
  • the filter that is applied at 14 can be embodied in more than one way.
  • a spectral filter is applied globally to the entire image, as disclosed, for example, in Gureyev, Opt. Commun. 220, 49-58 (2003), or Gureyev et al., Opt. Commun. 231, 53-70 (2004), or Gureyev et al., Appl. Opt. 43, 2418-2430 (2004).
  • the image data are transformed to the spectral domain and a spectral filter is applied to the transformed image data. Thereafter, the image data is transformed back to the spatial domain.
  • the transform from the spatial domain to the spectral domain can be by applying a Fourier Transform (FT) or a Fast FT (FFT) to the image data I(x, y) to provide spectral image data I(ωx, ωy), where x and y are spatial coordinates of picture-elements in the image and ωx and ωy are spatial frequencies of the spectral image.
  • the transform from I(ωx, ωy) back to I(x, y) can be by applying the inverse of the transform that was applied to I(x, y).
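  • A minimal numpy sketch of this spectral-domain route (forward FFT, multiplication by a spectral filtering function G, inverse FFT); G is passed in as a generic callable standing for whichever filtering function is chosen:

```python
import numpy as np

def spectral_filtering(img, G):
    """Transform I(x, y) to I(wx, wy), multiply by the spectral filtering
    function G(wx, wy), and transform back to the spatial domain."""
    ny, nx = img.shape
    wx = 2 * np.pi * np.fft.fftfreq(nx)[None, :]   # angular spatial frequency, rad/pixel
    wy = 2 * np.pi * np.fft.fftfreq(ny)[:, None]
    spectrum = np.fft.fft2(img)
    return np.real(np.fft.ifft2(spectrum * G(wx, wy)))
```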
  • a representative example of a filtering function G(ωx, ωy) suitable for describing a spectral filter that can be applied to I(ωx, ωy) for the case of monochromatic imaging, is:
  • a spatial filter is employed.
  • a representative example of a filtering function g(x, y) suitable for describing a spatial filter that can be applied to I(x, y) for the case of monochromatic imaging, is:
  • the method can optionally and preferably continue to 15 at which the spatial frequency of the image data is down-shifted, optionally and preferably to the original spatial frequency, and to 16 at which an image is reconstructed based on the down-shifted image data.
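  • Putting steps 13-16 together, the following Python sketch chains the band pass, the linear filter, the down-shift and the reconstruction. It is illustrative only: the FFT-based realization, the assumption of a one-dimensional sinusoidal grating of known spatial frequency f_g, and the function and parameter names are assumptions, not the patent's prescribed implementation.

```python
import numpy as np

def process_diffracted_image(img, f_g, fm, linear_filter):
    """Illustrative pipeline for image data captured through a grating.

    img           -- 2-D array of intensities captured through the grating
    f_g           -- grating spatial frequency along x (cycles/pixel), assumed known
    fm            -- maximal spatial frequency of the grating-free image (cycles/pixel)
    linear_filter -- callable G(wx, wy) returning the spectral filter values
    """
    ny, nx = img.shape
    wx = np.fft.fftfreq(nx)[None, :]   # horizontal spatial frequencies (cycles/pixel)
    wy = np.fft.fftfreq(ny)[:, None]   # vertical spatial frequencies (cycles/pixel)
    spectrum = np.fft.fft2(img)

    # Band pass: keep only the up-shifted side-band, i.e. frequencies within
    # fm of the grating frequency, and discard everything else.
    spectrum = spectrum * (np.abs(wx - f_g) <= fm)

    # Linear (diffraction-compensating) filter applied in the spectral domain.
    spectrum = spectrum * linear_filter(wx, wy)

    # Down-shift: move the side-band back to baseband by shifting the
    # spectrum by f_g along the horizontal frequency axis.
    spectrum = np.roll(spectrum, -int(round(f_g * nx)), axis=1)

    # Reconstruct the (real-valued) image from the down-shifted spectrum.
    return np.real(np.fft.ifft2(spectrum))
```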
  • FIG. 2 is a schematic illustration of a kit 20 according to some embodiments of the present invention.
  • Kit 20 comprises a data processing system 30 having a computer 32, which typically comprises an input/output (I/O) circuit 34, an image processor, such as a central processing unit (CPU) 36 (e.g., a microprocessor), and a memory 38 which typically includes both volatile memory and non-volatile memory.
  • I/O circuit 34 is used to communicate information in appropriately structured form to and from CPU 36 and other devices or networks external to system 30.
  • CPU 36 is in communication with I/O circuit 34 and memory 38.
  • a display device 40 is shown in communication with computer 32, typically via I/O circuit 34.
  • Computer 32 issues to display device 40 output images generated by CPU 36.
  • a keyboard 42 is also shown in communication with computer 32, typically via I/O circuit 34.
  • system 30 can be part of a larger system.
  • system 30 can also be in communication with a network, such as connected to a local area network (LAN), the Internet or a cloud computing resource of a cloud computing facility.
  • Kit 20 can additionally comprise a grating 47 selected to up-shift a spatial frequency of radiation 50 received from an object 52, and an imaging system 44 that images object 52 through grating 47 to provide image data.
  • Imaging system 44 is preferably devoid of any diffraction compensating optical element.
  • imaging systems suitable for the present embodiments include, without limitation, an electron beam imaging system, an X-ray imaging system, an ultrasound imaging system, a thermal imaging system, an ultraviolet imaging system, an infrared imaging system and a visible light imaging system.
  • computer 32 of system 30 is configured for receiving the image data from imaging system 44 or a computer readable storage 46, applying a linear filter to the image data, down-shifting the spatial frequency of the filtered image data, reconstructing an image based on the down-shifted image data, as further detailed hereinabove, and displaying the image on display 40.
  • system 30 communicates with a cloud computing resource (not shown) of a cloud computing facility, wherein the cloud computing resource receives the image data, applies the linear filter, down-shifts the spatial frequency of the filtered image data, reconstructs an image based on the down-shifted image data as further detailed hereinabove, and transmits it to system 30 for displaying the image on display 40.
  • the method as described above can be implemented in computer software executed by system 30.
  • the software can be stored in or loaded to memory 38 and executed on CPU 36.
  • some embodiments of the present invention comprise a computer software product which comprises a computer-readable medium, more preferably a non-transitory computer-readable medium, in which program instructions are stored. The instructions, when read by computer 32, cause computer 32 to receive the image data and execute the method as described above.
  • The term "consisting essentially of" means that the composition, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
  • the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.
  • range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
  • FIGs. 3A-C are images demonstrating the diffraction compensation without up-shifting the spatial frequency of the image data.
  • FIG. 3A shows the original image
  • FIG. 3B shows the diffracted image from which the input image data was obtained
  • FIG. 3C shows the output reconstructed (after applying the filter) image.
  • FIG. 4 is a graph that compares the performance of the technique when up-shifting of the spatial frequency is employed with its performance when up-shifting of the spatial frequency is not employed. The comparison is for the case in which the modulation depth is 100%.
  • the comparison of FIG. 4 corresponds to the following set of parameters: image size of 512x512 pixels, detector array size of 256x256, radiation wavelength of 1 angstrom, diffraction distance of 1 m, maximal phase difference of 1 radian, and phase and amplitude linear ratio of about 10.
  • the dot-dashed line corresponds to the original image data
  • the dotted line corresponds to the diffracted image without filtering and without spatial frequency up-shifting
  • the dashed line corresponds to a corrected image after filtering but without spatial frequency up-shifting
  • the solid line corresponds to a corrected image after spatial frequency up-shifting and filtering.
  • FIGs. 5A-C are images comparing the diffraction compensation without the spatial frequency up-shifting to the diffraction compensation with the spatial frequency up-shifting.
  • FIG. 5A shows the original image
  • FIG. 5B shows the diffraction compensation without the spatial frequency up-shifting
  • FIG. 5C shows the diffraction compensation with the spatial frequency up-shifting.
  • both techniques successfully reconstruct the original image, with a substantial improvement in FIG. 5C relative to FIG. 5B.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention concerns a method of imaging comprising: capturing an image of an object through a grating selected to up-shift a spatial frequency of radiation received from the object, thereby providing image data; applying a linear filter to the image data to provide filtered image data; down-shifting a spatial frequency of the filtered image data; and reconstructing an image based on the down-shifted image data, thereby processing the image data.
PCT/IL2017/050368 2017-03-24 2017-03-24 System and method for diffraction compensation WO2018173031A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IL2017/050368 WO2018173031A1 (fr) 2017-03-24 2017-03-24 System and method for diffraction compensation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IL2017/050368 WO2018173031A1 (fr) 2017-03-24 2017-03-24 System and method for diffraction compensation

Publications (1)

Publication Number Publication Date
WO2018173031A1 (fr) 2018-09-27

Family

ID=63585943

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2017/050368 WO2018173031A1 (fr) 2017-03-24 2017-03-24 System and method for diffraction compensation

Country Status (1)

Country Link
WO (1) WO2018173031A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040245439A1 (en) * 2002-07-26 2004-12-09 Shaver David C. Optical imaging systems and methods using polarized illumination and coordinated pupil filter
US20150077755A1 (en) * 2003-10-27 2015-03-19 The General Hospital Corporation Method and apparatus for performing optical imaging using frequency-domain interferometry
US20060164287A1 (en) * 2005-01-21 2006-07-27 Safeview, Inc. Depth-based surveillance image reconstruction
US20150330775A1 (en) * 2012-12-12 2015-11-19 The University Of Birmingham Simultaneous multiple view surface geometry acquisition using structured light and mirrors

Similar Documents

Publication Publication Date Title
  • JP6952277B2 (ja) Imaging device and spectroscopic system
Ancuti et al. Single-scale fusion: An effective approach to merging images
Branchitta et al. New technique for the visualization of high dynamic range infrared images
  • JP6653460B2 (ja) Imaging device, imaging system, image generation device, and color filter
US10373296B2 (en) Method and system for reducing ringing artifacts in X-ray image
  • WO2021192891A1 (fr) Signal processing method, signal processing device, and image capturing system
  • JP2022107641A (ja) Imaging method
US10012953B2 (en) Method of reconstructing a holographic image and apparatus therefor
US9014506B2 (en) Image processing method, program, image processing apparatus, image-pickup optical apparatus, and network device
Ogawa et al. Demosaicking method for multispectral images based on spatial gradient and inter-channel correlation
Rafinazari et al. Demosaicking algorithm for the Kodak-RGBW color filter array
US20170171476A1 (en) System and method for spectral imaging
  • WO2018173031A1 (fr) System and method for diffraction compensation
Yamashita et al. Low-light color image enhancement via iterative noise reduction using RGB/NIR sensor
  • JP6835227B2 (ja) Image processing device, image processing method, and computer program
US11378916B2 (en) Holographic display method and holographic display device
Mikelsons et al. A fast and robust implementation of the adaptive destriping algorithm for SNPP VIIRS and Terra/Aqua MODIS SST
Rao et al. Digital Image Processing and Applications
Wang et al. Polarization image fusion algorithm based on global information correction
Klein Multispectral imaging and image processing
  • JP4712436B2 (ja) Suppression of periodic fluctuations in a digital signal
Lagunas et al. Human eye visual hyperacuity: Controlled diffraction for image resolution improvement
Kamlah et al. Wavelength dependence of image quality metrics and seeing parameters and their relation to adaptive optics performance
Yang et al. Detail-enhanced and brightness-adjusted exposure image fusion
Picone et al. Pansharpening of images acquired with color filter arrays

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17901579

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17901579

Country of ref document: EP

Kind code of ref document: A1