CN113711133A - System and method for color holographic microscope based on deep learning - Google Patents


Info

Publication number
CN113711133A
Authority
CN
China
Prior art keywords
sample
image
color
images
neural network
Prior art date
Legal status
Pending
Application number
CN202080030303.1A
Other languages
Chinese (zh)
Inventor
Aydogan Ozcan
Yair Rivenson
Tairan Liu
Yibo Zhang
Zhensong Wei
Current Assignee
University of California
Original Assignee
University of California
Priority date
Filing date
Publication date
Application filed by University of California
Publication of CN113711133A

Classifications

    • G03H1/0443 Digital holography, i.e. recording holograms with digital recording means
    • G03H1/0808 Methods of numerical synthesis, e.g. coherent ray tracing [CRT], diffraction specific
    • G03H1/0866 Digital holographic imaging, i.e. synthesizing holobjects from holograms
    • G03H1/2645 Multiplexing processes, e.g. aperture, shift, or wavefront multiplexing
    • G03H2001/005 Adaptation of holography to specific applications in microscopy, e.g. digital holographic microscope [DHM]
    • G03H2001/0447 In-line recording arrangement
    • G03H2001/266 Wavelength multiplexing
    • G03H2210/11 Amplitude modulating object
    • G03H2210/12 Phase modulating object, e.g. living cell
    • G03H2210/13 Coloured object
    • G03H2222/13 Multi-wavelengths wave with discontinuous wavelength ranges
    • G03H2222/17 White light
    • G03H2222/18 RGB trichrome light
    • G03H2222/34 Multiple light sources
    • G03H2227/03 Means for moving one component
    • G03H2240/56 Resolution
    • G03H2240/62 Sampling aspect applied to sensor or display
    • G06N3/045 Combinations of networks
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent


Abstract

A method for performing color image reconstruction of a single super-resolved holographic image of a sample includes obtaining a plurality of sub-pixel-shifted, lower-resolution holographic images of the sample with an image sensor by simultaneously illuminating the sample at a plurality of color channels. A super-resolved holographic intensity image is digitally generated for each color channel based on the lower-resolution holographic images. The super-resolved holographic intensity image of each color channel is back-propagated to the object plane using image processing software to generate real and imaginary input images of the sample for each color channel. A trained deep neural network, executed by the image processing software using one or more processors of a computing device, is configured to receive the real and imaginary input images of the sample for each color channel and to generate a color output image of the sample.

Description

System and method for color holographic microscope based on deep learning
RELATED APPLICATIONS
This application claims priority to U.S. Provisional Patent Application No. 62/837,066, filed on April 22, 2019, the entire contents of which are incorporated herein by reference. Priority is claimed pursuant to 35 U.S.C. § 119 and any other applicable statute.
Statement regarding federally sponsored research and development
This invention was made with government support under Grant Number EEC-1648451, awarded by the National Science Foundation. The government has certain rights in the invention.
Technical Field
The technical field generally relates to methods and systems for performing high-fidelity color image reconstruction from a single super-resolved hologram using a trained deep neural network. In particular, the system and method use a single super-resolved hologram, obtained while the sample is simultaneously illuminated at a plurality of different wavelengths, as the input to a trained deep neural network that outputs a high-fidelity color image of the sample.
Background
Histological staining of fixed thin tissue sections mounted on slides is a fundamental step required for the diagnosis of various medical conditions. Histological staining is used to highlight tissue components for microscopic examination by enhancing the contrast of cellular and subcellular components. Therefore, accurate color representation of stained pathology sections is an important prerequisite for making reliable and consistent diagnoses. Unlike bright-field microscopy, acquiring color information from a sample with a coherent imaging system requires the acquisition of at least three holograms in the red, green, and blue portions of the visible spectrum to form the red-green-blue (RGB) color channels used to reconstruct a composite color image. However, this colorization method used in coherent imaging systems suffers from color inaccuracies and may be considered unacceptable for histopathology and diagnostic applications.
To improve color accuracy using a coherent imaging system, a computational hyperspectral imaging approach may be used. However, such systems typically require engineered illumination, e.g., tunable lasers, to effectively sample the visible band. Previous work has demonstrated that the number of sampling locations required in the visible band can be reduced while still generating accurate color images. For example, Peercy et al. demonstrated a wavelength selection method that uses Gaussian quadrature or Riemann summation to reconstruct a color image of a sample imaged with reflection-mode holography, suggesting that at least four wavelengths are required to generate an accurate color image of a natural object. See Peercy et al., "Wavelength selection for true-color holography," Appl. Opt. 33, 6811-6817 (1994).
Later, a Wiener estimation-based method was demonstrated to quantify the spectral reflectance distribution of an object at four fixed wavelengths, thereby improving the color accuracy of natural objects. See P. Xia et al., "Digital holography using spectral estimation technique," J. Display Technol. 10, 235-242 (2014). More recently, Zhang et al. proposed a minimum mean square error-based absorption spectrum estimation method specifically for creating accurate color images of pathology sections using in-line holography. See Zhang et al., "Accurate color imaging of pathology slides using holography and absorbance spectrum estimation of histochemical stains," Journal of Biophotonics e201800335 (2018). Because the color distribution within a stained histopathology slide is limited by the combination of colorimetric dyes used, this method successfully reduces the number of required wavelengths to three while still maintaining an accurate color representation. However, due to the distortion introduced by the twin-image artifact and the limited resolution of the on-chip holographic system at unit magnification, multi-height phase recovery and pixel super-resolution (PSR) techniques are implemented to achieve acceptable image quality. In the method of Zhang et al., four (or more) super-resolved holograms are collected at four different sample-to-sensor (z) distances. This requires a movable stage that not only allows lateral (x, y) motion for acquiring PSR images, but also must be moved in the vertical (z) direction to acquire the multi-height images.
Disclosure of Invention
In one embodiment, a deep learning-based accurate color holographic microscopy system and method is disclosed that uses a single super-resolved holographic image acquired under wavelength-multiplexed illumination (i.e., simultaneous illumination). Compared to conventional hyperspectral imaging methods used in coherent imaging systems, the deep neural network-based color microscopy system and method significantly simplify the data acquisition procedure, the associated data processing and storage steps, and the imaging hardware. First, the technique requires only a single super-resolved hologram obtained under simultaneous illumination. With this single hologram, the system and method achieve performance similar to that of the state-of-the-art absorption spectrum estimation method of Zhang et al., which uses super-resolved holograms collected with sequential or multiplexed illumination wavelengths at four sample-to-sensor distances, and thus provide more than a four-fold enhancement in data throughput. Furthermore, there is no need for a more complex movable stage that also moves in the z direction, which would add cost and design complexity and require additional time to obtain a color image.
The success of this method and system was demonstrated using two types of pathology slides: lung tissue sections stained with Masson's trichrome and prostate tissue sections stained with hematoxylin and eosin (H&E), although it should be understood that other stains and dyes may be used. Using the structural similarity index (SSIM) and the color distance, high-fidelity and color-accurate images were reconstructed and compared to gold standard images obtained using the hyperspectral imaging method. The overall time performance of the proposed framework was also compared with that of a conventional 20× bright-field scanning microscope, demonstrating comparable total image acquisition and processing times. Such a deep learning-based color imaging framework will facilitate the use of coherent microscopy for histopathology applications.
In one embodiment, a method of performing color image reconstruction of a single super-resolved holographic image of a sample includes obtaining a plurality of sub-pixel shifted lower resolution holographic images of the sample using an image sensor by simultaneous illumination at a plurality of color channels. Then, a holographic intensity image at super resolution for each of the plurality of color channels is digitally generated based on the plurality of sub-pixel shifted lower resolution holographic images. The super-resolved holographic intensity image of each of the plurality of color channels is backpropagated to the object plane with image processing software to generate an amplitude input image and a phase input image of the sample for each of the plurality of color channels. A trained deep neural network is provided, which is executed by image processing software using one or more processors of a computing device, and is configured to receive an amplitude input image and a phase input image of a sample for each of a plurality of color channels, and to output a color output image of the sample.
In another embodiment, a system for performing color image reconstruction of a super-resolution holographic image of a sample comprises: a computing device on which is executed image processing software that includes a trained deep neural network that is executed using one or more processors of the computing device. The trained deep neural network is trained using a plurality of training images or patches of super-resolution holograms from the sample image and corresponding ground truth or target color images or patches. The image processing software (i.e., the trained deep neural network) is configured to receive one or more super-resolved holographic images of the sample generated by the image processing software from a plurality of low resolution images of the sample obtained by simultaneously illuminating the sample at a plurality of illumination wavelengths, and to output a reconstructed color image of the sample.
In another embodiment, a system for performing color image reconstruction of one or more super-resolved holographic images of a sample comprises: a lensless microscope device includes a sample holder for holding a sample, a color image sensor, and one or more optical fibers or cables coupled to respective differently colored light sources configured to emit light at multiple wavelengths simultaneously. The microscope device comprises at least one of a movable stage and an array of light sources configured to obtain a sub-pixel shifted lower resolution holographic intensity image of the sample. The system also includes a computing device on which is executed image processing software comprising a trained deep neural network executed using one or more processors of the computing device, wherein the trained deep neural network is trained with a plurality of training images or image blocks from a super-resolved hologram of a sample image and corresponding ground truth or target color images or image blocks generated from hyperspectral imaging or bright field microscopy, the trained deep neural network configured to receive one or more super-resolved holographic images of the sample generated by the image processing software from a sub-pixel shifted lower resolution holographic intensity image of the sample obtained by simultaneously illuminating the sample, and output a reconstructed color image of the sample.
Drawings
FIG. 1A schematically illustrates a system for performing color image reconstruction of a single super-resolved holographic image of a sample, according to one embodiment;
FIG. 1B shows an alternative embodiment for illuminating a sample using an illumination array, which is an alternative to the movable stage;
FIG. 1C illustrates a process or method for performing color image reconstruction of a single super-resolved holographic image of a sample, according to one embodiment;
fig. 2 schematically illustrates a process of image (data) acquisition for generating input amplitude and phase images (using, for example, red, green, blue channels) input into a trained deep neural network, which then outputs reconstructed color images of the sample (showing color amplitude images of the pathological tissue sample);
FIGS. 3A-3C illustrate a comparison between conventional hyperspectral imaging (FIG. 3B) and the neural network-based method (FIG. 3C) for reconstructing an accurate color image of a sample, where N_H is the number of sample-to-sensor heights required to perform phase recovery, N_W is the number of illumination wavelengths, N_M is the number of measurements per illumination condition (multiplexed or sequential), and L is the number of lateral positions used to perform pixel super-resolution; FIG. 3A illustrates the number of raw holograms required for the conventional hyperspectral imaging and neural network-based methods; FIG. 3B schematically illustrates the high-fidelity color image reconstruction process of the hyperspectral imaging method; FIG. 3C schematically illustrates the high-fidelity color image reconstruction process of the neural network-based approach described herein, which uses only a single super-resolved holographic image of the sample;
FIG. 4 is a schematic diagram of the generator portion of the trained deep neural network; the six-channel input consists of the real and imaginary channels of three free-space back-propagated holograms at three illumination wavelengths (450 nm, 540 nm, and 590 nm according to one embodiment); each down-sampling block consists of two convolutional layers that together double the number of channels, while the up-sampling blocks are the opposite, consisting of two convolutional layers that together halve the number of channels;
FIG. 5 schematically illustrates the discriminator portion of the trained deep neural network, in which each down-sampling block consists of two convolutional layers;
FIGS. 6A and 6B show deep learning-based accurate color imaging of lung tissue slides stained with Masson's trichrome, under multiplexed illumination at 450 nm, 540 nm, and 590 nm, using the lensless holographic on-chip microscope; FIG. 6A is a large field of view (with two ROIs) of the network output image; FIG. 6B is a magnified comparison of the network input (amplitude and phase images), network output, and ground truth target at ROIs 1 and 2;
FIGS. 7A and 7B show deep learning-based accurate color imaging of prostate tissue slides stained with H&E, under multiplexed illumination at 450 nm, 540 nm, and 590 nm, using the lensless holographic on-chip microscope; FIG. 7A is a large field of view (with two ROIs) of the network output image; FIG. 7B is a magnified comparison of the network input (amplitude and phase images), network output, and ground truth target at ROIs 1 and 2;
FIG. 8 shows a digitally stitched image of the deep neural network output for lung tissue sections stained with H&E, corresponding to the field of view of the image sensor; at the periphery of the stitched image are various ROIs of the larger image, showing the output from the trained deep neural network and the ground truth target images of the same ROIs;
FIGS. 9A-9J show visual comparisons between network output images from the deep neural network-based approach and multi-height phase recovery using a spectral estimation approach similar to that of Zhang et al., for lung tissue samples stained with Masson's trichrome; FIGS. 9A-9H show the reconstruction results of the spectral estimation method using different numbers of heights and different illumination conditions; FIG. 9I shows an output image of the trained deep neural network (i.e., the network output); FIG. 9J illustrates a ground truth target image obtained using the hyperspectral imaging method;
FIGS. 10A-10J show visual comparisons between the deep neural network-based approach and multi-height phase recovery using the spectral estimation method of Zhang et al., for prostate tissue samples stained with H&E; FIGS. 10A-10H illustrate the reconstruction results of the spectral estimation method using different numbers of heights and different illumination conditions; FIG. 10I shows an output image of the trained deep neural network (i.e., the network output); FIG. 10J illustrates a ground truth target obtained using the hyperspectral imaging method.
Detailed Description
FIG. 1A schematically shows a system 2 for generating a reconstructed color output image 100 of a sample 4. In one embodiment, the color output image 100 may comprise an amplitude (true) color output image. Amplitude color images are commonly used in, for example, histopathological imaging applications. The output color image 100 is shown in FIG. 1A as being displayed on a display 10 in the form of a computer monitor, but it should be understood that the color output image 100 may be displayed on any suitable display 10 (e.g., a computer monitor, a tablet computer or PC, or a mobile computing device such as a smartphone). The system 2 includes a computing device 12 that includes one or more processors 14 and image processing software 16 that includes a trained deep neural network 18 (in one embodiment, the deep neural network is a generative adversarial network (GAN)-trained deep neural network). In the GAN-trained deep neural network 18, two models are used during training: a generative model (e.g., FIG. 4) that captures the data distribution and learns the color correction and the elimination of missing-phase-related artifacts, and a second, discriminator model (FIG. 5) that estimates the probability that a sample is from the training data rather than from the generative model.
As explained herein, the computing device 12 may comprise a personal computer, a remote server, a tablet computer, a mobile computing device, etc., although other computing devices (e.g., devices including one or more graphics processing units (GPUs) or application-specific integrated circuits (ASICs)) may also be used. The image processing software 16 may be implemented in any number of software packages and platforms (e.g., Python, TensorFlow, MATLAB, C++, etc.). The network training of the GAN-based deep neural network 18 may be performed on the same or a different computing device 12. For example, in one embodiment, a personal computer (PC) 12 may be used to train the deep neural network 18, although such training may take considerable time. To accelerate the training process, a computing device 12 using one or more dedicated GPUs may be used for training. Once the deep neural network 18 is trained, it may be executed using the same or a different computing device 12. For example, training may be performed on a remotely located computing device 12, and the trained deep neural network 18 (or its parameters) may then be transferred to another computing device 12 for execution. The transfer may be over a wide area network (WAN), such as the Internet, or a local area network (LAN).
The computing device 12 may optionally include one or more input devices 20, such as a keyboard and mouse as shown in FIG. 1A. The input devices 20 may be used to interact with the image processing software 16. For example, a user may be provided with a graphical user interface (GUI) through which he or she may interact with the color output image 100. The GUI may provide the user with a series of tools or toolbars that may be used to manipulate various aspects of the color output image 100 of the sample 4. This includes the ability to adjust color, contrast, saturation, magnification, image cropping and copying, and the like. The GUI may allow for quick selection and viewing of the color image 100 of the sample 4. The GUI may also identify a sample type, stain or dye type, sample ID, and the like.
In one embodiment, the system further comprises a microscope device 22, which is used to acquire images of the sample 4 that are used by the deep neural network 18 to reconstruct the color output image 100. The microscope device 22 comprises a plurality of light sources 24 for illuminating the sample 4 with coherent or partially coherent light. The plurality of light sources 24 may include LEDs, laser diodes, and the like. As explained herein, in one embodiment, at least one light source 24 emits red light, at least one light source 24 emits green light, and at least one light source 24 emits blue light. As explained herein, the light sources 24 are powered simultaneously, using appropriate drive circuitry or a controller, to illuminate the sample 4. The light sources 24 may be connected to a fiber optic cable, optical fiber, waveguide 26, or the like, as shown in FIG. 1A, for delivering light onto the sample 4. The sample 4 is supported on a sample holder 28, which may comprise an optically transparent substrate or the like (e.g., glass, polymer, plastic). The sample 4 is typically illuminated by the fiber optic cable, optical fiber, or waveguide 26, which is typically located a few centimeters from the sample 4.
The sample 4 that can be imaged using the microscope device 22 may include any number of sample types. The sample 4 may comprise a portion of mammalian or plant tissue that has been chemically stained or labeled (e.g., a chemically stained cytology slide). The sample may be fixed or unfixed. Exemplary stains include, for example, hematoxylin and eosin (H&E) stain, hematoxylin, eosin, Jones silver stain, Masson's trichrome stain, Periodic acid-Schiff (PAS) stain, Congo red stain, Alcian blue stain, blue iron stain, silver nitrate, trichrome stains, Ziehl-Neelsen stain, Grocott's methenamine silver (GMS) stain, Gram stain, acidic stains, basic stains, silver stains, Nissl stain, Weigert's stain, Golgi stain, Luxol fast blue stain, toluidine blue, Genta stain, Mallory's trichrome stain, Gomori's trichrome, van Gieson stain, Giemsa stain, Sudan black, Perls' Prussian blue stain, Best's carmine stain, acridine orange, immunofluorescent stains, immunohistochemical stains, Kinyoun's cold stain, Albert's stain, flagellar stain, endospore stain, nigrosin, or India ink. The sample 4 may also comprise a non-tissue sample, including small inorganic or organic objects. These may include particles, dust, pollen, molds, spores, fibers, hair, mites, allergens, and the like. Small organisms can also be imaged in color, including bacteria, yeast, protozoa, plankton, and multicellular organisms. Furthermore, in some embodiments, the sample 4 need not be stained or labeled at all, as the natural or native color of the sample 4 may be used for color imaging.
Still referring to FIG. 1A, the microscope device 22 obtains multiple low-resolution, sub-pixel-shifted images while the sample is simultaneously illuminated at different wavelengths (three were used in the experiments described herein). As shown in FIGS. 1A and 2, three different wavelengths (λ1, λ2, λ3) simultaneously illuminate the sample 4 (e.g., a pathology slide on which the pathology specimen is placed), and an image is captured with the color image sensor 30. The image sensor 30 may comprise a CMOS-based color image sensor 30. The color image sensor 30 is located on the opposite side of the sample 4 from the optical cable, fiber optics, or waveguide 26. The image sensor 30 is typically located near or very close to the sample holder 28, at a distance that is much smaller than the distance between the sample 4 and the fiber optic cable, fiber, or waveguide 26 (e.g., less than one centimeter, and possibly several millimeters or less).
A translation stage 32 is provided that imparts relative motion in the x and y plane (FIGS. 1A and 2) between the sample holder 28 and the image sensor 30 to obtain the sub-pixel-shifted images. The translation stage 32 may move the image sensor 30 or the sample holder 28 in the x and y directions. Of course, both the image sensor 30 and the sample holder 28 may be moved, but this may require a more complex translation stage 32. In yet another alternative, the fiber optic cable, optical fiber, or waveguide 26 may be moved in the x, y plane to create the sub-pixel shifts. The translation stage 32 is moved in small increments (e.g., typically less than 1 μm) to obtain an array of images 34 acquired at different x, y positions, with a single low-resolution hologram obtained at each position. For example, a 6 × 6 grid of positions may be used to acquire a total of thirty-six (36) low-resolution images 34. While any number of low-resolution images 34 may be obtained, this number is typically less than 40.
These low-resolution images 34 are then used to digitally create super-resolved holograms for each of the three color channels using demosaiced pixel super-resolution. A shift-and-add process or algorithm is used to synthesize the high-resolution image. The shift-and-add process for synthesizing pixel super-resolved holograms is described, for example, in Greenbaum, A. et al., "Wide-field computational imaging of pathology slides using lens-free on-chip microscopy," Science Translational Medicine 6, 267ra175 (2014), which is incorporated herein by reference. In this process, the shifts needed to accurately synthesize the high-resolution hologram can be estimated using an iterative gradient-based technique, without any feedback or measurement from the translation stage 32 or the setup.
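The following is a minimal Python sketch of the shift-and-add pixel super-resolution idea described above; the frame shifts, the upsampling factor, and the function name (shift_and_add_psr) are illustrative assumptions rather than the patent's exact implementation.

```python
# A minimal sketch of shift-and-add pixel super-resolution: each low-resolution hologram
# is placed onto an upsampled grid according to its estimated sub-pixel (x, y) shift, and
# the overlapping contributions are averaged.
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def shift_and_add_psr(low_res_frames, shifts, upsample=3):
    """low_res_frames: list of (H, W) holograms; shifts: list of (dy, dx) in
    low-resolution pixel units; returns an (upsample*H, upsample*W) hologram."""
    H, W = low_res_frames[0].shape
    accum = np.zeros((H * upsample, W * upsample), dtype=np.float64)
    count = 0
    for frame, (dy, dx) in zip(low_res_frames, shifts):
        # Nearest-neighbor upsampling keeps each measured intensity value intact.
        up = np.kron(frame, np.ones((upsample, upsample)))
        # Register the frame on the high-resolution grid using its sub-pixel shift.
        accum += subpixel_shift(up, (dy * upsample, dx * upsample), order=1)
        count += 1
    return accum / max(count, 1)

# Example: 36 frames on a 6x6 sub-pixel grid (about 1/3-pixel spacing), as in the text.
frames = [np.random.rand(64, 64) for _ in range(36)]
grid = [(i / 3.0, j / 3.0) for i in range(6) for j in range(6)]
sr_hologram = shift_and_add_psr(frames, grid)
```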
The three intensity hologram color channels (red, blue, and green) of this super-resolved hologram are then digitally back-propagated to the object plane to generate six inputs for the trained deep neural network 18 (FIG. 2). These include three (3) amplitude image channels (50R, 50B, 50G) and three (3) phase image channels (52R, 52B, 52G), which are input to the trained deep neural network 18 to generate the reconstructed color output image 100. The pixel super-resolution algorithm may be performed using the same image processing software 16 that executes the trained deep neural network 18, or different image processing software. The color output image 100 is a high-fidelity image comparable to images obtained using a plurality of super-resolved holograms collected at a plurality of sample-to-sensor distances (z) (i.e., the hyperspectral imaging method). Compared to the "gold standard" hyperspectral imaging method, the system 2 and method are less data-intensive and have improved overall time performance or throughput.
FIG. 1B shows an alternative embodiment of the system 2 that uses an array of light sources 40. In this alternative embodiment, an array of light sources 40 of different colors is arranged in an x, y plane above the sample 4 and the sample holder 28. In this alternative embodiment, the translation stage 32 is not required, because sub-pixel "movement" is achieved by illuminating the sample 4 with different sets of light sources from the array 40 located at different spatial positions above the sample 4. This has the same effect as moving the image sensor 30 or the sample holder 28. Different sets of red, blue, and green light sources in the array 40 are selectively illuminated to generate the sub-pixel displacements used to synthesize the pixel super-resolved hologram. The array 40 may be formed from a bundle of optical fibers, each coupled at one end to a light source (e.g., an LED), with the opposite ends held in a head or manifold that secures them in the desired array pattern.
FIG. 1C shows a process or method for performing color image reconstruction of a single super-resolved holographic image of a sample 4. Referring to operation 200, the microscope device 22 obtains a plurality of sub-pixel-shifted, lower-resolution holographic intensity images of the sample 4 using the color image sensor 30 by simultaneously illuminating the sample 4 at a plurality of color channels (e.g., red, blue, green). Next, in operation 210, a super-resolved holographic intensity image for each of the plurality of color channels is digitally generated based on the plurality of sub-pixel-shifted, lower-resolution holographic intensity images (three such super-resolved holograms: one for the red channel, one for the green channel, and one for the blue channel). Next, in operation 220, the super-resolved holographic intensity image for each of the plurality of color channels is back-propagated with the image processing software 16 to the object plane within the sample 4 to generate an amplitude input image and a phase input image of the sample for each of the plurality of color channels, resulting in a total of six (6) images. The trained deep neural network 18, executed by the image processing software 16 using one or more processors 14 of the computing device 12, receives (operation 230) the amplitude and phase input images of the sample 4 for each of the plurality of color channels (e.g., six input images) and outputs (operation 240) the color output image 100 of the sample 4. The color output image 100 is a high-fidelity image comparable to images obtained using a plurality of super-resolved holograms collected at a plurality of sample-to-sensor distances (i.e., the hyperspectral imaging method). The color output image 100 may comprise a color amplitude image of the sample 4.
Compared to the "gold standard" approach to hyperspectral imaging, system 2 and the method are less data intensive and improve overall temporal performance or throughput. System 2 need not obtain multiple (i.e., four) super-resolved holograms collected at four different heights or sample-to-image sensor distances. This means that the color output image 100 can be obtained faster (and with higher throughput). The use of a single super-resolved hologram also means that the imaging process is less data intensive; less storage and data processing resources are required.
Experimental results
Materials and methods
Overview of the hyperspectral and deep neural network-based reconstruction methods
The deep neural network 18 is trained to perform an image transformation from the complex field obtained from a single super-resolved hologram to a gold standard image obtained with the hyperspectral imaging method, which is generated from N_H × N_M super-resolved holograms (N_H is the number of sample-to-sensor distances, and N_M is the number of measurements under a particular illumination condition). To generate the gold standard image using the hyperspectral imaging method, N_H = 8 and N_M = 31 were used (wavelengths ranging from 400 nm to 700 nm with a step size of 10 nm). The processes for generating the gold standard images and the deep network inputs are described in detail below.
Hyperspectral imaging method
The gold standard hyperspectral imaging method reconstructs high-fidelity color images by first performing resolution enhancement using the PSR algorithm (discussed in detail below under Holographic pixel super-resolution using sequential illumination). Subsequently, multi-height phase recovery is used to eliminate the missing-phase-related artifacts (discussed in more detail below under Multi-height phase recovery). Finally, a color tristimulus projection is used to generate the high-fidelity color image (discussed in more detail below under Color tri-stimulus projection).
Holographic pixel super-resolution using sequential illumination
The hyperspectral imaging method uses the PSR algorithm for resolution enhancement, as described in Greenbaum, A. et al., "Wide-field computational imaging of pathology slides using lens-free on-chip microscopy," Science Translational Medicine 6, 267ra175 (2014), which is incorporated herein by reference. The algorithm digitally synthesizes a high-resolution image (pixel size of about 0.37 μm) from a set of low-resolution images 34 collected by the RGB image sensor 30 (IMX081, Sony, pixel size 1.12 μm, with R, G1, G2, and B color channels). To acquire these images 34, the image sensor 30 is programmed to raster through a 6 × 6 lateral grid using a 3D positioning stage 32 (MAX606, Thorlabs) with a sub-pixel pitch of about 0.37 μm (i.e., 1/3 of the pixel size). At each lateral position, a low-resolution holographic intensity image is recorded. The displacement/offset of the image sensor 30 is accurately estimated using the algorithm described in Greenbaum et al., "Field-portable wide-field microscopy of dense samples using multi-height pixel super-resolution based lensfree imaging," Lab Chip 12, 1242-1245 (2012). The high-resolution image is then synthesized using a shift-and-add based algorithm, as described in Greenbaum et al. (2014), above.
Because this hyperspectral imaging method uses sequential illumination, the PSR algorithm uses only one color channel (R, G1, or B) from the RGB image sensor at any given illumination wavelength. According to the transmission spectral response curves of the Bayer RGB image sensor, the blue channel (B) is used for illumination wavelengths in the range of 400-470 nm, the green channel (G1) is used in the range of 480-580 nm, and the red channel (R) is used for illumination wavelengths in the range of 590-700 nm.
Angular spectrum propagation
Free-space angular spectrum propagation is used in the hyperspectral imaging method to create the ground truth images. To digitally obtain the light field U(x, y; z) at a propagation distance z, a Fourier transform (FT) is first applied to a given U(x, y; 0) to obtain its angular spectrum A(fx, fy; 0). The angular spectrum A(fx, fy; z) of the light field U(x, y; z) can then be calculated using the following formula:

A(fx, fy; z) = A(fx, fy; 0) · H(fx, fy; z)    (1)

where the free-space transfer function H(fx, fy; z) is defined as:

H(fx, fy; z) = exp[ j·(2πn/λ)·z·√(1 − (λ·fx/n)² − (λ·fy/n)²) ] for (λ·fx/n)² + (λ·fy/n)² ≤ 1, and 0 otherwise    (2)

where λ is the illumination wavelength and n is the refractive index of the medium. Finally, an inverse Fourier transform is applied to A(fx, fy; z) to obtain U(x, y; z).
This angular spectrum propagation method is used first as a building block of an autofocusing algorithm that estimates the sample-to-sensor distance for each acquired hologram, as described in Zhang et al., "Edge sparsity criterion for robust holographic autofocusing," Optics Letters 42, 3824 (2017), and in Tamamitsu et al., "Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront," arXiv:1708.08055 [physics.optics]. After estimating the exact sample-to-sensor distance, the hyperspectral imaging method uses angular spectrum propagation as an additional component of the iterative multi-height phase recovery.
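A minimal Python/NumPy sketch of the angular spectrum propagation of Eqs. (1)-(2) and of an edge-sparsity-style autofocus search follows; the function names, the candidate-distance range, and the use of a Tamura-coefficient-of-gradient metric are illustrative assumptions, not the patent's exact implementation.

```python
# Angular spectrum propagation (Eqs. (1)-(2)) plus a brute-force autofocus search that
# scores each candidate sample-to-sensor distance with an edge-sparsity focus metric.
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, pixel_size, n_medium=1.0):
    """Propagate a complex field by distance z (all lengths in the same units)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX / n_medium) ** 2 - (wavelength * FY / n_medium) ** 2
    kz = 2.0 * np.pi * n_medium / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg >= 0.0, np.exp(1j * kz * z), 0.0)  # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)

def tamura_of_gradient(img):
    """Edge-sparsity focus metric: Tamura coefficient of the gradient magnitude."""
    gy, gx = np.gradient(np.abs(img))
    g = np.sqrt(gx ** 2 + gy ** 2)
    return np.sqrt(g.std() / (g.mean() + 1e-12))

def autofocus(hologram, z_candidates, wavelength, pixel_size):
    """Return the candidate z that maximizes the focus metric of the back-propagated field."""
    scores = [tamura_of_gradient(angular_spectrum_propagate(hologram, -z, wavelength, pixel_size))
              for z in z_candidates]
    return z_candidates[int(np.argmax(scores))]

# Example usage with illustrative numbers (0.37 um effective pixel, 540 nm illumination).
holo = np.sqrt(np.random.rand(256, 256)).astype(np.complex128)
best_z = autofocus(holo, z_candidates=np.linspace(200e-6, 600e-6, 41),
                   wavelength=540e-9, pixel_size=0.37e-6)
```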
Multi-height phase recovery
To eliminate the spatial image artifacts associated with the missing phase, the hyperspectral imaging method applies an iterative phase retrieval algorithm. Iterative phase recovery methods are used to recover this missing phase information; details can be found in Greenbaum et al., "Maskless imaging of dense samples using pixel super-resolution based multi-height lensfree on-chip microscopy," Opt. Express 20, 3129-3143 (2012).
Holograms at 8 sample-to-sensor distances were collected in the data acquisition step. The algorithm initially assigns zero phase to the intensity measurement of the object. Each iteration of the algorithm begins by propagating the complex field from the first height to the eighth height and then back to the first height. At each height, the amplitude is updated (replaced with the measured amplitude) while the phase is retained. The algorithm typically converges after 10-30 iterations. Finally, the complex field is back-propagated from any one measurement plane to the object plane to retrieve the amplitude and phase images.
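A minimal sketch of this iterative multi-height procedure follows; it reuses the angular_spectrum_propagate() helper sketched above, and the number of iterations and the height ordering are illustrative assumptions.

```python
# Iterative multi-height phase recovery: sweep through the measurement planes, enforcing
# each measured amplitude while keeping the current phase estimate, then back-propagate
# the converged field to the object plane.
import numpy as np

def multi_height_phase_recovery(intensities, heights, wavelength, pixel_size,
                                n_iterations=20):
    """intensities: list of measured hologram intensities ordered by height;
    heights: matching sample-to-sensor distances. Returns the complex object field."""
    amplitudes = [np.sqrt(I) for I in intensities]
    # Initialize with the first measurement and zero phase.
    field = amplitudes[0].astype(np.complex128)
    for _ in range(n_iterations):
        # Sweep up through the heights and back down again.
        order = list(range(len(heights))) + list(range(len(heights) - 2, -1, -1))
        current = 0
        for nxt in order[1:]:
            dz = heights[nxt] - heights[current]
            field = angular_spectrum_propagate(field, dz, wavelength, pixel_size)
            # Amplitude update: keep the phase, replace the amplitude with the measurement.
            field = amplitudes[nxt] * np.exp(1j * np.angle(field))
            current = nxt
    # Back-propagate from the first measurement plane to the object plane.
    return angular_spectrum_propagate(field, -heights[0], wavelength, pixel_size)
```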
Color tri-stimulus projection
Higher color accuracy is achieved by densely sampling the visible band at thirty-one (31) different wavelengths in 10 nm steps over the 400 nm to 700 nm range. This spectral information is projected onto color tristimulus values using the color matching functions of the International Commission on Illumination (CIE). The color tristimulus values in the XYZ color space can be calculated by the following formula:

X = K ∫ x̄(λ) T(λ) E(λ) dλ,  Y = K ∫ ȳ(λ) T(λ) E(λ) dλ,  Z = K ∫ z̄(λ) T(λ) E(λ) dλ,  with K = 1 / ∫ ȳ(λ) E(λ) dλ    (3)

where λ is the wavelength, x̄(λ), ȳ(λ), and z̄(λ) are the CIE color matching functions, T(λ) is the transmission spectrum of the sample, and E(λ) is the CIE standard illuminant D65. The XYZ values can then be linearly transformed to standard RGB values for display.
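The sketch below illustrates the tristimulus projection of Eq. (3) and a standard linear XYZ-to-sRGB conversion; the array shapes and the assumption that the CIE color matching functions and the D65 illuminant are available as tabulated arrays (sampled at the same 31 wavelengths as the transmission spectra) are illustrative and not taken from the patent.

```python
# Tristimulus projection of per-pixel transmission spectra onto XYZ, followed by a
# standard XYZ -> sRGB conversion for display.
import numpy as np

def spectrum_to_xyz(transmission, cmf_xyz, illuminant):
    """transmission: (31, H, W) per-pixel transmission spectra; cmf_xyz: (31, 3) CIE
    x̄, ȳ, z̄ values; illuminant: (31,) D65 spectral power. Returns an (H, W, 3) XYZ image."""
    weights = cmf_xyz * illuminant[:, None]            # x̄E, ȳE, z̄E at each wavelength
    k = 1.0 / np.sum(weights[:, 1])                    # normalize so a clear slide gives Y = 1
    return k * np.tensordot(transmission, weights, axes=([0], [0]))  # (H, W, 3)

def xyz_to_srgb(xyz):
    """Linear XYZ -> sRGB transform (D65 white point), followed by gamma encoding."""
    M = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb = np.clip(xyz @ M.T, 0.0, 1.0)
    return np.where(rgb <= 0.0031308, 12.92 * rgb, 1.055 * rgb ** (1 / 2.4) - 0.055)
```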
High fidelity holographic color reconstruction via deep neural networks
The input complex field for the deep learning-based color reconstruction framework is produced in the following manner: resolution enhancement and crosstalk correction are performed using the demosaiced pixel super-resolution algorithm (described in the holographic pixel super-resolution sections), followed by angular spectrum propagation (described in the Angular spectrum propagation section) to create an initial estimate of the object.
Holographic Demosaicing Pixel Super Resolution (DPSR) using multiplexed illumination
Similar to the hyperspectral imaging method, the trained deep neural network method also uses a shift-and-add based algorithm on 6 × 6 low-resolution holograms to improve the hologram resolution. The sample 4 is illuminated with three multiplexed wavelengths, i.e., three different wavelengths simultaneously. To correct for crosstalk errors between the different color channels of the RGB sensor, a DPSR algorithm is used, as described in Wu et al., "Demosaiced pixel super-resolution for multiplexed holographic color imaging," Scientific Reports 6 (2016), which is incorporated herein by reference. This crosstalk correction can be expressed by the following equation:
[U_R, U_G, U_B]ᵀ = W · [U_R-ori, U_G1-ori, U_G2-ori, U_B-ori]ᵀ    (4)

where U_R-ori, U_G1-ori, U_G2-ori, and U_B-ori represent the raw interference patterns collected by the four Bayer channels of the image sensor, W is the 3 × 4 crosstalk matrix obtained by experimental calibration of the given RGB image sensor 30, and U_R, U_G, and U_B are the demultiplexed (R, G, B) interference patterns. Here, the three illumination wavelengths were chosen as 450 nm, 540 nm, and 590 nm. With these wavelengths, better color accuracy was obtained for the specific tissue staining combinations used in this study (i.e., H&E-stained prostate and Masson's trichrome-stained lung). Of course, it should be understood that other stains or dye types may use different illumination wavelengths.
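A minimal sketch of this demultiplexing step follows; the crosstalk matrix values are placeholders for illustration and must be replaced by a matrix calibrated for the actual image sensor.

```python
# Crosstalk correction of Eq. (4): the four Bayer channels measured under multiplexed
# R/G/B illumination are demultiplexed into three single-wavelength holograms using a
# calibrated 3x4 crosstalk matrix.
import numpy as np

def demultiplex_bayer_channels(u_r_ori, u_g1_ori, u_g2_ori, u_b_ori, crosstalk_matrix):
    """Each input is an (H, W) raw channel; crosstalk_matrix is (3, 4).
    Returns the demultiplexed (U_R, U_G, U_B) holograms, each (H, W)."""
    raw = np.stack([u_r_ori, u_g1_ori, u_g2_ori, u_b_ori], axis=0)    # (4, H, W)
    demux = np.tensordot(crosstalk_matrix, raw, axes=([1], [0]))       # (3, H, W)
    return demux[0], demux[1], demux[2]

# Placeholder calibration matrix (for illustration only).
W_calib = np.array([[1.00, 0.05, 0.05, 0.02],
                    [0.08, 0.50, 0.50, 0.06],
                    [0.03, 0.04, 0.04, 1.00]])
```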
Deep neural network input formation
Following the demosaiced pixel super-resolution algorithm, the three intensity holograms are numerically back-propagated to the object plane, as described in the Angular spectrum propagation section herein. After this back-propagation step, each of the three color hologram channels produces a complex wave, represented as real and imaginary data channels (50R, 50B, 50G, 52R, 52B, 52G). This results in a six-channel tensor that is used as the input to the deep network, as shown in FIG. 2. Unlike the ground truth, no phase recovery is performed in this case, since only a single measurement is available.
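A minimal sketch of assembling this six-channel input follows; it reuses the angular_spectrum_propagate() helper from the earlier sketch, and the wavelength values, object distance, and zero-phase initialization of each hologram are illustrative assumptions.

```python
# Build the six-channel network input: back-propagate each demultiplexed super-resolved
# hologram to the object plane and stack the real and imaginary parts of the result.
import numpy as np

def build_network_input(holograms, wavelengths, z_obj, pixel_size):
    """holograms: dict of (H, W) intensity holograms keyed by color; returns (H, W, 6)."""
    channels = []
    for color, wavelength in wavelengths.items():
        field = np.sqrt(holograms[color]).astype(np.complex128)   # amplitude, zero phase
        field = angular_spectrum_propagate(field, -z_obj, wavelength, pixel_size)
        channels.extend([field.real, field.imag])
    return np.stack(channels, axis=-1)

# Illustrative usage with the three multiplexed wavelengths named in the text.
wavelengths = {"red": 590e-9, "green": 540e-9, "blue": 450e-9}
holograms = {c: np.random.rand(256, 256) for c in wavelengths}
six_channel_input = build_network_input(holograms, wavelengths,
                                         z_obj=300e-6, pixel_size=0.37e-6)
```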
Deep neural network architecture
The deep neural network 18 is a generative adversarial network (GAN) implemented to learn color correction and to eliminate missing-phase-related artifacts. This GAN framework, which has recently been applied in super-resolution microscopy and histopathology, consists of a discriminator network (D) and a generator network (G) (FIGS. 4 and 5). The D network (FIG. 5) is used to distinguish the three-channel RGB ground truth images (z) obtained from hyperspectral imaging from the output images of G. G (FIG. 4), in turn, is used to learn the transformation from a six-channel holographic input image (x) (i.e., three color channels with real and imaginary components) to the corresponding RGB ground truth image.
The discriminator and generator losses are defined as:

l_discriminator = D(G(x))² + (1 − D(z))²    (5)

l_generator = L2{z, G(x)} + λ × TV{G(x)} + α × (1 − D(G(x)))²    (6)

where the L2 loss is defined as:

L2{z, G(x)} = (1 / (N_channels × M × N)) × Σ_n Σ_i Σ_j (z_{i,j,n} − G(x)_{i,j,n})²    (7)

where N_channels is the number of channels in the image (e.g., N_channels = 3 for an RGB image), M and N are the numbers of pixels on each side of the image, i and j are the pixel indices, and n denotes the channel index. TV denotes the total variation regularization term applied to the generator output, defined as:

TV{G(x)} = Σ_i Σ_j ( |G(x)_{i+1,j} − G(x)_{i,j}| + |G(x)_{i,j+1} − G(x)_{i,j}| )    (8)

The regularization parameters (λ, α) are set to 0.0025 and 0.002, respectively, so that the total variation loss (λ × TV{G(x)}) is approximately 2% of the L2 loss and the discriminator loss term (α × (1 − D(G(x)))²) is approximately 15% of l_generator. Ideally, at the end of the training phase, D(z) and D(G(x)) both converge to 0.5.

The generator network architecture (FIG. 4) is an adaptation of the U-net. The discriminator network (FIG. 5) uses a simple classifier consisting of a series of convolutional layers that gradually reduce the spatial dimensions while increasing the number of channels, followed by two fully connected layers that output the classification. The U-net is well suited for removing missing-phase artifacts and performing color correction on the reconstructed images. The convolution filter size is set to 3 × 3, and each convolutional layer except the last is followed by a leaky ReLU activation function, defined as:

LeakyReLU(x) = x for x > 0, and a·x otherwise, where a is a small positive slope    (9)
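A minimal TensorFlow sketch of these loss terms follows; it assumes generator and discriminator models defined elsewhere, and the exact reduction of each term (sum vs. mean) is an illustrative choice rather than the patent's specification.

```python
# GAN losses of Eqs. (5)-(8): least-squares adversarial terms, a pixel-wise L2 loss, and
# an anisotropic total-variation regularizer on the generator output.
import tensorflow as tf

lambda_tv, alpha = 0.0025, 0.002   # regularization weights stated in the text

def discriminator_loss(d_fake, d_real):
    """Eq. (5)."""
    return tf.reduce_mean(tf.square(d_fake)) + tf.reduce_mean(tf.square(1.0 - d_real))

def total_variation(img):
    """Eq. (8): anisotropic total variation of a batch of images (B, H, W, C)."""
    dh = tf.abs(img[:, 1:, :, :] - img[:, :-1, :, :])
    dw = tf.abs(img[:, :, 1:, :] - img[:, :, :-1, :])
    return tf.reduce_sum(dh) + tf.reduce_sum(dw)

def generator_loss(target, generated, d_fake):
    """Eq. (6): pixel-wise L2 loss + TV regularization + adversarial term."""
    l2 = tf.reduce_mean(tf.square(target - generated))     # Eq. (7), averaged over pixels
    tv = total_variation(generated)
    adv = tf.reduce_mean(tf.square(1.0 - d_fake))
    return l2 + lambda_tv * tv + alpha * adv
```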
deep neural network training process
During the network training process, the images generated by the hyperspectral method are used as the network labels, and the demosaiced super-resolved holograms back-propagated to the sample plane are used as the network inputs. Both the generator and discriminator networks are trained using a patch size of 128 × 128 pixels. The weights in the convolutional and fully connected layers are initialized using Xavier initialization, while the biases are initialized to zero. All parameters are updated using the adaptive moment estimation (Adam) optimizer, with a learning rate of 1 × 10⁻⁴ for the generator network and 5 × 10⁻⁵ for the discriminator network. Training, validation, and testing of the network were carried out on a computer equipped with a quad-core 3.60 GHz CPU, 16 GB of RAM, and an Nvidia GeForce GTX 1080Ti GPU.
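A minimal sketch of this training configuration follows (Glorot/Xavier initialization, Adam optimizers at the stated learning rates, 128 × 128 patches); build_generator()/build_discriminator() and the loss functions from the previous sketch are assumed, and the single-step update shown is illustrative rather than the full training schedule.

```python
# Training setup sketch: Adam optimizers at the learning rates stated above and one
# adversarial update step on a batch of 128x128 input/target patches.
import tensorflow as tf

PATCH_SIZE = 128
initializer = tf.keras.initializers.GlorotUniform()   # Xavier init, passed to Conv2D layers

generator_optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)

@tf.function
def train_step(generator, discriminator, x_patch, z_patch):
    """One generator/discriminator update on a batch of input (x) and label (z) patches."""
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        g_out = generator(x_patch, training=True)
        d_fake = discriminator(g_out, training=True)
        d_real = discriminator(z_patch, training=True)
        g_loss = generator_loss(z_patch, g_out, d_fake)      # Eq. (6)
        d_loss = discriminator_loss(d_fake, d_real)          # Eq. (5)
    generator_optimizer.apply_gradients(
        zip(g_tape.gradient(g_loss, generator.trainable_variables),
            generator.trainable_variables))
    discriminator_optimizer.apply_gradients(
        zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
            discriminator.trainable_variables))
    return g_loss, d_loss
```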
Bright field imaging
To compare imaging throughput, bright-field microscopy images were obtained using an Olympus IX83 microscope equipped with a motorized stage and a super apochromat objective lens (Olympus UPLSAPO 20×/0.75 numerical aperture (NA), working distance (WD) 0.65). The microscope was controlled by MetaMorph advanced digital imaging software (version 7.10.1.161), with the autofocusing algorithm set to search within 5 μm in the z direction at a precision of 1 μm. Dual pixel binning was enabled, and a 10% overlap between scan patches was used.
Quantitative metrics
Two quantitative metrics were selected to evaluate the performance of the network: the SSIM is used to compare the similarity of the tissue structural information between the output image and the target image, and ΔE*94 is used to compare the color distance between the two images. SSIM values range from 0 to 1, where a value of unity indicates that the two images are identical, i.e.,

SSIM(U, V) = ((2·μ_U·μ_V + C1)(2·σ_{U,V} + C2)) / ((μ_U² + μ_V² + C1)(σ_U² + σ_V² + C2))    (10)

where U and V represent the vectorized test and reference images, respectively, μ_U and μ_V are the mean values of U and V, σ_U² and σ_V² are the variances of U and V, σ_{U,V} is the covariance of U and V, and the constants C1 and C2 stabilize the division when the denominator is near zero.
The second metric, ΔE*94, outputs a number between 0 and 100. A value of zero indicates that the compared pixels share exactly the same color, while a value of 100 indicates that the two images have opposite colors (mixing two opposite colors would cancel each other out and produce a gray color). The method calculates the color distance on a pixel-by-pixel basis, and the final result is obtained by averaging the ΔE*94 values over all pixels of the output image.
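A minimal sketch of computing the two evaluation metrics with scikit-image follows; this is an illustrative stand-in for the exact SSIM / ΔE*94 implementations used in the study, and assumes RGB images scaled to the [0, 1] range.

```python
# Evaluate a network output against a gold standard image using SSIM and mean Delta-E*94.
import numpy as np
from skimage.metrics import structural_similarity
from skimage.color import rgb2lab, deltaE_ciede94

def evaluate(output_rgb, target_rgb):
    """output_rgb, target_rgb: (H, W, 3) float images in [0, 1]. Returns (SSIM, mean dE94)."""
    ssim = structural_similarity(output_rgb, target_rgb, channel_axis=-1, data_range=1.0)
    de94 = deltaE_ciede94(rgb2lab(target_rgb), rgb2lab(output_rgb))  # per-pixel color distance
    return ssim, float(np.mean(de94))
```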
Sample preparation
De-identified H&E-stained human prostate tissue sections and Masson's trichrome-stained human lung tissue sections were obtained from the UCLA Translational Pathology Core Laboratory. Existing and anonymized samples were used. Subject-related information was not linked and could not be retrieved.
Results and discussion
Qualitative assessment
Two different tissue staining combinations were used to evaluate the performance of the trained deep neural network 18: prostate tissue sections stained with H&E and lung tissue sections stained with Masson's trichrome. For both types of samples, the deep neural network 18 was trained on three tissue sections from different patients and blindly tested on another tissue section from a fourth patient. The field of view (FOV) of each tissue section used for training and testing was about 20 mm².
The results for the lung and prostate samples are summarized in FIGS. 6A-6B and FIGS. 7A-7B, respectively. The color outputs of the trained deep neural network 18 demonstrate its ability to reconstruct high-fidelity and color-accurate images from a single non-phase-retrieved, wavelength-multiplexed hologram. Using the trained deep neural network 18, the system 2 is able to reconstruct the image of the sample 4 across the entire FOV of the sensor (i.e., about 20 mm²), as shown in FIG. 8.
To further demonstrate the qualitative performance of the deep neural network 18, FIGS. 9A-9J and FIGS. 10A-10J compare the reconstruction results of the deep neural network 18 (FIGS. 9I and 10I) against images created by the absorption spectrum estimation method as a function of the required number of measurements. For this comparison, the spectral estimation method was used together with the multi-height phase recovery method, and color images were reconstructed at the same wavelengths (i.e., 450 nm, 540 nm, and 590 nm) under sequential (N_H = 8, N_M = 3) and multiplexed (N_H = 8, N_M = 1) illumination, as well as with reduced numbers of measurements (FIGS. 9A-9H and 10A-10H). Qualitatively, the results of the deep neural network 18 are comparable to the multi-height results obtained with four or more sample-to-sensor distances, for both the sequential and multiplexed illumination cases. This is also confirmed by the quantitative analysis described below.
Quantitative performance assessment
The quantitative performance of the network was evaluated by computing the SSIM and the color difference (ΔE*94) between the output of the deep neural network 18 and the gold standard image produced by the hyperspectral imaging method. As listed in Table 1, the performance of the spectral estimation method decreases (i.e., SSIM decreases and ΔE*94 increases) as the number of holograms at different sample-to-sensor distances decreases, or when the illumination is changed to multiplexing. This quantitative comparison shows that the performance of the deep neural network 18 using a single super-resolved hologram is comparable to that obtained using the state-of-the-art algorithms with more than four times the number of raw holographic measurements.
TABLE 1
(Table 1 is provided as an image in the original patent publication.)
SSIM and ΔE*94 performance of the deep neural network 18 compared with various other methods, using two, four, six, and eight sample-to-sensor heights and sequential/multiplexed illumination at the three wavelengths, for the two tissue samples (the network-based method and the other methods with comparable performance are highlighted in bold).
Throughput evaluation
Table 2 lists the reconstruction times for the entire FOV (~20 mm²) measured using the different methods. For the deep neural network 18, the total reconstruction time includes acquiring 36 holograms (6 × 6 lateral positions under multiplexed illumination), performing DPSR, angular spectrum propagation (an illustrative sketch of this propagation step is provided at the end of this section), network inference, and image stitching. For the hyperspectral imaging method, the total reconstruction time comprises acquiring a set of 8928 holograms (at 6 × 6 lateral positions, 8 sample-to-sensor distances, and 31 wavelengths), PSR, multi-height phase retrieval, color tristimulus projection, and image stitching. For a conventional bright-field microscope (equipped with an automatic scanning stage), the total time includes scanning the bright-field images using a 20×/0.75 NA objective and performing autofocusing and image stitching at each scan position. In addition, the time for the multi-height phase recovery method using four sample-to-sensor distances, which has the closest performance to the deep learning-based neural network method, is also shown. All coherent imaging-related algorithms were accelerated using an Nvidia GTX 1080Ti GPU and CUDA C++ programming.
TABLE 2
(Table 2 is provided as an image in the original patent publication.)
Table 2: the deep neural network method reconstructs a temporal performance estimate of the accurate color image, in contrast to conventional hyperspectral imaging methods and standard bright field microscopy sample scans (where N/a stands for "not applicable").
The deep neural network-based approach requires about 7 minutes to acquire and reconstruct a 20 mm² FOV, which is approximately equal to the time required to image the same area with a standard, general-purpose bright-field scanning microscope using a 20× objective. Typically, the method is capable of reconstructing a FOV of at least 10 mm² within 10 minutes. Of course, the available processing power and the type of sample 4 may affect the reconstruction time, but the reconstruction is usually completed within a few minutes. Note that this is significantly shorter than the approximately 60 minutes required when using the spectral estimation method (with four heights and simultaneous illumination). The system 2 and the deep learning-based approach also improve data efficiency: the raw super-resolved hologram data size is reduced from 4.36 GB to 1.09 GB, which is much closer to the 577.13 MB total data size of the bright-field scanning microscope images.
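For context, the raw-measurement counts underlying this data efficiency can be checked with simple arithmetic; the short sketch below is purely illustrative and only restates the acquisition settings already described in this section.

# Illustrative arithmetic only: raw hologram counts under the acquisition
# settings described above (6 x 6 lateral scan; 8 heights and 31 wavelengths
# for hyperspectral imaging; a single multiplexed exposure per position for
# the network-based method).
lateral_positions = 6 * 6
network_holograms = lateral_positions * 1             # 36 multiplexed holograms
hyperspectral_holograms = lateral_positions * 8 * 31  # 8928 holograms
print(network_holograms, hyperspectral_holograms)     # 36 vs 8928 raw frames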
The system 2 is used to generate a reconstructed color output image 100 of a sample 4 comprising a histologically stained pathology slide. The system 2 and methods described herein significantly simplify the data acquisition process, reduce the data storage requirements, shorten the processing time, and improve the color accuracy of the holographically reconstructed images. Notably, other techniques (e.g., the slide-scanning microscopes used in pathology) can scan tissue slides at faster speeds, although they are too expensive for use in resource-limited settings. Accordingly, alternative lensless holographic imaging hardware (e.g., using the illumination array 40 to perform pixel super-resolution) may further improve the overall reconstruction time.
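As referenced in the throughput discussion above, the angular spectrum propagation step back-propagates each super-resolved hologram to the object plane before network inference. A minimal sketch of this free-space propagation is given below; the pixel pitch, sample-to-sensor distance, and array size are placeholder values chosen for illustration and are not parameters specified in this disclosure.

# Minimal sketch, with placeholder parameters: angular spectrum back-propagation
# of a super-resolved hologram intensity to the object plane for one wavelength.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
    """Propagate a complex field sampled at pixel_pitch [m] by a distance z [m]."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)   # spatial frequencies [1/m]
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)      # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Placeholder example: back-propagate a 450 nm hologram by -300 um (a negative z
# propagates from the sensor plane back toward the object plane); the amplitude
# and phase of the result are the two input channels fed to the trained network.
hologram_intensity = np.random.rand(512, 512)          # stand-in for real data
sensor_field = np.sqrt(hologram_intensity).astype(np.complex128)
object_field = angular_spectrum_propagate(sensor_field, 450e-9, 0.37e-6, -300e-6)
amplitude_input, phase_input = np.abs(object_field), np.angle(object_field)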
While embodiments of the present invention have been shown and described, various modifications may be made without departing from the scope of the invention. For example, although the invention is primarily described using a lensless microscope arrangement, the methods described herein may also be used with lens-based microscope arrangements. For example, the input images to be reconstructed may comprise images obtained from a coherent lens-based computational microscope (e.g., a Fourier ptychographic microscope). Further, while hyperspectral imaging is used to generate the gold-standard or target color images for network training, other imaging modalities may be used for training. These include not only computational microscopy (where the corresponding ground truth or target color images are numerically simulated or computed) but also bright-field microscopy images. Accordingly, the invention is not to be restricted except in light of the attached claims and their equivalents.

Claims (24)

1. A method for performing color image reconstruction of a single super-resolved holographic image of a sample, comprising:
obtaining a plurality of sub-pixel shifted lower resolution holographic images of the sample using an image sensor by simultaneously illuminating the sample at a plurality of color channels;
digitally generating a super-resolved holographic intensity image for each of the plurality of color channels based on the plurality of sub-pixel shifted lower resolution holographic images;
propagating the super-resolved holographic intensity image of each of the plurality of color channels back to an object plane with image processing software to generate an amplitude input image and a phase input image of the sample for each of the plurality of color channels; and
providing a trained deep neural network, executed by the image processing software using one or more processors of a computing device, that is configured to receive the amplitude input image and the phase input image of the sample for each of the plurality of color channels and to output a color output image of the sample.
2. The method of claim 1, wherein the plurality of color channels comprises three color channels.
3. The method of claim 2, wherein the three color channels include a red channel, a green channel, and a blue channel.
4. The method of claim 1, wherein simultaneously illuminating the sample comprises simultaneously illuminating the sample with three different wavelengths of illumination.
5. The method of claim 4, wherein the three different wavelengths include 450 nm, 540 nm, and 590 nm.
6. The method of claim 1, wherein the plurality of sub-pixel shifted lower resolution holographic intensity images are obtained by moving the image sensor, which is coupled to a movable stage, in an x, y plane.
7. The method of claim 1, wherein the plurality of sub-pixel shifted lower resolution holographic intensity images are obtained by moving a sample holder holding the sample in an x, y plane.
8. The method of claim 1, wherein the plurality of sub-pixel shifted lower resolution holographic intensity images are obtained by selective illumination of light sources from an array of light sources.
9. The method of claim 1, wherein the plurality of sub-pixel shifted lower resolution holographic intensity images are obtained by moving an illumination source in a plane or by using illumination from a plurality of illumination sources.
10. The method of any one of claims 1 to 7, wherein the sample comprises stained or labeled tissue.
11. The method of any one of claims 1 to 7, wherein the sample comprises a stained cytology slide.
12. The method of claim 1, further comprising digitally stitching the plurality of color output images into a larger output image using image processing software.
13. The method of claim 12, wherein the larger output image comprises a field of view of at least 10 mm², and wherein the larger output image is generated in less than 10 minutes.
14. The method of claim 12, wherein the trained deep neural network outputs the color output image of the sample within minutes of receiving the amplitude input image and the phase input image of the sample.
15. The method of claim 1, wherein the trained deep neural network is trained using a generative adversarial network (GAN) model.
16. A system for performing color image reconstruction of a super-resolved holographic image of a sample, comprising: a computing device on which is executed image processing software comprising a trained deep neural network executed using one or more processors of the computing device, wherein the trained deep neural network is trained with a plurality of training images or patches of super-resolved holograms from images of the sample and corresponding ground truth or target color images or patches, the trained deep neural network being configured to receive one or more super-resolved holographic images of the sample generated by the image processing software from a plurality of low resolution images of the sample obtained by simultaneously illuminating the sample at a plurality of illumination wavelengths, and to output a reconstructed color image of the sample.
17. The system of claim 16, wherein the respective ground truth image or target color image is numerically computed.
18. The system of claim 16, wherein the respective ground truth or target color images are obtained from bright field color images of the same sample.
19. The system of claim 16, further comprising a microscope device to obtain a plurality of low resolution images of the sample, the microscope device comprising a sample holder to hold the sample, a color image sensor, and one or more light sources to emit light at the plurality of wavelengths.
20. The system of claim 17, wherein the microscope device comprises a movable stage configured to move one or both of the color image sensor and/or sample holder in an x, y plane to obtain the plurality of low resolution images of the sample.
21. The system of claim 17, wherein the plurality of light sources comprises an array of light sources.
22. The system of claim 16, wherein the trained deep neural network is trained using a generative adversarial network (GAN) model.
23. A system for performing color image reconstruction of one or more super-resolved holographic images of a sample, comprising:
a lensless microscope device comprising a sample holder for holding the sample, a color image sensor, and one or more optical fibers or cables coupled to respective differently colored light sources configured to emit light at multiple wavelengths simultaneously;
at least one of a movable stage and an array of light sources configured to obtain a sub-pixel shifted lower resolution holographic intensity image of the sample; and
a computing device on which is executed image processing software comprising a trained deep neural network executed using one or more processors of the computing device, wherein the trained deep neural network is trained with a plurality of training images or patches of super-resolved holograms from an image of the sample and corresponding ground truth or target color images or patches generated from hyperspectral imaging or bright field microscopy, the trained deep neural network being configured to receive one or more super-resolved holographic images of the sample generated by the image processing software from the sub-pixel shifted lower resolution holographic intensity images of the sample obtained by simultaneously illuminating the sample, and output a reconstructed color image of the sample.
24. The system of claim 23, wherein the trained deep neural network is trained using a generative adversarial network (GAN) model.
CN202080030303.1A 2019-04-22 2020-04-21 System and method for color holographic microscope based on deep learning Pending CN113711133A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201962837066P 2019-04-22 2019-04-22
US62/837,066 2019-04-22
PCT/US2020/029157 WO2020219468A1 (en) 2019-04-22 2020-04-21 System and method for deep learning-based color holographic microscopy

Publications (1)

Publication Number Publication Date
CN113711133A true CN113711133A (en) 2021-11-26

Family

ID=72941351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080030303.1A Pending CN113711133A (en) 2019-04-22 2020-04-21 System and method for color holographic microscope based on deep learning

Country Status (7)

Country Link
US (1) US20220206434A1 (en)
EP (1) EP3959568A4 (en)
JP (1) JP2022529366A (en)
KR (1) KR20210155397A (en)
CN (1) CN113711133A (en)
AU (1) AU2020262090A1 (en)
WO (1) WO2020219468A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115061274A (en) * 2022-07-01 2022-09-16 苏州大学 Imaging method and device of super-resolution endoscope based on sparse illumination

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113383225A (en) 2018-12-26 2021-09-10 加利福尼亚大学董事会 System and method for propagating two-dimensional fluorescence waves onto a surface using deep learning
EP3839479B1 (en) * 2019-12-20 2024-04-03 IMEC vzw A device for detecting particles in air
US11915360B2 (en) 2020-10-20 2024-02-27 The Regents Of The University Of California Volumetric microscopy methods and systems using recurrent neural networks
TWI831078B (en) * 2020-12-11 2024-02-01 國立中央大學 Optical system and optical image processing method by applying image restoration
KR102410380B1 (en) * 2021-04-08 2022-06-16 재단법인대구경북과학기술원 Apparatus and method for reconstructing noiseless phase image from garbor hologram based on deep-learning
WO2023080601A1 (en) * 2021-11-05 2023-05-11 고려대학교 세종산학협력단 Disease diagnosis method and device using machine learning-based lens-free shadow imaging technology
CN114326075B (en) * 2021-12-10 2023-12-19 肯维捷斯(武汉)科技有限公司 Digital microscopic imaging system and microscopic detection method for biological sample

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7253946B2 (en) * 2002-09-16 2007-08-07 Rensselaer Polytechnic Institute Microscope with extended field of vision
US9605941B2 (en) * 2011-01-06 2017-03-28 The Regents Of The University Of California Lens-free tomographic imaging devices and methods
WO2013070287A1 (en) * 2011-11-07 2013-05-16 The Regents Of The University Of California Maskless imaging of dense samples using multi-height lensfree microscope
WO2013143083A1 (en) * 2012-03-28 2013-10-03 Liu Travis Low-cost high-precision holographic 3d television technology implemented using chrominance clamping method
EP3175302B1 (en) * 2014-08-01 2021-12-29 The Regents of the University of California Device and method for iterative phase recovery based on pixel super-resolved on-chip holography
US20170168285A1 (en) * 2015-12-14 2017-06-15 The Regents Of The University Of California Systems and methods for image reconstruction
US10043261B2 (en) * 2016-01-11 2018-08-07 Kla-Tencor Corp. Generating simulated output for a specimen
US10795315B2 (en) * 2016-05-11 2020-10-06 The Regents Of The University Of California Method and system for pixel super-resolution of multiplexed holographic color images
US11079719B2 (en) * 2017-07-05 2021-08-03 Accelerate Diagnostics, Inc. Lens-free holographic optical system for high sensitivity label-free microbial growth detection and quantification for screening, identification, and susceptibility testing
WO2019034328A1 (en) * 2017-08-15 2019-02-21 Siemens Healthcare Gmbh Identifying the quality of the cell images acquired with digital holographic microscopy using convolutional neural networks


Also Published As

Publication number Publication date
KR20210155397A (en) 2021-12-22
US20220206434A1 (en) 2022-06-30
WO2020219468A1 (en) 2020-10-29
AU2020262090A1 (en) 2021-11-11
EP3959568A1 (en) 2022-03-02
EP3959568A4 (en) 2022-06-22
JP2022529366A (en) 2022-06-21

Similar Documents

Publication Publication Date Title
US20220206434A1 (en) System and method for deep learning-based color holographic microscopy
US11422503B2 (en) Device and method for iterative phase recovery based on pixel super-resolved on-chip holography
Liu et al. Deep learning‐based color holographic microscopy
US11514325B2 (en) Method and system for phase recovery and holographic image reconstruction using a neural network
US20200393793A1 (en) Method and system for pixel super-resolution of multiplexed holographic color images
US20210264214A1 (en) Method and system for digital staining of label-free phase images using deep learning
CN111433817A (en) Generating a virtual stain image of an unstained sample
JP6112872B2 (en) Imaging system, image processing method, and imaging apparatus
US11022731B2 (en) Optical phase retrieval systems using color-multiplexed illumination
Mariën et al. Color lens-free imaging using multi-wavelength illumination based phase retrieval
Bearman et al. Biological imaging spectroscopy
WO2021198252A1 (en) Virtual staining logic
Bian et al. Deep learning colorful ptychographic iterative engine lens-less diffraction microscopy
Pan et al. Image restoration and color fusion of digital microscopes
Guo et al. Revealing architectural order with quantitative label-free imaging and deep neural networks
JP5752985B2 (en) Image processing apparatus, image processing method, image processing program, and virtual microscope system
Ma et al. Light-field tomographic fluorescence lifetime imaging microscopy
WO2022173848A1 (en) Methods of holographic image reconstruction with phase recovery and autofocusing using recurrent neural networks
Liu et al. Color holographic microscopy using a deep neural network
Zhang Lens-Free Computational Microscopy for Disease Diagnosis
Wang Deep Learning-enabled Cross-modality Image Transformation and Early Bacterial Colony Detection
JP5687541B2 (en) Image processing apparatus, image processing method, image processing program, and virtual microscope system
Cui Multi-dimensional Optical Imaging
Tang Label-free Investigation of Cells with Phase Imaging, Diffraction Tomography, and Raman Spectroscopy
Calisesi Compressive sensing in Light Sheet Fluorescence Microscopy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination