WO2014108708A1 - An image restoration method - Google Patents
- Publication number
- WO2014108708A1 (PCT/GB2014/050091)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
- G06T2207/10064—Fluorescence image
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro
Definitions
- the present invention relates to a method and apparatus for increasing the spatial resolution of an image, for example a method and apparatus for restoring a microscope image with a spatial resolution beyond the diffraction limit.
- SR super-resolution
- SMLM single molecule localization microscopy
- the third approach is computational, in which image processing techniques are employed to reconstruct a SR image from a set of low-resolution (LR) observations (Elad, M. & Feuer, A. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images. IEEE Trans. Image Process. 6, 1646-1658 (1997)).
- LR low-resolution
- HR high-resolution
- SR medical imaging such as X-ray mammography, functional magnetic resonance imaging and positron emission tomography.
- Medical imaging usually uses highly controlled illumination doses to avoid damage to the subject, leading to low signal-to-noise ratio (SNR) images.
- SNR signal-to-noise ratio
- Inclusion of a prior model for noise removal therefore becomes critically important for the performance of SR restoration.
- the trade-off between noise removal and feature preservation (and restoration) can limit the image resolution that can be restored.
- the prior model is usually constructed based on the edge-preservation concept in medical and other applications (Farsiu, S., Robinson, D., Elad, M. & Milanfar, P. Advances and challenges in super-resolution. Int. J. Imaging Syst. Technol. 14, 47-57 (2004)); features are restored as long as all the edges are preserved in the inverse process.
- Medical images usually describe tissues with simpler structures and larger sizes than those in biological images, the features being typically only 2-3 times smaller than the resolution limit of the imaging system. Fluorescence images of intracellular structures, by contrast, often contain abundant, heterogeneous blob- and ridge-like features and complex sub-cellular structures, potentially 10 times smaller than the diffraction limit. In general, edges embedded in such small and complex features are prone to noise contamination.
- an image restoration method comprising obtaining a plurality of primary images wherein each primary image contains a different representation of a subject, calculating a map representing at least one image feature identified by calculating at least one second order difference or higher order difference, fitting the primary images to a model that represents each primary image as an alteration of a common target image of the subject, wherein fitting the primary images is subject to a constraint that the target image includes the at least one feature and extracting the target image from the fit, wherein the target image has a spatial resolution greater than a spatial resolution of the primary images.
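The claimed sequence (obtain correlated observations, derive a second-order feature map, fit a common target, extract it) can be sketched minimally. Here an align-and-average "fit" and a plain discrete Laplacian stand in for the patented estimator and non-local-difference map; the function name and toy data are illustrative assumptions, not the patent's method.

```python
import numpy as np

def restore(primaries, shifts):
    """Toy stand-in for the claimed steps: align each primary image by
    its known integer shift, average to form a common target, then
    compute a second-order (Laplacian) feature map of that target."""
    aligned = [np.roll(J, (-dy, -dx), axis=(0, 1))
               for J, (dy, dx) in zip(primaries, shifts)]
    target = np.mean(aligned, axis=0)            # crude common-target "fit"
    lap = (np.roll(target, 1, 0) + np.roll(target, -1, 0)
           + np.roll(target, 1, 1) + np.roll(target, -1, 1) - 4 * target)
    feature_map = np.abs(lap)                    # strong at blob-like points
    return target, feature_map

# toy example: one bright point observed at three known translations
truth = np.zeros((9, 9)); truth[4, 4] = 1.0
shifts = [(0, 0), (1, 0), (0, 2)]
primaries = [np.roll(truth, s, axis=(0, 1)) for s in shifts]
target, feature_map = restore(primaries, shifts)
```

With noiseless, exactly shifted inputs the averaged target reproduces the point source and the feature map peaks at its location.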
- the at least one image feature may comprise at least one feature of the target image, or at least one feature of one or more of the primary images.
- the at least one second order difference or higher order difference may comprise at least one second order difference or higher order difference between points or regions of an image, which can for example be the target image or one or more of the primary images.
- the second order difference or higher order difference may comprise a non-local second order difference or non-local higher order difference.
- the constraint that the target image includes the at least one feature may comprise a constraint that the target image includes a representation of the at least one feature substantially in accordance with the map.
- a map can be provided that enables preservation of desired features, for example blobs, ridges or other areas, which can be difficult to identify under low-SNR environments using conventional methods.
- the preservation of such features can be useful in the context of fluorescence microscopy and/or in the context of imaging of cellular or sub-cellular features.
- the obtaining of the plurality of primary images may comprise acquiring the images using a measurement apparatus or may comprise reading previously acquired images from a data store.
- the calculating of the map may comprise calculating at least one first order difference.
- the at least one feature may comprise at least one edge, area or volume.
- the at least one first order difference may be used to identify at least one edge.
- the at least one second order difference or higher order difference may be used to identify at least one area or volume, for example at least one blob.
- Each area may comprise a region of lateral extent having a length greater than a minimum length and a width greater than a minimum width, for example a length greater than one pixel of the target image and a width greater than one pixel of the target image.
- At least one of the edges, areas or volumes may be smaller than an impulse response of an optical system used to measure the primary images.
- the at least one difference may comprise at least one non-local difference.
- Each primary image may comprise picture elements, for example pixels or voxels, and calculating the map may involve, for at least some picture elements of the primary image, defining a region around the picture element, wherein the region has an area greater than an area of the picture element, and calculating differences between regions to identify image features.
- Calculating differences between regions may involve calculating a first order difference and/or a second order difference between a first and a second region.
- the differences may comprise intensity differences.
- Each difference may be a difference in intensity or any quantity derived from intensity or a difference in brightness or a difference in contrast or a difference in colour.
- the first order non-local difference may be calculated by defining first and second regions.
- the first and second regions may be adjacent or contiguous.
- the first order non-local difference may be obtained by calculating a difference between values associated with the first region and values associated with the second region, or between a function of values associated with the first region and a function of values associated with the second region.
- the second order non-local difference may be calculated by defining three regions, for example a first, a second and a third region, wherein the second region is located between the first and the third region.
- the first, second and third regions may be adjacent or contiguous.
- the second order non-local difference may be obtained by calculating a difference between a first order difference calculated between the first and the second region and a first order difference calculated between the second and the third region.
- the second order non-local difference may be obtained by calculating a difference between a function of a first order difference calculated between the first and the second region and a function of a first order difference calculated between the second and the third region.
- the first, second or higher order non-local differences may be first, second or higher order differences or functions of such differences.
- a first order difference could be calculated as a first order derivative squared.
- Calculating a first order difference may comprise defining a first region and a second region, representing the first region by a vector or a matrix comprising values of a plurality of picture elements within the first region and a second vector or matrix comprising values of a plurality of picture elements within the second region and calculating a norm of a difference between the vector or matrix of the first region and the vector or matrix of the second region.
- the first and second regions may be adjacent or contiguous.
- Calculating a second order difference may comprise defining a first, a second, and a third region, the second region being located between the first and the third region, representing the first, second, and third region by a vector or a matrix comprising values of a plurality of picture elements within the first, second and third region respectively, and calculating a norm of a difference between a first order difference calculated between the first and the second region and a first order difference calculated between the second and the third region.
- the regions may be adjacent or contiguous to each other.
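The first- and second-order non-local differences described above can be sketched as follows; the `(row, col, height, width)` region encoding and helper names are assumptions for illustration.

```python
import numpy as np

def region_vec(img, r):
    """Flatten the (row, col, height, width) region r into a vector."""
    y, x, h, w = r
    return img[y:y + h, x:x + w].ravel()

def first_order_nld(img, r1, r2):
    """First-order non-local difference: norm of the difference
    between the pixel vectors of two regions."""
    return np.linalg.norm(region_vec(img, r1) - region_vec(img, r2))

def second_order_nld(img, r1, r2, r3):
    """Second-order NLD, with r2 between r1 and r3: norm of the
    difference of the first-order differences (r1-r2) and (r2-r3)."""
    v1, v2, v3 = (region_vec(img, r) for r in (r1, r2, r3))
    return np.linalg.norm((v1 - v2) - (v2 - v3))

# a bright one-row ridge: the second-order response is strongest
img = np.zeros((3, 3)); img[1, :] = 1.0
r1, r2, r3 = (0, 0, 1, 3), (1, 0, 1, 3), (2, 0, 1, 3)
d1 = first_order_nld(img, r1, r2)
d2 = second_order_nld(img, r1, r2, r3)
```

For this ridge the second-order NLD (2√3) is twice the first-order NLD (√3), illustrating why blob- and ridge-like regions correlate better with the second-order measure.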
- Calculating a high order difference may comprise defining a first region, a second region and a new matrix, representing the first region by a vector or a matrix comprising values of a plurality of picture elements within the first region, a second vector or matrix comprising values of a plurality of picture elements within the second region, and a new matrix comprising norms of different orders of a difference between the vector or matrix of the first region and the vector or matrix of the second region, and for example calculating the product of the Moore-Penrose pseudo-inverse (see for example Chatterjee, P. & Milanfar, P. Practical Bounds on Image Denoising: From Estimation to Information. IEEE Trans. Image Process. 20, 1221-1233 (2011)) of the new matrix and the vector or matrix of the first region.
- the picture element values may be intensity values and/or brightness and/or colour and/or any quantity derived from them, for example a difference in the frequency of the light.
- Calculating the map may comprise calculating at least one first order difference and at least one second order difference.
- Calculating a map may comprise weighting the first and second order differences by a first weighting factor and a second weighting factor respectively.
- the first weighting factor may be proportional to a ratio of the first order difference over a sum of the first and second order differences and the second weighting factor may be proportional to a ratio of the second order difference over a sum of the first and second order differences.
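A minimal sketch of the weighting, assuming the simple ratio form just described; the `eps` guard against flat regions where both differences vanish is an added assumption.

```python
def nld_weights(d1, d2, eps=1e-12):
    """Weights proportional to each difference's share of their sum:
    w1 ~ d1/(d1+d2), w2 ~ d2/(d1+d2)."""
    total = d1 + d2 + eps   # eps avoids division by zero on flat regions
    return d1 / total, d2 / total

# an edge-dominated point: first-order difference three times larger
w1, w2 = nld_weights(3.0, 1.0)
```

The weights sum to one, so the map blends edge (first-order) and blob (second-order) evidence adaptively at each point.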
- the plurality of primary images may be acquired using a microscope. That feature is particularly significant and so in a further, independent aspect of the invention there is provided an image restoration method comprising obtaining a plurality of primary images acquired using a microscope wherein each primary image contains a different representation of a subject, fitting the primary images to a model that represents each primary image as an alteration of a common target image of the subject, extracting the target image from the fit, wherein the target image has a spatial resolution greater than a spatial resolution of the primary images.
- the method may comprise calculating a map comprising at least one image feature, wherein fitting the primary images is subject to a constraint that the target image includes the at least one feature.
- the model may comprise an energy function and fitting may comprise minimizing the energy function.
- the fitting may be performed iteratively and at each iteration stage the target image may be updated and the map may be re-calculated as a function of the updated target image, until the energy function is minimized.
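The iterative fit can be illustrated with a gradient-descent sketch. The blur matrix and the feature-map prior are simplified here to translations and a plain smoothness term, so this shows only the shape of the iteration, not the patented estimator.

```python
import numpy as np

def fit_target(primaries, shifts, lam=0.0, iters=50, step=0.2):
    """Gradient descent on E(I) = sum_k ||J_k - C_k I||^2 + lam * (smoothness),
    where C_k is modelled as an integer translation (a simplification)."""
    # initial guess: align-and-average
    I = np.mean([np.roll(J, (-dy, -dx), axis=(0, 1))
                 for J, (dy, dx) in zip(primaries, shifts)], axis=0)
    for _ in range(iters):
        grad = np.zeros_like(I)
        for J, (dy, dx) in zip(primaries, shifts):
            resid = np.roll(I, (dy, dx), axis=(0, 1)) - J        # C_k I - J_k
            grad += 2 * np.roll(resid, (-dy, -dx), axis=(0, 1))  # C_k^T residual
        # discrete Laplacian acts as the gradient of a smoothness prior
        lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0)
               + np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4 * I)
        I -= step * (grad / len(primaries) - lam * lap)
    return I

truth = np.zeros((8, 8)); truth[3, 3] = 1.0
shifts = [(0, 0), (2, 1), (1, 2)]
primaries = [np.roll(truth, s, axis=(0, 1)) for s in shifts]
I_hat = fit_target(primaries, shifts)
```

With noiseless data and `lam=0` the iteration is already at the minimum, so the recovered target matches the ground truth; in the patent the prior weight is instead adjusted during iteration and the map is recomputed from each updated target.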
- Obtaining a plurality of primary images may comprise causing relative translation of the subject and imaging optics to a plurality of relative positions and obtaining at least one image for each position.
- Obtaining a plurality of primary images may comprise other types of spatial displacement of the subject or/and the imaging system and obtaining at least one image for each displacement.
- Obtaining the plurality of images may be performed in a single acquisition.
- Obtaining a plurality of images may comprise using diffractive optics to acquire the plurality of images.
- the plurality of images may comprise fluorescent images obtained using a fluorescent imaging system.
- Obtaining a plurality of primary images may comprise measuring a first set of primary images and a second set of primary images, extracting a first and second target image corresponding to the first and second set of primary images using the method of any of the preceding claims and combining the first and second target images to form a combined target image.
- the primary images of the first set may be images of a first colour
- the primary images of the second set may be images of a second, different colour.
- the primary images of the first set may comprise images of a first type of structure and the primary images of the second set comprise images of a second type of structure.
- the primary images may be images of cellular structures, optionally images of at least one of transport particle structures and microtubule structures.
- the spatial resolution of the target image may increase with an increase in the number of primary images. For example, the spatial resolution may increase up to 7 times.
- the target image may have a spatial resolution beyond a limit of diffraction.
- an apparatus comprising image acquisition means operable to acquire a plurality of primary images, a memory in communication with the image acquisition means for storing the primary images, and a processor in communication with the memory, the processor being arranged to process the primary images according to at least one of the methods as claimed and/or described herein.
- the image acquisition means may comprise a microscope operable in combination with a translation stage to acquire primary images.
- the image acquisition means may comprise a microscope operable in combination with diffractive optics to acquire primary images.
- Figure 1 is a diagram of an experimental set up for performing translation microscopy according to an embodiment.
- Figure 2 is a flow diagram of a generic method for increasing the spatial resolution of an image.
- Figure 3 is a flow diagram of an exemplary method for increasing the spatial resolution of an image.
- Figure 4 (a) is a synthetic 1-D signal at low resolution and high resolution.
- Figure 4 (b) is the 1st and 2nd order NLD response of the LR signal of figure 4 (a), and the combination of the 1st and 2nd order NLDs.
- Figure 4(c) is a restored signal obtained after 131 iterations by IRLS.
- Figure 4(d) is a restored signal obtained after 388 iterations by IRLS.
- Figure 4(e) is a restored signal obtained after 517 iterations by IRLS.
- Figure 4(f) is a HR restored signal obtained by TRAM and by a prior art method using an edge-preserving prior model.
- Figure 5(a) is an ISO 12233 resolution chart.
- Figure 5(b) is a restored image of figure 5(a) with added noise, using TRAM.
- Figure 5 (e) is a close-up region marked by a red box in figure 5 (a).
- Figure 5 (g) is a restored image of figure 5 (f) obtained by TRAM.
- Figure 5 (h) is a restored image of figure 5 (f) obtained by ALG.
- Figure 5 (i) is a restored image of figure 5 (f) obtained by RSR.
- Figure 5 (j) is a restored image of figure 5 (f) obtained by ZMT.
- Figure 6 (a) is an HR image showing five different synthetic structures.
- Figure 6 (b) is an artificially blurred image of figure 6 (a).
- Figure 6 (c) is a restored image of figure 6(b) using TRAM.
- Figure 6 (d) is the full width at half maximum (FWHM) ratio of the LR image to the restored images obtained for the five types of structures.
- Figure 6(e) is the FWHM ratio of the LR image to the restored image as a function of the number of LR images and obtained for 3 input noise levels.
- Figure 6 (f) is the FWHM ratio of the LR to the restored image versus the standard deviation (Std) of the input noise.
- Figure 7 (a) is a low resolution image showing a plurality of quantum dots.
- Figure 7 (b) is a zoomed image of a first region of figure 7 (a).
- Figure 7 (c) is a restored super-resolution image of the first region of figure 7 (a) acquired using 32 low resolution images.
- Figure 7 (d) is a restored super-resolution image of figure 7 (b) acquired using 64 low resolution images.
- Figure 7 (e) is a resolution curve showing the FWHM of a QD image recovered using an increasing number of low resolution images.
- Figure 7 (f) is a zoomed image of a second region of figure 7 (a).
- Figure 7 (g) is a restored super-resolution image of figure 7 (f) acquired using 64 low resolution images.
- Figure 7 (h) is a zoomed image of a third region of figure 7 (a).
- Figure 7 (i) is a restored super-resolution image of figure 7 (h) acquired using 64 low resolution images.
- Figure 7 (j) is an intensity fluctuation measured over time in figure 7 (b).
- Figure 7(k) is an intensity fluctuation of the unresolved image of figure 7 (f) measured over time and an intensity fluctuation of each of the two resolved QDs in figure 7(g).
- Figure 7 (l) is an intensity fluctuation of the unresolved image of figure 7 (h) measured over time (black curve) and an intensity fluctuation of each of the three resolved QDs in Figure 7(i).
- Figure 8 (a) is a low resolution image of a pulmonary endothelial cell.
- Figure 8 (b) is a super resolution restored image corresponding to figure 8 (a) and obtained using 60 low resolution images.
- Figure 8 (c) is a zoom on a first region of figure 8(a).
- Figure 8 (d) is a super resolution restored image corresponding to figure 8 (c).
- Figure 8 (e) is a zoom on a second region of figure 8(a).
- Figure 8 (f) is a super resolution restored image corresponding to figure 8 (e).
- Figure 9 (a) is an image of the microtubules restored by the TRAM method.
- Figure 9 (b) is the feature image of the microtubules of figure 9(a).
- Figure 10(a) is an LR image of a human face.
- Figure 10(b) is a restored image of the human face of Figure 10(a) using SR translation imaging.
- Figure 10(c) is a restored image of the human face of Figure 10(a) using the ALG method.
- Figure 10(d) is a restored image of the human face of Figure 10(a) using the RSR method.
- Figure 10(e) is a restored image of the human face of Figure 10(a) using the ZMT method.
- Figure 1 shows a diagram of a setup 10 for performing fluorescence super-resolution microscopy.
- the setup comprises a laser system 12 in optical communication with an inverted microscope, a camera 24, a memory 26 and a processor 28.
- the inverted microscope has an objective 14 positioned under a translation stage 16, collimating optics and imaging optics (not shown) and a set of emission 20 and excitation 22 filters.
- a sample 18 is positioned onto the translation stage 16.
- the translation stage 16 is positioned in a first position with respect to the objective, where the sample is in the field of view of the objective.
- a laser beam having a wavelength suitable for fluorescence excitation of the sample is directed onto an aperture of the objective via the excitation filter 22 and collimating optics (not shown). Fluorescent light from the sample is collected by the objective and imaged onto the camera 24 via the emission filter 20 and imaging optics (not shown).
- the camera 24 records a first image of the sample in a first position.
- the translation stage 16 is then translated to a second position where the sample remains in the field of view of the objective, and a second image is recorded by the camera 24.
- a plurality of images are recorded by the camera for different translation stage positions and stored in a memory 26 as a set of primary images.
- the processor 28 retrieves the set of primary images from the memory 26 and runs an image restoration algorithm. Each image comprises picture elements for example digital pixels.
- QDs quantum dots
- ASI motorized stage
- Image data was collected using an Orca-Flash 4.0s CMOS camera (Hamamatsu) which, in combination with a 1.6× magnifier in the image path, provided an effective pixel size of 27 × 27 nm. Ten frames were acquired at each position before translation of the stage to the next position.
- the primary images could also be acquired in a single measurement. This could be achieved using diffractive optics to simultaneously record the diffraction images of a same subject in different diffraction orders. In this case each diffraction image represents different spatial shift of the same subject. In such an embodiment no translation of the sample would be required.
- Figure 2 shows a flow diagram 30 of the stages of a method performed by the processor.
- when applied to microscopy, the method is referred to as translation microscopy (TRAM) and can be used to achieve super-resolution imaging.
- TRAM translation microscopy
- after receiving the plurality of primary images, each containing a different representation of a subject (stage 32), the processor fits the primary images to a model that represents each primary image as a distortion, or other alteration, of a common target image of the subject (stage 34).
- the target image is then extracted from the fit (stage 36).
- the extracted target image has a spatial resolution greater than a spatial resolution of the primary images.
- Figure 3 shows a flow diagram of an embodiment of the method highlighted in Figure 2.
- the fitting stage 34 of Figure 2 is decomposed into stages 42, 44, 46, 48 and 50.
- the mode of operation of the method involves: obtaining (40) a plurality M of low-resolution (LR) correlated images J, also referred to as primary images; calculating (42) a correspondence matrix C_k and a convolving matrix P_k to model a predicted image; identifying (44) edge and blob features in the original image by calculating a series of first-order non-local differences (NLDs) and second-order NLDs between different regions of the original image; calculating (46) a structural map of the original image; defining (48) an energy function that is a function of a high-resolution (HR) image I to be restored and of the structural map; minimizing (50) the energy function; and extracting (52) the HR restored image.
- LR low resolution
- Stage 40 may comprise measuring images or reading images that have been measured previously.
- the low resolution (LR) images to be used to recover a high resolution (HR) image via an inverse process must be correlated but not identical.
- the LR primary images are recorded using the setup of Figure 1 by translating the sample or specimen in the XY plane as described above.
- the obtained primary images J are considered as the outcome of an original high resolution (HR) image, or target image, I, after an image-degrading process involving blurring and noise contamination.
- Stage 42 estimates a predicted image by modelling the image-degrading process. This process can be formulated by a linear image capturing model as:
- J_k = P_k C_k I + N_k, for k = 1, ..., M
- M denotes the number of images, the column vectors J_k and I consist of row-wise concatenations of the LR and HR images, P_k is a blurring matrix (also referred to as convolving matrix) determined by the PSF of the imaging system, and N_k represents additive white Gaussian noise (AWGN).
- AWGN additive white Gaussian noise
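The linear capture model described above, with a predicted image formed as the blurring matrix times the correspondence matrix times the target image plus noise, can be simulated as follows. The circular shift for the translation and the FFT-based convolution with circular boundaries are illustrative simplifications.

```python
import numpy as np

def capture(I, shift, psf, sigma, rng):
    """Simulate one LR observation J_k = P_k C_k I + N_k:
    translate (C_k), convolve with the PSF kernel (P_k), add AWGN (N_k)."""
    shifted = np.roll(I, shift, axis=(0, 1))                     # C_k I
    otf = np.fft.fft2(psf, s=I.shape)                            # P_k in Fourier space
    blurred = np.real(np.fft.ifft2(np.fft.fft2(shifted) * otf))  # P_k C_k I
    return blurred + sigma * rng.standard_normal(I.shape)        # + N_k

rng = np.random.default_rng(0)
I = np.zeros((16, 16)); I[8, 8] = 1.0
delta = np.zeros((3, 3)); delta[0, 0] = 1.0   # identity PSF, for checking
J = capture(I, (2, 3), delta, 0.0, rng)
```

With a delta-function PSF and zero noise the observation reduces to a pure translation of the target, which makes the model easy to sanity-check before substituting a realistic microscope PSF.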
- SR restoration aims at recovering the HR images beyond the diffraction limit from the LR observations.
- the blurring matrix for an optical microscope cannot have full rank and is not invertible. Therefore, I is usually estimated by minimizing a pre-defined energy function E(I).
- the first term in the energy function E(I) measures the difference between the LR observations and the predicted data in an L2-norm form, sum over k of ||J_k - P_k C_k I||^2
- C_k is a matrix measuring the pixel-level correspondence between the HR images I and I_k
- ρ(·) is an increasing function.
- the correspondence matrix C M is unknown to the observer but is assumed to be unchanged during the degrading process. As such, the matrix can be determined by the correspondence between LR images.
- the correspondence matrix can be determined from the motion vectors of two LR images, given by the relative positions between the camera and the specimen.
- the PSF matrix in a laboratory environment is readily calculated based on the specifications of the microscope, or can be accurately estimated using experimental images of single point sources such as bead or quantum dot samples.
- a predicted image is calculated as the product of the blurring matrix P_k times the correspondence matrix C_k times the target image I, where I is common to all the predicted images.
- the desired solution, i.e. the restored SR image, is the target image I.
- stages 44 and 46 are performed sequentially to calculate a map, also referred to as a prior model, of the target image.
- in the presence of noise, a prior model term, Φ(I), is included in the energy function.
- the purpose of the prior model is to regulate the minimization process in order to remove noise while preserving fine structures in the LR observations.
- the proportionality parameter λ is adjusted during the iterative process to balance noise removal and feature preservation.
- an edge is a fundamental feature that underlies more complicated features or structures in an image, so the latter can be preserved as long as edges are preserved.
- a new prior model is presented, capable of characterizing complex biological structures while avoiding over-smoothing in low signal-to-noise-ratio images.
- the model is based on the fact that diverse biological structures such as vesicles, filaments, microtubules and their complex networks are made primarily of two basic features, blobs and ridges, which are circular and line-like regions either brighter or darker than their surroundings. The circular regions, also referred to as areas, are better characterized by a second-order difference than by a first-order difference, which measures edges.
- the prior model is expressed as:
- Φ(I) = sum_{x=1..N} [ w1(x) NLD1(x) + w2(x) NLD2(x) ]   (Eq. 4)
- where NLD1 and NLD2 are the first and second order non-local differences (NLDs) and N is the pixel number of image I.
- by non-local it is meant that the differences are computed between regions (patches) instead of between picture elements (pixels).
- Calculating the map involves, for each picture element of the primary image, defining a region around the picture element, wherein the region has an area greater than the area of the picture element, and calculating differences between regions to identify image features.
- Calculating differences between regions involves, in this case, calculating a first order difference (e.g. gradient) and a second order difference (e.g. difference of the gradient) between a first and a second region.
- the first order non-local difference is calculated by defining first and second adjacent regions. Each of the first and second adjacent regions is represented as a vector comprising the intensity values of the pixels present within the first and second adjacent regions respectively.
- the first order difference is then obtained by calculating a norm of a difference between the vector of the first adjacent region and the vector of the second adjacent region.
- the second order non-local difference is calculated by defining three adjacent regions, for example a first, a second and a third region wherein the second region is located between the first and the third region. Each of the three regions is represented by a vector comprising the intensity values of the pixels present within each of the three adjacent regions respectively.
- the second order non-local difference is then obtained by calculating a norm of a difference between a first order non-local difference calculated between the first and the second region, and a first order nonlocal difference calculated between the second and the third region.
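The first and second order NLDs just described can be sketched directly from their definitions. This is an illustrative fragment; the patch extraction convention (a square region of half-width r) is an assumption.

```python
import numpy as np

def patch_vector(img, y, x, r):
    # Vector of intensity values of the (2r+1) x (2r+1) region centred at (y, x).
    return img[y - r:y + r + 1, x - r:x + r + 1].astype(float).ravel()

def nld1(v1, v2):
    # First order NLD: norm of the difference of two adjacent region vectors.
    return np.linalg.norm(v1 - v2)

def nld2(v1, v2, v3):
    # Second order NLD: norm of the difference of two first order differences,
    # i.e. ||(v1 - v2) - (v2 - v3)|| = ||v1 - 2*v2 + v3||.
    return np.linalg.norm((v1 - v2) - (v2 - v3))
```

On a flat region both NLDs are zero; nld1 responds strongly across an edge, while nld2 responds strongly at an isolated blob, which is what the weighting of the two terms exploits.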
- the first and second order NLDs are calculated as:
- NLD1(x) = ||v1 - v2||,   NLD2(x) = ||(v1 - v2) - (v2 - v3)||   (Eq. 5)
- where v1, v2 and v3 denote the intensity vectors of the first, second and third regions around pixel x respectively.
- NLDs are more robust feature detectors compared to pixel-level gradient and Laplace operators in the presence of noise.
- the coefficients w1(x) and w2(x) are the weights that balance the contributions of the two NLDs, in the forms given by Eq. (6).
- in the vicinity of an edge the weight of the first order NLD is large, so the first order NLD dominates the prior model in this region.
- the second order NLD dominates in the vicinity of blobs for the same reason.
- the prior model Eq. (4) can also be constructed by including higher-order differences, for which the coefficients can also be calculated in a similar way to Eq. (5) and Eq. (6).
- An alternative way to calculate the differences of different orders is to use a Taylor series expansion of the same patch represented as a vector.
- the highest order of differences for each patch can be then adaptively determined using principal components analysis by considering the noise levels (see for example Chatterjee, P. & Milanfar, P. Clustering-based denoising with locally learned dictionaries. Image Processing, IEEE Transactions on 18, 1438-1451 (2009)).
- the weight coefficients for the differences can also be calculated in a similar way to Eq. (6).
- Stage 48 of Figure 3 defines the energy function.
- the energy function has already been described above and can be rewritten by substituting Eq. (4) into Eq. (2).
- since the NLDs involve patches, each of which contains multiple pixels, two matrices, D1 and D2, are defined in order to represent the first and second order NLDs in matrix form.
- a null matrix, 0, is included in these operators to avoid the boundary effect, and a column vector collecting the weights is further defined.
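The exact construction and dimensions of D1 and D2 are not reproduced in this text. The following is one plausible 1-D sketch of such operators, in which each valid row computes a patch difference and rows that would cross the signal boundary are left as null rows (the null-matrix device mentioned above); the function name and patch convention are assumptions.

```python
import numpy as np

def nld_operator(n, p, order):
    # Dense operator whose row x computes a first (order=1) or second
    # (order=2) order difference of length-p patches of a length-n signal;
    # rows near the boundary are left null to avoid the boundary effect.
    D = np.zeros((n, n))
    for x in range(n):
        if order == 1 and x - p >= 0 and x + p <= n:
            D[x, x - p:x] = -1.0          # left patch
            D[x, x:x + p] = 1.0           # right patch
        elif order == 2 and x - p >= 0 and x + 2 * p <= n:
            D[x, x - p:x] = 1.0           # left patch
            D[x, x:x + p] = -2.0          # middle patch
            D[x, x + p:x + 2 * p] = 1.0   # right patch
    return D
```

Applied to a linear ramp, the second order rows give zero while the first order rows give a constant, mirroring the behaviour of gradient and Laplacian operators at patch level.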
- Eq. (17) is a nonlinear equation in I, because the weight matrices A_NL1, A_NL2 and A_k also involve this variable, so it will have multiple solutions that can correspond to local and global minima of the energy function. As such, traditional optimization methods such as the gradient-descent and variational calculus methods are inappropriate for solving Eq. (17).
- Stages 50 and 52 are performed sequentially to minimize the energy function and extract the target image.
- the minimization problem in Eq. (2) is solved by a modified iteratively reweighted least squares (MIRLS) method. During the minimization process, both the target image and map/prior model evolve.
- This step enforces that the multiple solutions I_k obtained by step (c) should be similar to each other.
- step (e): Go to step (c) if Eq. (18) cannot be satisfied using the current estimate; otherwise update the parameter λ according to the residual noise in the current estimate I.
- step (f): The iteration stops when I converges and is considered to be the restored image; otherwise go to step (b) to compute again the weight matrices with the updated λ.
- the rate of the evolution is adjusted at each iteration stage based on the difference of HR solutions between the present and previous stages; fast in the beginning, it becomes slower as the energy function gets closer to the global minimum.
- the parameter λ is also updated at each iteration stage according to the residual noise contained in the current HR image estimate. When the mean square difference of the HR image estimates between two adjacent iterations is below a pre-set threshold, the iteration stops and the solution is considered to be the restored HR target image.
- I_{l,k} = I_{l-1,k} + (P_k^T P_k + eps)^{-1} (P_k^T J_k - P_k^T P_k I_{l-1,k})
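The full MIRLS update involves the patent-specific weight matrices and per-frame solutions and is not reproduced here. The core reweighting idea can, however, be sketched with a classical iteratively reweighted least squares solver for a robust linear data term; the function name, exponent convention and stopping rule are illustrative assumptions.

```python
import numpy as np

def irls(A, b, p=1.0, n_iter=100, eps=1e-8, tol=1e-10):
    # Iteratively reweighted least squares: minimises sum_i |(A x - b)_i|^p
    # by solving a weighted L2 problem at every iteration.
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # plain least-squares start
    for _ in range(n_iter):
        r = A @ x - b
        w = (np.abs(r) + eps) ** (p - 2)           # robust residual weights
        x_new = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * b))
        if np.linalg.norm(x_new - x) < tol:        # convergence test, cf. step (f)
            return x_new
        x = x_new
    return x
```

With p = 1 the solver is robust to outliers, much as the robust function rho(.) down-weights heavily corrupted observations during the minimization.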
- the HR signal was recovered using 64 LR synthetic signals.
- Figure 4b shows the responses of the first order NLD 60 and second order NLDs 62 and their combination 64 to the noisy LR signal.
- the value of the first order NLD is relatively large in the vicinity of the edge but smaller in the neighbourhood of the blobs and the stripe.
- the second order NLD responds better to blobs and stripes; a combination of the two gives rise to a high and well-balanced response to all the features and a low response to the background, as shown in Figure 4b (blue).
- Figure 4 (c-e) shows recovery of the HR signal at different stages of the iterative process. It can be observed that background regions are smoothed heavily in the initial stage while features are being restored (Fig. 4c). As the signal evolves during the inverse process, the smoothing effect "propagates" towards the feature regions, which leads to higher contrast between features and background and therefore increased responses of the first and second order NLDs to the features. The system performs in such a positive feedback manner, leading to more effective noise reduction and resolution improvement in the second stage, as shown in Figures 4d-e. The iteration process completes when the difference between the signals of two adjacent iterations falls below a predefined threshold.
- Figure 4 (g) shows the final result.
- a good restoration of features and reduction of noise are obtained compared to the noise-free signal in Figure 4 (a).
- a second signal was restored using the same set of LR frames but by setting our method with W
- Figure 5 shows a 2-D 8-bit ISO 12233 resolution chart containing blobs and ridges with varying sizes and orientations, which is commonly used for a standard evaluation of SR restoration.
- Figure 5 (b) shows a restored image using TRAM with a set of 64 LR frames. All the features in the chart, including stripes, curves and numbers, are shown to be very well recovered.
- Figures 5 (e-j) show respectively the HR, LR and four restored images of a magnified boxed region in Figure 5(a), obtained using TRAM (g), ALG (Babacan, S. D., Molina, R. & Katsaggelos, A. K. Variational Bayesian Super Resolution. IEEE Trans. Image Process. 20, 984-999 (2011)) (h), RSR (Farsiu, S., Robinson, M. D., Elad, M. & Milanfar, P. Fast and robust multiframe super resolution. IEEE Trans. Image Process. 13, 1327-1344 (2004)) (i) and ZMT (Zomet, A., Rav-Acha, A. & Peleg, S. Proc. IEEE CVPR, 645-650 (2001)) (j).
- the PSNRs of the restored results were plotted for all four methods on the 64 LR frames for different degradation cases with various noise and PSF levels. As seen in Figure 5(d) the TRAM performs noticeably better than the other methods, at least by 5dB in terms of PSNR.
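For reference, the PSNR figure of merit used in these comparisons can be computed as follows; this is the standard definition, not something specific to the patent.

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    # Peak Signal to Noise Ratio in dB between two same-shaped images;
    # `peak` is the maximum representable intensity (255 for 8-bit data).
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(restored, dtype=float)
    mse = np.mean((ref - rec) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

A 5 dB advantage, as reported for TRAM, corresponds to a mean squared error lower by a factor of 10^(5/10), i.e. roughly 3.2.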
- Figure 6 shows five synthetic cells with different structures containing blobs and ridges that mimic the key features of transport particle and microtubules in intracellular structures.
- the image is a HR 8-bit image (2312 pixels x 384 pixels).
- the blobs have a diameter of 21 pixels and a center distance of 21 pixels between the two adjacent ones.
- the ridges have the FWHM of 10 pixels and a center-line distance of 32 pixels.
- the 1-D vertical profiles for the four types of particle arrangements and a cross-sectional profile for the three microtubules are plotted 80 in this figure.
- the corresponding intensity profile 82 shows that the cell structures are diffraction unresolved.
- Figure 6 (c) shows the restored image obtained by the TRAM method. The resolution improvement is measured to be around 6.3 times for each structure in terms of the FWHM ratio (Fig. 6(d)), demonstrating the robustness of the method for different structures.
- the resolution in the restored image is ~14 pixels (28.4 nm) and is smaller than the distances between the adjacent particles and parallel microtubules; as such, they are all resolved, as shown by the intensity profiles in Fig. 6 (c).
- the decrease of the FWHM ratio with increasing noise level can be divided into three stages. In the first stage, where the noise contamination is low (Std from 2 to 10), the FWHM ratio decreases rapidly.
- the FWHM ratios for all levels of noise contamination show a monotonic increase with increasing number of LR observations and begin to saturate at 50 LR images. There is however a shift among the three curves because of the different severities of noise contamination; there is less resolution improvement for a higher level of noise contamination for a fixed number of LR images and, for higher noise levels, more LR observations are required to achieve the same resolution improvement compared to lower noise cases.
- Figure 7(a) shows a 16-bit LR image of a plurality of quantum dots (QD).
- the image was acquired with excitation at 405 nm wavelength using a widefield microscope equipped with a 150x 1.45-NA objective. This setup resulted in a diffraction limit of 228 nm (thus a PSF of 194 nm at FWHM), which in turn determines the convolving matrix P_k.
- a set of LR images was acquired whilst translating the sample along the y-axis in steps of 100 nm, from which C_k was determined.
- Figure 7 (b) shows a zoomed image of region 1, where the intensity profile has an Airy-disk shape with a FWHM of 194 nm (Gaussian fitting), in agreement with the theoretical value.
- Figure 7 (c) and (d) show the restored SR images resulting from 32 and 64 LR observations, giving measured FWHMs of 39.7 and 30.6 nm respectively.
- Figure 7 (e) shows that the FWHM measured from a restored image decreases exponentially when increasing the number of LR images used to restore the image.
- the spatial resolution improves ⁇ 3-fold for 16 LR images and up to 7-fold for 64 LR images.
- these results are consistent with those obtained in the synthetic cell data experiment discussed above in terms of resolution improvement and its dependence on the number of LR image frames.
- Figure 7 (f) shows a zoomed image of a second region of figure 7 (a) and Figure 7 (g) shows the corresponding restored super-resolution image of figure 7 (f) acquired using 64 low resolution images.
- Figure 7 (g) reveals the presence of 2 QDs.
- Figure 7 (h) shows a zoomed image of a third region of figure 7 (a) and Figure 7 (i) shows the corresponding restored super-resolution image of figure 7 (h) acquired using 64 low resolution images.
- Figure 7 (i) reveals the presence of 3 QDs.
- Figures 7 (g) and (i) show that the method described above makes it possible to identify diffraction-unresolved multiple QDs in Figure 7 (a).
- QD intensity fluctuations were investigated, taking advantage of the quantum blinking effect of single QDs.
- Figure 7 (j) shows the intensity fluctuation measured over time in figure 7 (b) where the LR image contains a single QD. In this case the intensity fluctuation varies quantally between bright and dark states.
- Figure 7(k) shows the intensity fluctuation measured over time in figure 7 (f) where the LR image contains 2 QDs.
- the intensity fluctuation signal 100 is the sum of those of the two dots (curves 102, 104), consequently the "off" state appears less frequently as shown by the black curve.
- This characteristic becomes more prominent when there are more QD signals in a bright spot, as shown in Figure 7 (l) corresponding to the case of three QDs (curves 102, 104, 106).
- the intensity fluctuation tends to be averaged out by random blinks of all the individual dots in the region.
- Figure 8 (a) shows a multi colour low resolution image of a bovine pulmonary artery endothelial cell.
- the TRAM method was performed by measuring a first set of primary images of a first colour, a second set of primary images of a second colour and a third set of primary images of a third colour.
- the corresponding first, second and third target images were estimated and then combined to form a multicolour target image.
- the three colours represent three different stained structures: Red: Actin 110, Green: Microtubules 112 and Blue: DNA (DAPI) 114, respectively.
- Figure 8 (b) shows a super resolution restored image corresponding to figure 8 (a), obtained using 60 low resolution images. The image demonstrates a significant improvement in resolution and signal-to-noise ratio in all three colours.
- Figures 8 (c) and (d) show a zoomed image of the microtubule network of figure 8 (a) at low and high resolution respectively.
- at low resolution, the microtubule network is unresolved and overlaps with the DAPI signal.
- Individual microtubule filaments and DAPI profiles are clearly resolved on the recovered super resolution image of Figure 8 (d).
- the measured FWHM of a single microtubule is 31 nm, which represents a resolution improvement of 6.4-fold.
- Figures 8 (e) and (f) show a zoomed image of an area of figure 8 (a) where the three stained structures are densely packed. At LR the three colours are mixed (Figure 8 (e)). In the recovered SR image (Figure 8 (f)) the relative position of each structure is clearly improved, in particular the boundary between actin and microtubule filaments.
- Figure 9 (a) shows the restored high-resolution image of the microtubules by the TRAM method.
- Figure 9 (b) shows the map corresponding to the microtubules in figure 8.
- both the target image and the map evolve (for this case, from thick, unfocused lines to thin, focused lines).
- Figure 9 (a) and (b) show the images obtained at the last iteration, where the target image is considered to be the "true solution".
- Figure 10 (a-e) shows the restoration of a human portrait, demonstrating that the method can be applied to improve the resolution of images taken by commercial cameras.
- Figure 10 (a) shows a LR human portrait provided by UCSC. In this case multiple images of the portrait were acquired by spatially displacing the camera for each image taken.
- Figure 10 (b-e) show the restored images obtained by SR Translational imaging (b), ALG (c) , RSR (d) and ZMT (e).
- SR Translational imaging provides a better recovery, including the eyes, eyebrows, nose and hair.
- our method is also very effective in suppressing noise without introducing artifacts.
- RSR and ZMT do not effectively restore the HR image, since the gradient-based prior function over-smooths the features during the inverse process.
- ALG recovers the resolution better than RSR and ZMT but results in severe zigzag artifacts around the edges.
- the principle of the method is not limited to the restoration of images relative to specific systems and can be adapted to identify the most suitable types of features in order to improve the spatial resolution of a particular system.
- a combination of first and second order differences can be used, for example as described, in order to identify edges and blobs/areas. It is noted that other, higher orders of difference may be used, alone or in combination, in order to calculate a map/prior model in alternative embodiments. For example, a map could be obtained by calculating a third order difference alone. It would also be possible to obtain a map by calculating a combination of orders, such as a first and third order, or a second and third order, or a first, second and third order.
- the differences can be calculated using any suitable method, for example by using a numerical method, determining differences, applying an algorithm to a set of data (for example a set of intensity or other data), or analytically solving an expression.
- a specific way of calculating the first and second order differences in one embodiment has been described above with reference to Equation 5.
- Any suitable method for determining first order, second order or higher order non local differences can be used in alternative embodiments.
- the first order non-local difference is calculated by defining first and second regions. The first and second regions may be adjacent or contiguous. The first order non-local difference is then obtained by calculating a difference between values associated with the first region and values associated with the second region, or between a function of values associated with the first region and a function of values associated with the second region.
- each of the first and second regions can be represented as a vector or a matrix comprising values of a plurality of picture elements (for example pixels or voxels) within the first and second regions respectively.
- the first order non-local difference is then obtained by calculating a norm of a difference between the vector (or matrix) of the first region and the vector (or matrix) of the second region.
- the second order non-local difference is calculated by defining three adjacent regions, for example a first, a second and a third region, wherein the second region is located between the first and the third region.
- the first, second and third regions may be adjacent or contiguous.
- the second order non-local difference is then obtained by calculating a difference between a first order difference calculated between the first and the second region and a first order difference calculated between the second and the third region.
- the second order non-local difference may be obtained by calculating a difference between a function of a first order difference calculated between the first and the second region and a function of a first order difference calculated between the second and the third region.
- each of the three regions may be represented by a vector or a matrix comprising values of a plurality of picture elements (for example pixels) within each of the three regions respectively.
- the second order difference is then obtained by calculating a norm of a difference between a first order difference calculated between the first and the second region and a first order difference calculated between the second and the third region.
- a third order non-local difference is calculated as the difference between a first second-order NLD and a second second-order NLD, using five adjacent or contiguous regions.
- an N-th order NLD is calculated as the difference between a first (N-1)-th order NLD and a second (N-1)-th order NLD, using 2N-1 adjacent or contiguous regions.
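A minimal recursive sketch of this construction follows. One reading is assumed here: the lower-order differences are kept as vectors (their norms being the NLDs proper) and the two sub-spans share the middle region; with this reading, order N consumes 2^(N-1)+1 patches, which coincides with the 2N-1 regions stated above for the first, second and third order cases.

```python
import numpy as np

def nld_vector(patches, order):
    # Recursive non-local difference kept as a vector; the N-th order NLD
    # is the norm of this vector.  The first and second halves of the patch
    # list share the middle patch.
    if order == 1:
        assert len(patches) == 2
        return patches[0] - patches[1]
    mid = len(patches) // 2
    return (nld_vector(patches[:mid + 1], order - 1)
            - nld_vector(patches[mid:], order - 1))

def nld(patches, order):
    # N-th order NLD of a list of patch vectors.
    return np.linalg.norm(
        nld_vector([np.asarray(p, dtype=float) for p in patches], order))
```

For order 2 with three patches this reduces to ||v1 - 2*v2 + v3||, matching the second order NLD defined earlier.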
- the regions defined for the calculation of the first, second or higher order non-local difference may be of any particular shape, for example each region may form a substantially rectangular, triangular or circular region.
- the values of the picture elements contained in the vector or matrix representing these regions may be an intensity value, or other quantity such as brightness, colour or frequency or any quantity derived from these quantities.
- the first, second or higher order non-local differences are first, second or higher order derivatives or functions of such derivatives. For example a first order difference could be calculated as a first order derivative squared.
- the inverse process of the method is not limited to minimizing an energy function as described above.
- Other types of energy functions could be used.
- the robust function Eq.(3) can be replaced by an exponential function, which would not significantly change the results.
- the method is not limited to a specific fluorescence modality.
- the method could be used with fluorescence anisotropy or fluorescence lifetime type measurements.
- the method is also not limited to microscopy imaging techniques or to imaging applications performed in the optical region of the spectrum.
- the method can be used to improve the spatial resolution of X-ray CT scans, such as CT scans used in oil exploration applications. In this case multiple images could be taken at different angles.
- the method could also find applications for in vivo imaging applications.
- the method could be of particular interest in these cases where the subject (a patient or an animal) is moving during measurement.
- the motion of the subject provides a natural translational motion that can be used as a means of obtaining a plurality of primary images.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
Description
Claims
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1512290.6A GB2528179B (en) | 2013-01-14 | 2014-01-14 | An image restoration method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1300637.4 | 2013-01-14 | ||
GB201300637A GB201300637D0 (en) | 2013-01-14 | 2013-01-14 | An Image Restoration Method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014108708A1 true WO2014108708A1 (en) | 2014-07-17 |
Family
ID=47757955
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2014/050091 WO2014108708A1 (en) | 2013-01-14 | 2014-01-14 | An image restoration method |
Country Status (2)
Country | Link |
---|---|
GB (2) | GB201300637D0 (en) |
WO (1) | WO2014108708A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117541495A (en) * | 2023-09-04 | 2024-02-09 | 长春理工大学 | Image stripe removing method, device and medium for automatically optimizing model weight |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110084883A (en) * | 2019-04-15 | 2019-08-02 | 昆明理工大学 | A method of it inducing brain activity and rebuilds face-image |
-
2013
- 2013-01-14 GB GB201300637A patent/GB201300637D0/en not_active Ceased
-
2014
- 2014-01-14 WO PCT/GB2014/050091 patent/WO2014108708A1/en active Application Filing
- 2014-01-14 GB GB1512290.6A patent/GB2528179B/en not_active Expired - Fee Related
Non-Patent Citations (17)
Title |
---|
AHMAD HUMAYUN ET AL: "A Novel Framework for Molecular Co-Expression Pattern Analysis in Multi-Channel Toponome Fluorescence Images", MIAAB 2011 (PROCEEDINGS OF THE 2011 MICROSCOPIC IMAGE ANALYSIS WITH APPLICATIONS IN BIOLOGY), 2 September 2011 (2011-09-02), pages 109 - 112, XP055108769 * |
BABACAN, S. D.; MOLINA, R.; KATSAGGELOS, A. K: "Variational Bayesian Super Resolution", IEEE TRANS. IMAGE PROCESS., vol. 20, 2011, pages 984 - 999, XP011411765, DOI: doi:10.1109/TIP.2010.2080278 |
CHATTERJEE P ET AL: "Clustering-Based Denoising With Locally Learned Dictionaries", IEEE TRANSACTIONS ON IMAGE PROCESSING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 18, no. 7, 1 July 2009 (2009-07-01), pages 1438 - 1451, XP011268735, ISSN: 1057-7149, DOI: 10.1109/TIP.2009.2018575 * |
CHATTERJEE, P.; MILANFAR, P.: "Clustering-based denoising with locally learned dictionaries. Image Processing", IEEE TRANSACTIONS ON, vol. 18, 2009, pages 1438 - 1451 |
CHATTERJEE, P.; MILANFAR, P.: "Practical Bounds on Image Denoising: From Estimation to Information", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 20, 2011, pages 1221 - 1233, XP011411797, DOI: doi:10.1109/TIP.2010.2092440 |
DILIP KRISHNAN ET AL: "Fast Image Deconvolution using Hyper-Laplacian Priors", PROC. NEURAL INF. PROCESS. SYST., vol. 1041, 6 December 2010 (2010-12-06), pages 1033, XP055108267 * |
ELAD, M.; FEUER, A.: "Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images", IEEE TRANS. IMAGE PROCESS, vol. 6, 1997, pages 1646 - 1658, XP000724632, DOI: doi:10.1109/83.650118 |
FARSIU, S.; ROBINSON, D.; ELAD, M.; MILANFAR, P.: "Advances and challenges in super-resolution", INT. J. IMAGING SYST. TECHNOL., vol. 14, 2004, pages 47 - 57, XP008142293 |
FARSIU, S.; ROBINSON, M. D.; ELAD, M.; MILANFAR, P: "Fast and robust multiframe super resolution", IEEE TRANS. IMAGE PROCESS., vol. 13, 2004, pages 1327 - 1344, XP011118230, DOI: doi:10.1109/TIP.2004.834669 |
HEINTZMANN, R.; GUSTAFSSON, M. G. L: "Subdiffraction resolution in continuous samples", NAT. PHOTONICS, vol. 3, 2009, pages 362 - 364 |
HELL, S. W.: "Microscopy and its focal switch", NAT. METHODS, vol. 6, 2009, pages 24 - 32 |
MARSHALL F TAPPEN ET AL: "Exploiting the Sparse Derivative Prior for Super-Resolution and Image Demosaicing", IEEE WORKSHOP ON STATISTICAL AND COMPUTATIONAL THEORIES OF VISION AT ICCV 2003, 13 October 2003 (2003-10-13), XP055108792 * |
TIBSHIRANI, R: "Regression shrinkage and selection via the lasso", JOURNAL OF THE ROYAL STATISTICAL SOCIETY. SERIES B (METHODOLOGICAL, 1996, pages 267 - 288 |
UROS KRZIC: "Multiple-view microscopy with light-sheet based fluorescence microscope", DISSERTATION, 8 July 2009 (2009-07-08), Heidelberg, pages 1 - 149, XP055079132, Retrieved from the Internet <URL:http://archiv.ub.uni-heidelberg.de/volltextserver/9668/1/Uros_Krzic_PhD_Thesis_Heidelberg_University_July_2009_v40.pdf> [retrieved on 20130913] * |
WON, R.: "Eyes on super- resolution", NAT. PHOTONICS, vol. 3, 2009, pages 368 - 369 |
ZHEN QIU ET AL: "A new feature-preserving nonlinear anisotropic diffusion for denoising images containing blobs and ridges", PATTERN RECOGNITION LETTERS, vol. 33, no. 3, 1 February 2012 (2012-02-01), pages 319 - 330, XP028122441, ISSN: 0167-8655, [retrieved on 20111115], DOI: 10.1016/J.PATREC.2011.11.001 * |
ZOMET, A.; RAV-ACHA, A.; PELEG, S., PROC. IEEE CVPR, 2001, pages 645 - 650 |
Also Published As
Publication number | Publication date |
---|---|
GB201512290D0 (en) | 2015-08-19 |
GB2528179A (en) | 2016-01-13 |
GB201300637D0 (en) | 2013-02-27 |
GB2528179B (en) | 2016-12-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14703160 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 1512290 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20140114 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1512290.6 Country of ref document: GB |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14703160 Country of ref document: EP Kind code of ref document: A1 |