CN110989154B - Reconstruction method for microscopic light field volume imaging and forward process and back projection acquisition method thereof - Google Patents


Info

Publication number
CN110989154B
CN110989154B (application CN201911227287.0A)
Authority
CN
China
Prior art keywords
image
point spread
spread function
light field
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911227287.0A
Other languages
Chinese (zh)
Other versions
CN110989154A (en)
Inventor
毛珩
张光义
陈良怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University
Publication of CN110989154A
Priority to PCT/CN2020/105190 (WO2021098276A1)
Application granted
Publication of CN110989154B
Priority to PCT/CN2020/129083 (WO2021098645A1)
Active legal status (current)
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00 Microscopes
    • G02B21/36 Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365 Control or image processing arrangements for digital or video microscopes
    • G02B21/367 Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Microscopes, Condenser (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a reconstruction method for microscopic light field volume imaging together with its forward process and back projection acquisition methods, the method comprising the following steps: step 1, decomposing a 3D volume image into layer images, wherein each layer image consists of a plurality of 2D sub-images whose pixel values are zero everywhere except at coordinate (i, j); step 2, extracting the pixel at coordinate (i, j) of each layer image, rearranging these pixels into single-channel images, and stacking all the single-channel images into a multi-channel image; step 3, rearranging the 5D point spread function of the light field microscope into a 4D convolution kernel; step 4, taking the multi-channel image obtained in step 2 as the input of a convolutional neural network, taking the 4D convolution kernel obtained in step 3 as the network weights, convolving the multi-channel image with the 4D convolution kernel, and outputting a multi-channel image; and step 5, rearranging the multi-channel image output in step 4 into a light field image. The method greatly increases the speed of the convolution computation and hence the speed of image reconstruction.

Description

Reconstruction method for microscopic light field volume imaging and forward process and back projection acquisition method thereof
Technical Field
The invention relates to the technical field of light field microscope image reconstruction, and in particular to a reconstruction method for microscopic light field volume imaging and its forward process and back projection acquisition methods.
Background
The light field microscope is a fast imaging modality: a traditional fluorescence microscope can be converted into a light field microscope (as shown in fig. 1) by adding a microlens array at the image plane. Conventional microscopes can only perform 2D imaging, whereas a light field microscope performs 3D imaging: it captures a light field image, from which the 3D distribution of the object can be reconstructed.
Fig. 1 shows the imaging principle of a typical microscope, the imaging principle of a light field microscope, and a light field image of a zebrafish captured by a light field microscope. Note that the image taken by the light field microscope has a characteristic pattern of circles, which is the effect of the microlenses.
In general, the positive process of light field imaging can be expressed as:
f=Hg
where f is the captured light field image and g is the 3D volume image to be reconstructed; for ease of description both are written as one-dimensional vectors. H is the system matrix that relates the 3D volume image g to the light field image f, from which g is reconstructed. Different reconstruction algorithms differ in their details, but all of them need to compute Hg and H^T f, that is, the forward process and the back projection of computational imaging, and the two computations are similar.
Computing the forward process Hg mainly involves convolving the volume image g with the corresponding point spread function (PSF). The point spread function of a light field camera is spatially varying: only points that lie in the same layer and at the same position behind their microlenses share the same point spread function. As shown in fig. 2, if there are N × N pixels behind each microlens and the 3D volume to be reconstructed has S layers (2a in fig. 2), each layer must be decomposed into N × N sub-layers, which are then convolved with their corresponding point spread functions. One evaluation of the forward process Hg therefore requires N × N × S 2D convolutions, and the light field image f is obtained by accumulating the convolved images.
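For concreteness, a minimal Python sketch of this baseline forward computation follows; the array names and the 5D PSF indexing are illustrative assumptions, not taken from the patent:

```python
import numpy as np
from scipy.signal import fftconvolve

# volume: (S, M2, M2) 3D volume image g; psf5d: (M1, M1, N, N, S) point
# spread functions. One evaluation of Hg = N*N*S 2D convolutions, accumulated.
def forward_naive(volume, psf5d, N):
    S, M2, _ = volume.shape
    f = np.zeros((M2, M2))
    for s in range(S):
        for i in range(N):
            for j in range(N):
                # keep only the pixels at sub-position (i, j) behind each microlens
                layer = np.zeros((M2, M2))
                layer[i::N, j::N] = volume[s, i::N, j::N]
                f += fftconvolve(layer, psf5d[:, :, i, j, s], mode="same")
    return f
```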
Disclosure of Invention
It is an object of the present invention to provide a reconstruction method for microscopic light field volume imaging, and forward process and back projection acquisition methods thereof, that overcome or at least alleviate at least one of the above-mentioned drawbacks of the prior art.
In order to achieve the above object, the present invention provides a forward process acquisition method in a reconstruction method of microscopic light field volume imaging, the method comprising:
Step 1, decompose the 3D volume image (M2, M2, S), of size M2 × M2 and with S layers, into N × N × S layer images of size M2 × M2, wherein each layer image is composed of a plurality of 2D sub-images of size N × N whose pixel values are zero everywhere except at coordinate (i, j), with 1 ≤ i ≤ N and 1 ≤ j ≤ N;
Step 2, extract the pixel at coordinate (i, j) of each layer image and rearrange these pixels into a single-channel image of size (M2/N, M2/N); there are N × N × S single-channel images in total, and all the single-channel images are stacked into a multi-channel image of size (M2/N, M2/N, N × N × S);
Step 3, rearrange the 5D point spread function (M1, M1, N, N, S) of the light field microscope into a 4D convolution kernel of size (M1/N, M1/N, N × N × S, N × N);
Step 4, take the multi-channel image obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel obtained in step 3 as the weights of the convolutional neural network; convolve the multi-channel image with the 4D convolution kernel and output a multi-channel image of size (M2/N, M2/N, N × N);
Step 5, rearrange the multi-channel image output in step 4 into a light field image.
Further, step 3 specifically includes:
Step 31, determine whether M1 in the 5D point spread function (M1, M1, N, N, S) is divisible by N; if so, proceed to step 32; otherwise, proceed to step 33;
Step 32, traverse all 2D point spread functions contained in the 5D point spread function, convert each 2D point spread function into a 3D vector, and stack the 3D vectors into a 4D convolution kernel of size (M1/N, M1/N, N × N × S, N × N);
Step 33, symmetrically pad zeros around each 2D point spread function and return to step 31.
Further, selecting a 2D point spread function contained in the 5D point spread function in step 32 includes:
Step 321, shift the 2D point spread function by a first preset value in the row direction and by a second preset value in the column direction to obtain an aligned 2D point spread function;
Step 322, starting from a preset pixel on the aligned 2D point spread function obtained in step 321, extract one pixel every N pixels along the row and column directions of the aligned 2D point spread function, the extracted pixels forming a small convolution kernel; the preset starting pixel of the first extraction has coordinate (1, 1) on the aligned 2D point spread function, and all starting pixels whose row and column coordinates are not greater than N are traversed in turn, yielding N × N small convolution kernels;
Step 323, stack the N × N small convolution kernels obtained in step 322 into a 3D vector of dimension (M1/N, M1/N, N × N).
Further, in step 321, the first preset value is set to i - (N+1)/2 and the second preset value is set to j - (N+1)/2.
The invention also provides a back projection acquisition method in the reconstruction method of the microscopic light field volume imaging, which comprises the following steps:
Step 1, decompose the light field image (M2, M2, 1), of size M2 × M2 and with a single layer, into N × N layer images of size M2 × M2, wherein each layer image is composed of a plurality of 2D sub-images of size N × N whose pixel values are zero everywhere except at coordinate (i, j), with 1 ≤ i ≤ N and 1 ≤ j ≤ N;
Step 2, extract the pixel at coordinate (i, j) of each layer image and rearrange these pixels into a single-channel image of size (M2/N, M2/N); there are N × N single-channel images in total, and all the single-channel images are stacked into a multi-channel image of size (M2/N, M2/N, N × N);
Step 3, rearrange the 5D point spread function (M1, M1, N, N, S) of the light field microscope into a 4D convolution kernel of size (M1/N, M1/N, N × N × S, N × N);
Step 4, rotate the 4D convolution kernel of step 3 and exchange its dimensions;
step 5, the multichannel image obtained in the step 2 is processed
Figure BDA0002302588580000043
As the network input of the convolutional neural network, the 4D convolution kernel obtained in the step 4 is used
Figure BDA0002302588580000044
As weights for the convolutional neural network, the 4D convolutional kernel is employed
Figure BDA0002302588580000045
For the multi-channel image
Figure BDA0002302588580000046
Performing convolution and outputting multi-channel image
Figure BDA0002302588580000047
Step 6, rearrange the multi-channel image output in step 5 to obtain a 3D volume image.
Further, step 3 specifically includes:
Step 31, determine whether M1 in the 5D point spread function (M1, M1, N, N, S) is divisible by N; if so, proceed to step 32; otherwise, proceed to step 33;
Step 32, traverse all 2D point spread functions contained in the 5D point spread function, convert each 2D point spread function into a 3D vector, and stack the 3D vectors into a 4D convolution kernel of size (M1/N, M1/N, N × N × S, N × N);
Step 33, symmetrically pad zeros around each 2D point spread function and return to step 31.
Further, selecting a 2D point spread function contained in the 5D point spread function in step 32 includes:
Step 321, shift the 2D point spread function by a first preset value in the row direction and by a second preset value in the column direction to obtain an aligned 2D point spread function;
Step 322, starting from a preset pixel on the aligned 2D point spread function obtained in step 321, extract one pixel every N pixels along the row and column directions of the aligned 2D point spread function, the extracted pixels forming a small convolution kernel; the preset starting pixel of the first extraction has coordinate (1, 1) on the aligned 2D point spread function, and all starting pixels whose row and column coordinates are not greater than N are traversed in turn, yielding N × N small convolution kernels;
Step 323, stack the N × N small convolution kernels obtained in step 322 into a 3D vector of dimension (M1/N, M1/N, N × N).
Further, in step 321, the first preset value is set to i - (N+1)/2 and the second preset value is set to j - (N+1)/2.
The invention also provides a reconstruction method for microscopic light field volume imaging, which comprises the above methods.
Since the invention first decomposes the 3D volume image (M2, M2, S) into N × N × S layer images, or the light field image (M2, M2, 1) into N × N layer images, and extracts only the pixels at non-zero positions without computing the pixels at zero positions, each single-channel image has size (M2/N, M2/N); the 2D point spread functions are rearranged into N × N small convolution kernels, which are convolved separately and then inversely rearranged. Replacing the direct convolution with the full 2D point spread function used in the prior art in this way greatly increases the speed of the convolution computation and hence the speed of image reconstruction.
Drawings
Fig. 1 is a schematic diagram of the imaging principle of a typical prior-art microscope, a schematic diagram of the imaging principle of a light field microscope, and a captured light field picture;
FIG. 2 is a schematic diagram illustrating the calculation principle of the forward imaging process of a light field camera in the prior art;
FIG. 3 is a schematic diagram illustrating a reconstruction method for microscopic light field imaging according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating the rearrangement of a 5D point spread function into a 4D convolution kernel according to an embodiment of the present invention;
fig. 5 is a flowchart of the forward process acquisition method in the reconstruction method of microscopic light field volume imaging according to an embodiment of the present invention;
fig. 6 is a flowchart of a back projection acquisition method in the reconstruction method of the microscopic light field volume imaging according to the embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
As shown in fig. 2, fig. 3 and fig. 5, the forward process acquisition method in the reconstruction method of microscopic light field volume imaging provided in this embodiment obtains the light field image f from a known 3D volume image g, and the method comprises:
Step 1, the 3D volume image (M2, M2, S), of size M2 × M2 and with S layers, is decomposed into N × N × S layer images, each of size M2 × M2, wherein each layer image is composed of a plurality of 2D sub-images c of size N × N whose pixel values are 0 everywhere except at the pixel d at coordinate (i, j), with 1 ≤ i ≤ N and 1 ≤ j ≤ N.
Step 2, the pixel d at coordinate (i, j) of each layer image is extracted and rearranged into a single-channel image h of size (M2/N, M2/N). There are N × N × S such single-channel images, and all of them are stacked into a multi-channel image of size (M2/N, M2/N, N × N × S).
With reference to fig. 3, a layer image obtained in step 1 is shown at the upper left of fig. 3, with size M2 × M2, and a single-channel image h is shown at the lower left. This embodiment extracts only the pixels at the non-zero position (i, j) and does not compute the pixels at zero positions, so each single-channel image has size (M2/N, M2/N), which provides a convenient basis for fast reconstruction.
Step 3, the 5D point spread function of the light field microscope is rearranged into a 4D convolution kernel. Specifically, as shown in fig. 2, the 5D point spread function of a light field microscope is a 5D array of dimension (M1, M1, N, N, S), where N is the number of discrete pixels in the row or column direction behind each microlens, S is the number of layers of the 3D volume image, and M1 is the maximum number of pixels over which the point spread function can spread in the row or column direction. The resulting 4D convolution kernel has size (M1/N, M1/N, N × N × S, N × N).
Step 4, the multi-channel image obtained in step 2 is taken as the input of a convolutional neural network, and the 4D convolution kernel obtained in step 3 is taken as the network weights; the multi-channel image is convolved with the 4D convolution kernel, and a multi-channel image of size (M2/N, M2/N, N × N) is output. When the convolutional neural network is used, the edge padding is set to (M1/N - 1)/2 and the convolution stride is set to 1; the convolutional neural network can be implemented with existing software such as MATLAB, TensorFlow, or PyTorch.
Step 5, the multi-channel image output in step 4 is rearranged into a light field image; the rearrangement can be implemented with existing methods.
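For illustration, a minimal PyTorch sketch of steps 1 to 5 follows. It assumes M1 and M2 are divisible by N, PyTorch's (out_channels, in_channels, h, w) weight layout, an odd kernel side M1/N, and a kernel whose channel ordering matches that produced by F.pixel_unshuffle; none of these conventions are fixed by the patent text.

```python
import torch
import torch.nn.functional as F

# volume: (S, M2, M2) tensor; kernel: (N*N, N*N*S, M1/N, M1/N) tensor whose
# channel ordering matches F.pixel_unshuffle (an assumption of this sketch).
def light_field_forward(volume, kernel, N):
    # Steps 1-2: rearrange each N x N block into channels, giving a
    # multi-channel image of shape (1, N*N*S, M2/N, M2/N).
    multi = F.pixel_unshuffle(volume.unsqueeze(0), N)
    # Step 4: a single 2D convolution, padding (M1/N - 1)/2, stride 1.
    k = kernel.shape[-1]
    out = F.conv2d(multi, kernel, padding=(k - 1) // 2, stride=1)
    # Step 5: rearrange the N*N output channels back into the light field image.
    return F.pixel_shuffle(out, N)[0, 0]  # (M2, M2)
```

Framing the whole forward process as one conv2d call is what lets standard deep learning toolkits, and their GPU kernels, do the heavy lifting instead of N × N × S separate 2D convolutions.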
This embodiment first decomposes the 3D volume image into N × N × S layer images and extracts only the pixels at non-zero positions, without computing the pixels at zero positions, so that each single-channel image has size (M2/N, M2/N); the 2D point spread functions are rearranged into N × N small convolution kernels, which are convolved separately and then inversely rearranged. Replacing the direct convolution with the full 2D point spread function used in the prior art in this way greatly increases the convolution speed and hence the reconstruction speed.
In one embodiment, as shown in fig. 3, step 3 is implemented by the following steps 31 to 33:
Step 31, determine whether M1 in the 5D point spread function (M1, M1, N, N, S) is divisible by N; if so, proceed to step 32; otherwise, proceed to step 33.
Step 32, first select a 2D point spread function e contained in the 5D point spread function and convert it into a 3D vector; then select the next 2D point spread function e, until all 2D point spread functions e contained in the 5D point spread function have been traversed and converted into 3D vectors; finally, stack all the converted 3D vectors into a 4D convolution kernel of size (M1/N, M1/N, N × N × S, N × N).
Step 33, symmetrically pad zeros around each 2D point spread function e and return to step 31.
Of course, step 3 can also be implemented by methods provided in the prior art, which are not listed here.
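A small sketch of the padding branch of step 33, assuming the zeros are split as evenly as possible between the two sides when the deficit is odd (the text does not specify this):

```python
import numpy as np

# Pad a 2D PSF symmetrically with zeros until its side length is divisible by N.
def pad_to_multiple(psf2d, N):
    M1 = psf2d.shape[0]
    extra = (-M1) % N              # pixels missing to reach the next multiple of N
    before, after = extra // 2, extra - extra // 2
    return np.pad(psf2d, ((before, after), (before, after)))
```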
In one embodiment, as shown in fig. 3 and fig. 4, selecting a 2D point spread function e contained in the 5D point spread function in step 32 includes:
in step 321, the first preset value is moved along the row direction of the 2D point spread function e, and the second preset value is moved along the column direction, so as to obtain the aligned 2D point spread function. The first preset value is set to be i- (N +1)/2, and the second preset value is set to be j- (N + 1)/2.
Step 322, starting from a preset pixel on the aligned 2D point spread function obtained in step 321, one pixel is extracted every N pixels along the row and column directions of the aligned 2D point spread function, and the extracted pixels form a small convolution kernel f; the preset starting pixel of the first extraction has coordinate (1, 1) on the aligned 2D point spread function, and all starting pixels whose row and column coordinates are not greater than N are traversed in turn, yielding N × N small convolution kernels f.
Referring to fig. 3, the upper right of fig. 3 shows the aligned 2D point spread function obtained in step 321, of size M1 × M1, and the lower right shows the N × N small convolution kernels f, each of size (M1/N) × (M1/N).
Specifically, in the first extraction the pixel at coordinate (1, 1) on the aligned 2D point spread function is taken as the starting point and one pixel is extracted every N pixels along the row and column directions of the aligned 2D point spread function; in the second extraction the starting point is the pixel at coordinate (1, 2), and in the third extraction the pixel at coordinate (1, 3), pixels again being extracted every N pixels along the row and column directions; and so on, until the pixel at coordinate (N, N) on the aligned 2D point spread function is taken as the starting point. The pixels extracted in this process form the N × N small convolution kernels.
Step 323, the N × N small convolution kernels f obtained in step 322 are stacked into a 3D vector of dimension (M1/N, M1/N, N × N).
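A sketch of steps 321 to 323 for a single 2D point spread function; the circular shift via np.roll is an assumption, since the text does not say how pixels shifted past the border are handled:

```python
import numpy as np

# psf2d: (M1, M1) with M1 divisible by N; (i, j) is the 1-based sub-pixel position.
def rearrange_psf(psf2d, N, i, j):
    # Step 321: shift by i-(N+1)/2 rows and j-(N+1)/2 columns (N assumed odd).
    aligned = np.roll(psf2d, (i - (N + 1) // 2, j - (N + 1) // 2), axis=(0, 1))
    # Step 322: every N-th pixel from each starting offset (u, v) gives one
    # small kernel of size (M1/N, M1/N); there are N*N of them.
    small = [aligned[u::N, v::N] for u in range(N) for v in range(N)]
    # Step 323: stack into a 3D array of shape (M1/N, M1/N, N*N).
    return np.stack(small, axis=-1)
```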
As shown in fig. 6, the present invention also provides a back projection acquisition method in the reconstruction method of microscopic light field volume imaging, which obtains the 3D volume image g from a known light field image f, the method comprising:
Step 1, the light field image (M2, M2, 1), of size M2 × M2 and with a single layer, is decomposed into N × N layer images of size M2 × M2, wherein each layer image is composed of a plurality of 2D sub-images of size N × N whose pixel values are zero everywhere except at coordinate (i, j), with 1 ≤ i ≤ N and 1 ≤ j ≤ N.
Step 2, the pixel at coordinate (i, j) of each layer image is extracted and rearranged into a single-channel image of size (M2/N, M2/N). There are N × N single-channel images in total, and all of them are stacked into a multi-channel image of size (M2/N, M2/N, N × N).
Step 3, the 5D point spread function (M1, M1, N, N, S) of the light field microscope is rearranged into a 4D convolution kernel of size (M1/N, M1/N, N × N × S, N × N); the 5D point spread function has been introduced above and is not described again here.
Step 4, the 4D convolution kernel of step 3 is rotated and its dimensions are exchanged. Here "rotate and exchange dimensions" means rotating each 2D kernel image (the first two dimensions) by 180 degrees and then swapping the third and fourth dimensions.
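In PyTorch terms, assuming the (out_channels, in_channels, h, w) weight layout, this operation might look like:

```python
import torch

def rotate_and_swap(kernel):
    # rotate each 2D kernel image by 180 degrees, then swap the channel axes
    return torch.flip(kernel, dims=(-2, -1)).transpose(0, 1).contiguous()
```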
Step 5, the multi-channel image obtained in step 2 is taken as the input of a convolutional neural network, and the 4D convolution kernel obtained in step 4 is taken as the network weights; the multi-channel image is convolved with this 4D convolution kernel, and a multi-channel image of size (M2/N, M2/N, N × N × S) is output.
When the convolutional neural network is used, the edge padding is set to (M1/N - 1)/2 and the convolution stride is set to 1. The convolutional neural network can be implemented with existing software such as MATLAB, TensorFlow, or PyTorch.
In this step, the single-channel images obtained in step 2 are converted into the input of a convolutional neural network (CNN) and the 4D convolution kernel obtained in step 3 is used as the network weights, so that an existing neural network toolkit can be used for the computation; the network output is then rearranged back into the original image space to obtain the final output.
Step 6, the multi-channel image output in step 5 is rearranged to obtain the 3D volume image; the rearrangement can be implemented with existing methods.
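Putting steps 1 to 6 together, a back projection sketch under the same layout assumptions as the forward sketch above, additionally assuming the N × N × S channel axis is ordered with the layer index s as the slow index:

```python
import torch
import torch.nn.functional as F

# lf_image: (M2, M2) light field image f; kernel: the forward 4D kernel of
# shape (N*N, N*N*S, M1/N, M1/N) in PyTorch's (out, in, h, w) layout.
def light_field_backward(lf_image, kernel, N, S):
    # Steps 1-2: decompose the light field image into N*N channels.
    multi = F.pixel_unshuffle(lf_image[None, None], N)        # (1, N*N, M2/N, M2/N)
    # Step 4: rotate each 2D kernel 180 degrees, swap the in/out channel axes.
    k_bp = torch.flip(kernel, dims=(-2, -1)).transpose(0, 1)  # (N*N*S, N*N, h, w)
    # Step 5: convolve with padding (M1/N - 1)/2 and stride 1.
    h = k_bp.shape[-1]
    out = F.conv2d(multi, k_bp, padding=(h - 1) // 2)         # (1, N*N*S, M2/N, M2/N)
    # Step 6: rearrange the channels back into the 3D volume (S, M2, M2).
    out = out.reshape(S, N * N, out.shape[-2], out.shape[-1])
    return F.pixel_shuffle(out, N).squeeze(1)                 # (S, M2, M2)
```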
This embodiment first decomposes the light field image into N × N layer images and extracts only the pixels at non-zero positions, without computing the pixels at zero positions, so that each single-channel image has size (M2/N, M2/N); the 2D point spread functions are rearranged into N × N small convolution kernels, which are convolved separately and then inversely rearranged. Replacing the direct convolution with the full 2D point spread function used in the prior art in this way greatly increases the convolution speed and hence the reconstruction speed.
Note that, similarly to the above embodiment, each layer image from step 1 (of size M2 × M2, N × N × S images in total) may instead be shifted by -(i - 1) in the row direction and -(j - 1) in the column direction to align the layer images. The method of step 2 is then similar to the above embodiment and yields a plurality of single-channel images of size (M2, M2), which are stacked into a multi-channel image of size (M2, M2, N × N × S). In step 3, each 2D point spread function contained in the 5D point spread function is shifted by i - 1 in the row direction and j - 1 in the column direction, yielding a 4D convolution kernel of size (M1, M1, N × N × S, 1). The operation of step 4 is the same as in the above embodiment, with the edge padding of the neural network set to (M1 - 1)/2 and the convolution stride set to N. Step 5 outputs a light field image of shape (M2, M2, 1).
In one embodiment, as shown in fig. 3, step 3 is implemented by the following steps 31 to 33:
Step 31, determine whether M1 in the 5D point spread function (M1, M1, N, N, S) is divisible by N; if so, proceed to step 32; otherwise, proceed to step 33.
Step 32, first select a 2D point spread function e contained in the 5D point spread function and convert it into a 3D vector; then select the next 2D point spread function e, until all 2D point spread functions e contained in the 5D point spread function have been traversed and converted into 3D vectors; finally, stack all the converted 3D vectors into a 4D convolution kernel of size (M1/N, M1/N, N × N × S, N × N).
Step 33, symmetrically pad zeros around each 2D point spread function e and return to step 31.
Of course, the methods provided in the prior art can also be used to implement step 3, which are not listed here.
In one embodiment, as shown in fig. 3 and fig. 4, selecting a 2D point spread function e contained in the 5D point spread function in step 32 includes:
in step 321, the first preset value is moved along the row direction of the 2D point spread function e, and the second preset value is moved along the column direction, so as to obtain the aligned 2D point spread function. The first preset value is set to be i- (N +1)/2, i is larger than or equal to 1 and is smaller than or equal to N, the second preset value is set to be j- (N +1)/2, and j is larger than or equal to 1 and is smaller than or equal to N.
Step 322, starting from a preset pixel on the aligned 2D point spread function obtained in step 321, one pixel is extracted every N pixels along the row and column directions of the aligned 2D point spread function, and the extracted pixels form a small convolution kernel f; the preset starting pixel of the first extraction has coordinate (1, 1) on the aligned 2D point spread function, and all starting pixels whose row and column coordinates are not greater than N are traversed in turn, yielding N × N small convolution kernels f.
Referring to fig. 3, the upper right of fig. 3 shows the aligned 2D point spread function obtained in step 321, of size M1 × M1, and the lower right shows the N × N small convolution kernels f, each of size (M1/N) × (M1/N).
Specifically, in the first extraction the pixel at coordinate (1, 1) on the aligned 2D point spread function is taken as the starting point and one pixel is extracted every N pixels along the row and column directions of the aligned 2D point spread function; in the second extraction the starting point is the pixel at coordinate (1, 2), and in the third extraction the pixel at coordinate (1, 3), pixels again being extracted every N pixels along the row and column directions; and so on, until the pixel at coordinate (N, N) on the aligned 2D point spread function is taken as the starting point. The pixels extracted in this process form the N × N small convolution kernels.
Step 323, the N × N small convolution kernels f obtained in step 322 are stacked into a 3D vector of dimension (M1/N, M1/N, N × N).
The layer images obtained by decomposing the 3D volume image have size M2 × M2, and the 2D point spread functions contained in the corresponding 5D point spread function of the light field microscope have size M1 × M1. Computing the convolution directly with existing methods has complexity O(M1^2 M2^2); using an existing FFT implementation, the complexity can be reduced to O(M2^2 log M2).
Observing the structure of the convolution, one finds that the images being convolved are 0 everywhere except at specific positions. The invention therefore extracts those pixels and rearranges the point spread function: the 2D point spread function of size M1 × M1 is rearranged into N × N small convolution kernels, each of size (M1/N) × (M1/N), so the complexity of computing the convolution directly is reduced to O(M1^2 M2^2 / N^2).
Meanwhile, thanks to the development of deep learning, many software packages exist that compute convolutions efficiently; combined with such packages the convolution can be evaluated very efficiently, and experiments show that the method greatly increases the computation speed compared with the original FFT-based algorithm.
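As a quick sanity check of these complexity figures: the ratio between the plain direct convolution, O(M1^2 M2^2), and the rearranged direct convolution, O(M1^2 M2^2 / N^2), is N^2. With a hypothetical N = 15 (not a value given in this patent), the rearrangement saves a factor of 225 in multiply-accumulate operations, before any further gains from optimized convolution libraries.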
Table 1 below compares the reconstruction method provided by the invention with the existing method in terms of time consumption and acceleration factor when reconstructing images:
TABLE 1
Image size (pixels)        600×600    900×900    1200×1200    1500×1500    1800×1800
FFT time (seconds)         2135.6     2149.0     2185.5       2228.6       2825.9
Invention time (seconds)   35.3       90.6       122.0        163.0        227.2
Acceleration factor        60.0×      23.7×      17.9×        13.7×        12.4×
As table 1 shows, comparing the time consumption of the reconstruction method provided by the invention with that of the existing method (both based on the algorithm of Prevedel, R., et al., "Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy," Nat Methods, 2014, 11(7): 727-730), the reconstruction method provided by the invention achieves a 12- to 60-fold improvement; for example, when the region of interest is 900 × 900, the reconstruction time for 1000 images can be reduced from 3 weeks to 1 day, greatly reducing the time cost.
The invention also provides a reconstruction method for microscopic light field volume imaging, which comprises the methods of the above embodiments.
Finally, it should be pointed out that the above examples are only intended to illustrate the technical solutions of the present invention, not to limit them. Those of ordinary skill in the art will understand that modifications may be made to the technical solutions described in the foregoing embodiments, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A forward process acquisition method in a reconstruction method of microscopic light field volume imaging, characterized by comprising the following steps:
step 1, decompose the 3D volume image (M2, M2, S), of size M2 × M2 and with S layers, into N × N × S layer images of size M2 × M2, wherein each layer image is composed of a plurality of 2D sub-images of size N × N whose pixel values are zero everywhere except at coordinate (i, j), with 1 ≤ i ≤ N and 1 ≤ j ≤ N;
step 2, extract the pixel at coordinate (i, j) of each layer image and rearrange these pixels into a single-channel image of size (M2/N, M2/N); there are N × N × S single-channel images in total, and all the single-channel images are stacked into a multi-channel image of size (M2/N, M2/N, N × N × S);
Step 3, 5D point spread function (M) of the light field microscope1,M1N, S) is rearranged as a 4D convolution kernel
Figure FDA0002498316760000013
M1The maximum pixel number which can be diffused by the point diffusion function in the row direction or the column direction is taken as the pixel number;
step 4, take the multi-channel image obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel obtained in step 3 as the weights of the convolutional neural network; convolve the multi-channel image with the 4D convolution kernel and output a multi-channel image of size (M2/N, M2/N, N × N);
step 5, rearrange the multi-channel image output in step 4 into a light field image.
2. The forward process acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 1, wherein step 3 specifically comprises:
step 31, determine whether M1 in the 5D point spread function (M1, M1, N, N, S) is divisible by N; if so, proceed to step 32; otherwise, proceed to step 33;
step 32, traverse all 2D point spread functions contained in the 5D point spread function, convert each 2D point spread function into a 3D vector, and stack the 3D vectors into a 4D convolution kernel of size (M1/N, M1/N, N × N × S, N × N);
step 33, symmetrically pad zeros around each 2D point spread function and return to step 31.
3. The forward process acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 2, wherein selecting a 2D point spread function contained in the 5D point spread function in step 32 comprises:
step 321, shift the 2D point spread function by a first preset value in the row direction and by a second preset value in the column direction to obtain an aligned 2D point spread function;
step 322, starting from a preset pixel on the aligned 2D point spread function obtained in step 321, extract one pixel every N pixels along the row and column directions of the aligned 2D point spread function, the extracted pixels forming a small convolution kernel; the preset starting pixel of the first extraction has coordinate (1, 1) on the aligned 2D point spread function, and all starting pixels whose row and column coordinates are not greater than N are traversed in turn, yielding N × N small convolution kernels;
step 323, stack the N × N small convolution kernels obtained in step 322 into a 3D vector of dimension (M1/N, M1/N, N × N).
4. The forward process acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 3, wherein in step 321 the first preset value is set to i - (N+1)/2 and the second preset value is set to j - (N+1)/2.
5. A back projection acquisition method in a reconstruction method of microscopic light field volume imaging, characterized by comprising the following steps:
step 1, decompose the light field image (M2, M2, 1), of size M2 × M2 and with a single layer, into N × N layer images of size M2 × M2, wherein each layer image is composed of a plurality of 2D sub-images of size N × N whose pixel values are zero everywhere except at coordinate (i, j), with 1 ≤ i ≤ N and 1 ≤ j ≤ N;
step 2, extract the pixel at coordinate (i, j) of each layer image and rearrange these pixels into a single-channel image of size (M2/N, M2/N); there are N × N single-channel images in total, and all the single-channel images are stacked into a multi-channel image of size (M2/N, M2/N, N × N);
Step 3, 5D point spread function (M) of the light field microscope1,M1N, S) is rearranged as a 4D convolution kernel
Figure FDA0002498316760000034
M1The maximum pixel number which can be diffused by the point diffusion function in the row direction or the column direction is taken as the pixel number;
step 4, rotate the 4D convolution kernel of step 3 and exchange its dimensions;
step 5, take the multi-channel image obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel obtained in step 4 as the weights of the convolutional neural network; convolve the multi-channel image with this 4D convolution kernel and output a multi-channel image of size (M2/N, M2/N, N × N × S);
Step 6, the multi-channel image output in the step 5 is processed
Figure FDA00024983167600000311
And rearranging to obtain a 3D volume image.
6. The back projection acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 5, wherein step 3 specifically comprises:
step 31, determine whether M1 in the 5D point spread function (M1, M1, N, N, S) is divisible by N; if so, proceed to step 32; otherwise, proceed to step 33;
step 32, traverse all 2D point spread functions contained in the 5D point spread function, convert each 2D point spread function into a 3D vector, and stack the 3D vectors into a 4D convolution kernel of size (M1/N, M1/N, N × N × S, N × N);
step 33, symmetrically pad zeros around each 2D point spread function and return to step 31.
7. The back projection acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 6, wherein selecting a 2D point spread function contained in the 5D point spread function in step 32 comprises:
step 321, shift the 2D point spread function by a first preset value in the row direction and by a second preset value in the column direction to obtain an aligned 2D point spread function;
step 322, starting from a preset pixel on the aligned 2D point spread function obtained in step 321, extract one pixel every N pixels along the row and column directions of the aligned 2D point spread function, the extracted pixels forming a small convolution kernel; the preset starting pixel of the first extraction has coordinate (1, 1) on the aligned 2D point spread function, and all starting pixels whose row and column coordinates are not greater than N are traversed in turn, yielding N × N small convolution kernels;
step 323, stack the N × N small convolution kernels obtained in step 322 into a 3D vector of dimension (M1/N, M1/N, N × N).
8. The back projection acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 7, wherein in step 321 the first preset value is set to i - (N+1)/2 and the second preset value is set to j - (N+1)/2.
9. A reconstruction method for microscopic light field volume imaging comprising the method of any one of claims 1 to 8.
CN201911227287.0A 2019-11-22 2019-12-04 Reconstruction method for microscopic light field volume imaging and forward process and back projection acquisition method thereof Active CN110989154B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/105190 WO2021098276A1 (en) 2019-11-22 2020-07-28 Micro-light field volume imaging reconstruction method, forward process obtaining method therein and back projection obtaining method therein
PCT/CN2020/129083 WO2021098645A1 (en) 2019-11-22 2020-11-16 Reconstruction method for microscopic light field stereoscopic imaging, and forward and backward acquisition method therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911156679 2019-11-22
CN2019111566792 2019-11-22

Publications (2)

Publication Number Publication Date
CN110989154A CN110989154A (en) 2020-04-10
CN110989154B true CN110989154B (en) 2020-07-31

Family

ID=70090028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911227287.0A Active CN110989154B (en) 2019-11-22 2019-12-04 Reconstruction method for microscopic light field volume imaging and forward process and back projection acquisition method thereof

Country Status (2)

Country Link
CN (1) CN110989154B (en)
WO (2) WO2021098276A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110989154B (en) * 2019-11-22 2020-07-31 Peking University Reconstruction method for microscopic light field volume imaging and forward process and back projection acquisition method thereof
CN113971722B (en) * 2021-12-23 2022-05-17 清华大学 Fourier domain optical field deconvolution method and device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8039776B2 (en) * 2008-05-05 2011-10-18 California Institute Of Technology Quantitative differential interference contrast (DIC) microscopy and photography based on wavefront sensors
WO2015157769A1 (en) * 2014-04-11 2015-10-15 The Regents Of The University Of Colorado, A Body Corporate Scanning imaging for encoded psf identification and light field imaging
CN106199941A (en) * 2016-08-30 2016-12-07 浙江大学 A kind of shift frequency light field microscope and three-dimensional super-resolution microcosmic display packing
CN106767534B (en) * 2016-12-30 2018-12-11 北京理工大学 Stereomicroscopy system and mating 3 d shape high score reconstructing method based on FPM
CN108364342B (en) * 2017-01-26 2021-06-18 中国科学院脑科学与智能技术卓越创新中心 Light field microscopic system and three-dimensional information reconstruction method and device thereof
CN109615651B (en) * 2019-01-29 2022-05-20 清华大学 Three-dimensional microscopic imaging method and system based on light field microscopic system
CN110047430B (en) * 2019-04-26 2020-11-06 京东方科技集团股份有限公司 Light field data reconstruction method, light field data reconstruction device and light field display device
CN110441271B (en) * 2019-07-15 2020-08-28 清华大学 Light field high-resolution deconvolution method and system based on convolutional neural network
CN110989154B (en) * 2019-11-22 2020-07-31 Peking University Reconstruction method for microscopic light field volume imaging and forward process and back projection acquisition method thereof

Also Published As

Publication number Publication date
WO2021098276A1 (en) 2021-05-27
WO2021098645A1 (en) 2021-05-27
CN110989154A (en) 2020-04-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant