CN114581910A - Micro-needle hole view noise reduction method combining stereo matching and deep learning - Google Patents

Micro-needle hole view noise reduction method combining stereo matching and deep learning Download PDF

Info

Publication number
CN114581910A
CN114581910A CN202210483567.3A CN202210483567A CN114581910A CN 114581910 A CN114581910 A CN 114581910A CN 202210483567 A CN202210483567 A CN 202210483567A CN 114581910 A CN114581910 A CN 114581910A
Authority
CN
China
Prior art keywords
image
view
denoising
image block
stereo matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210483567.3A
Other languages
Chinese (zh)
Other versions
CN114581910B (en
Inventor
黎柳欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202210483567.3A priority Critical patent/CN114581910B/en
Publication of CN114581910A publication Critical patent/CN114581910A/en
Application granted granted Critical
Publication of CN114581910B publication Critical patent/CN114581910B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a micro-needle hole view noise reduction method combining stereo matching and deep learning, belonging to the technical field of image noise reduction methods; the method specifically comprises the following steps: s1, stereo matching: inputting the micro-needle hole views captured under different visual angles in the same scene and the corresponding central view into a stereo matching module together for stereo matching work to obtain the view difference; s2, image block pairing: searching all image blocks in each view according to the view difference, calculating the similarity, and acquiring the image blocks matched with the similarity to form a training pair; s3, image training: inputting an image block training pair into a CARE denoising network, and obtaining the network weight of the image block after training; s4, denoising result prediction: and processing the test image based on the network weight obtained in the step S3 to predict the denoising result of the image. Compared with the existing design on the market, the method has the advantages that the denoising speed for the microscopic pinhole view is higher, and the denoising effect is better.

Description

Micro-needle hole view noise reduction method combining stereo matching and deep learning
Technical Field
The invention relates to the technical field of image noise reduction methods, in particular to a micro-needle hole view noise reduction method combining stereo matching and deep learning.
Background
In recent years, deep learning has been widely used in various fields including the image processing field due to its simple and practical characteristics. Deep learning has been incorporated into research directions of super-resolution, noise reduction, segmentation, target recognition, and the like. In the noise reduction direction, researchers propose to use various neural networks, such as CNN, Dncnn, FFDnet, CBDnet, etc., to achieve image denoising, and most denoising methods using these networks require an external clean image training set. Such as: a classical de-noised convolutional neural network is content-aware image recovery (CARE), which applies the common convolutional neural network structure to the image recovery task. The article demonstrates in various imaging scenarios: the trained CARE network produced results that were previously unavailable. This means that the application of CARE in biological images exceeds the design space constraints and computational imaging by machine learning drives the limit of fluorescence microscope resolution possible. In contrast, denoising microscopic images has become a major problem in denoising because it is difficult to obtain clean images in a microscope. The earliest, Noise2Noise (N2N), proposed by j.lehtinen et al, trained the network using pairs of corrupted images with the same scene but with independent sampling Noise.
A view of a microneedle well is captured by a light field microscope incorporating a microlens array, and such microscopic images are mostly used for three-dimensional reconstruction of a specimen. The structure of a light field microscope (light field microscope) is shown in FIG. 1 (a). The microlens array is placed in the original image plane of the microscope, and each lens separately images the specimen from a different angle, thereby obtaining angular information of the object. Placing the camera detector at the focal plane of the lenslets results in an array of speckle images, the angular resolution of which depends on the number of lenses, and the spatial resolution of which depends on the number of pixels behind each lens. As shown in fig. 1 (b). FIG. 1(c) showsThe sampling process of the light field microscope on the sample is shown. The specimen is projected onto the micro-lens array by using oblique parallel light beams, the leftmost pixel of the small lens on the focusing plane is corresponding to the oblique parallel projection of the main ray through the specimen, and the blue points are combined into a picture, so that a low-resolution image of the specimen under a certain angle can be obtained. In fact, the above-described image is an image captured by placing a pinhole camera on the focal plane of the microlens, and therefore, it is customary for predecessors to refer to this image as a pinhole view. If N pixels are obtained behind each lenslet, N will eventually be obtained2And (5) stretching pinhole views at different angles. Such special microscope images are different from ordinary microscope images in that they contain angular information of the specimen, and we consider using the angular information to denoise the microscope images. Due to the different positions of the lenses, the noise levels of the views captured by the different lenses are different, and the noise content of the views near the center position is significantly less than that of the edge positions, so the view at the center-most position can be said to be "clean" relative to the other views, and an exemplary view is shown in fig. 2.
However, in real scenes, it is difficult to collect a large number of pairs of damaged images for training. Further, Noise2void (N2V) predicts pixels from surrounding pixels through a learning blind spot network, but the denoising result is still slightly inferior to that of other conventional denoising networks. Next, many researchers have proposed an improvement to N2V. Including improving its loss function, replacing the simple low-efficiency MSE with the a posteriori probability (PN2V) described by a general noise model (histogram); the blind spot network structure is improved, the acceptance domain is limited to four different directions, and the prediction efficiency of the mask is improved (N2S). In the PN2V improved method, the specificity of microscopic image noise is not considered, but a general noise model is adopted, which greatly reduces the denoising effect. Further, PPN2V is proposed, replacing the general noise model in PN2V with a gaussian-poisson joint noise model often used to describe microscopic noise, so that the effect is further improved. The denoising method based on the single noise image has the advantages that the image information is limited, and the denoising effect almost reaches the limit of the image information. There is also a low rank tensor approximation model based on Laplacian Scale Mixture (LSM) modeling to denoise multi-frame image data to effectively utilize multi-dimensional correlation of multi-dimensional data. TID proposes a data-based denoising method to recover a noise image, which is different from the existing denoising algorithm, and searches patches from the noise image or a general database, and a new algorithm searches patches from a database containing related patches and can also denoise multi-view input. Meanwhile, the disparity relationship of multi-view calculation can also be used for denoising, however, under the influence of noise, it is difficult to estimate the correct disparity relationship, and many error values are easily caused, resulting in erroneous similar image block grouping. Furthermore, it can be proven by the prior art that simply measuring similarity by euclidean distance is not reliable for matching of similar blocks, as shown in fig. 3. Most of the non-single image denoising methods show excellent effects and exceed the single image method, but it is undeniable that the existing microscopic pinhole view denoising method on the market still has defects and has a great improvement space, and in order to solve the problems of the existing design, the invention provides a microneedle pinhole view denoising method combining stereo matching and deep learning.
Disclosure of Invention
The invention aims to provide a micro-needle pore view noise reduction method combining stereo matching and deep learning, which aims to further improve the noise reduction speed and the noise reduction effect of the micro-needle pore view, and makes up the defects in the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
a micro needle hole view noise reduction method combining stereo matching and deep learning specifically comprises the following steps:
s1, stereo matching: inputting the micro-needle hole views captured under different visual angles in the same scene and the corresponding central view into a stereo matching module together for stereo matching work to obtain a view difference between the micro-needle hole views and the central view;
s2, image block pairing: searching all image blocks in each view according to the view difference obtained in the step S1, calculating the similarity, and obtaining the image blocks matched with the similarity to form a training pair;
s3, image training: inputting the training pairs of the image blocks obtained in the S2 into a CARE denoising network for training, and obtaining the network weight of the image blocks after training;
s4, denoising result prediction: and processing the test image based on the network weight obtained in the step S3, and predicting the denoising result of the image.
Preferably, the view difference calculation based on the stereo matching algorithm mentioned in S1 involves calculating the matching cost of the principal component item and the edge item, and specifically includes the following steps:
a1, based on the principle component analysis idea, replacing Euclidean distance on the image block of the noise image by the distance between two vectors with the size of C,
Figure 591238DEST_PATH_IMAGE001
wherein C is the number of main components, r2Is the image block size; thus, the first term defining the matching cost calculation is:
Figure 182887DEST_PATH_IMAGE002
(1)
(1) in the formula,P i an image block centered on a pixel i;P X andP C respectively representing image blocks in two noise views of an arbitrary view angle X and a central view angle C;drepresenting the estimated disparity value;
Figure 519191DEST_PATH_IMAGE003
the projection function is used for acquiring the coefficient corresponding to the image block after dimension reduction;
a2, a concept for describing edge information is defined and introduced into fuzzy core calculation, and the concept is specifically defined as:
Figure 103756DEST_PATH_IMAGE004
(2)
(2) in the formula,
Figure 474694DEST_PATH_IMAGE005
respectively representing the horizontal and vertical gradients of the pixel; the denominator is added with 0.5 to prevent large MI response in low texture areas;
a3, further defining an edge descriptor based on the description concept defined in the A2, wherein the edge descriptor is specifically defined as:
Figure 486513DEST_PATH_IMAGE006
(3)
taking the formula (3) as a second term of the matching cost calculation;
a4, simultaneous equations (1) and (3), the complete matching cost calculation function can be defined as:
Figure 795265DEST_PATH_IMAGE007
(4)
(4) in the formula,
Figure 183521DEST_PATH_IMAGE008
is an adjustment parameter for balancing the influence of the principal component item and the edge information item;
a5, calculating the matching cost between the main component item and the edge information item by using the formula (4) obtained in the A4, completing the stereo matching work between the micro-pinhole view and the corresponding central view, and acquiring view difference data.
Preferably, the image block pairs mentioned in S2 specifically include the following:
b1, based on the characteristic that the microneedle hole view is closer to the center position and the image noise content is smaller, finding an image block similar to the reference image block in the center view, and accordingly providing a new similarity function, specifically:
Figure 143387DEST_PATH_IMAGE009
(5)
(5) in the formula,P i 、P j 、P i-d 、P js is a pixeli、j、i-d、jsAn image block centered;P X andP C respectively represent arbitrary viewing anglesXAnd central viewing angleCImage blocks in two noise views;
Figure 60527DEST_PATH_IMAGE010
representing image blocks that are mapped from an arbitrary view to a central view according to structural dependencies. Here, the structural correlation refers to a spatial relationship between image blocks. The meaning of this function is: when a suspected corresponding block of the reference image block is found in the central view, calculating the similarity between the suspected corresponding block and the central corresponding block, and if the suspected corresponding block is confirmed to be similar to the central corresponding block, determining that the reference image block finds the matched central corresponding block;
b2, adding a principal component analysis method on the basis of the similarity function provided in B1, and improving the accuracy of image block similarity measurement of double-view denoising, so that the similarity function (5) provided in B1 is further converted intoS PCA The method specifically comprises the following steps:
Figure 371423DEST_PATH_IMAGE011
(6)
(6) in the formula,
Figure 297791DEST_PATH_IMAGE012
is a projection function;
b3, completing image block pairing work according to the optimized similarity function proposed in the B2, and forming an image block training pair.
Preferably, the CARE denoising network mentioned in S3 is constructed based on the CSBDeep framework, and batch normalization is added before each activation function using the U-Net architecture.
Compared with the prior art, the invention provides a microneedle hole view noise reduction method combining stereo matching and deep learning, which has the following beneficial effects:
(1) the stereo matching of noisy microneedle hole views can be made to achieve a more accurate effect: the invention provides a new edge information descriptor, which is combined with a principal component to be used as a new matching cost PCMI, is more suitable for parallax calculation of a microscopic view, and can enable the result of stereo matching to be more accurate;
(2) so that denoising of the microneedle pore view is no longer limited to single image denoising: the prior microscopic image denoising is limited to single image denoising, and the information contained in a single image is limited, so the denoising effect almost reaches the limit of the single image denoising and the information domain is expanded to a dual-view image by combining the angle information of a view, so the microscopic image denoising breaks the original limit and enters a new field;
(3) the invention combines the stereo matching and the deep learning for the denoising of the micro-needle hole view, and can achieve better denoising effect: the existing denoising method combining stereo matching is based on traditional calculation, and the invention provides the combination of stereo matching and deep learning, so that the denoising process is faster and the denoising effect is better.
Drawings
Fig. 1 is an imaging schematic diagram of a microscopic image used in the background of the present invention, wherein fig. 1(a) shows a light field microscope structural schematic diagram; FIG. 1(b) shows that the spatial resolution depends on the number of pixels behind each lens; FIG. 1(c) shows a light field microscope sampling process of a sample;
FIG. 2 is an illustration of a microneedle hole used in the background of the invention;
fig. 3 is a schematic diagram in the background art of the present invention demonstrating that the method for determining similar blocks by euclidean distance is not robust, and fig. 3 (a) and (b) show two image blocks extracted from an image, wherein the left row is a version of the image block of the right row with noise added thereto; FIG. 3 (a) shows that image blocks are judged to be similar according to Euclidean distance, and FIG. 3 (b) shows that they are not similar;
FIG. 4 is a schematic flow chart of a method for reducing noise of a microneedle hole view by combining stereo matching and deep learning according to the present invention;
fig. 5 is a diagram illustrating the validity of the proposed edge information descriptor in embodiment 2 of the present invention; wherein, fig. 5 (a) shows horizontal gradient information of a noise image, eliminating the influence of noise; FIG. 5 (b) shows vertical gradient information for a noisy image, depicting the intensity of local structures; FIG. 5 (c) shows the processing of a noise image result using the edge information descriptor MI;
FIG. 6 is a schematic diagram demonstrating the effectiveness of the proposed matching cost PCMI in example 2 of the present invention; fig. 6 (a) shows a disparity estimated for a noise image using a matching cost AD (absolute value of pixel difference); fig. 6(b) shows the disparity estimated for a noise image using the matching cost PCMI;
fig. 7 is a schematic diagram of image block matching of dual views in embodiment 2 of the present invention; fig. 7(a) represents a full mapping (mapping of a reference patch (circle) and a similar patch (square) of the left view to the right view according to disparity); FIG. 7(b) shows the mapping of a left view to a right view in "mode", the spatial structure of similar image blocks in the left view as the reference image block being copied into the right view; FIG. 7(c) shows mapping of the left view to the right view in accordance with formula (7) in embodiment 2 while considering the similarity and spatial relationship
FIG. 8 is a graph showing 5 exemplary images representing test sets with different noise levels and the average noise level of the corresponding test sets proposed in embodiment 3 of the present invention;
fig. 9 is a schematic diagram illustrating an influence of the number of principal components on the noise reduction effect in embodiment 3 of the present invention;
fig. 10 is a schematic diagram of qualitative denoising results of the test sets 1 and 5 in embodiment 3 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Example 1:
referring to fig. 4, a method for reducing noise in a micro-needle hole view by combining stereo matching and deep learning specifically includes the following steps:
s1, stereo matching: inputting the micro-needle hole views captured under different visual angles in the same scene and the corresponding central view into a stereo matching module together for stereo matching work to obtain a view difference between the micro-needle hole views and the central view;
the view difference calculation based on the stereo matching algorithm mentioned in S1 relates to the calculation of the matching cost of the main component item and the edge item, and specifically includes the following contents:
a1, replacing Euclidean distance on image block of noise image by distance between two vectors with size C based on principle component analysis idea
Figure 143123DEST_PATH_IMAGE013
Wherein C is the number of principal components, r2Is the image block size; thus, the first term defining the matching cost calculation is:
Figure 496744DEST_PATH_IMAGE002
(1)
(1) in the formula,P i an image block centered on a pixel i;P X andP C respectively representing image blocks in two noise views of an arbitrary view angle X and a central view angle C;drepresenting the estimated disparity value;
Figure 294936DEST_PATH_IMAGE003
the projection function is used for acquiring the coefficient corresponding to the image block after dimension reduction;
a2, a concept for describing edge information is defined and introduced into fuzzy core calculation, and the concept is specifically defined as:
Figure 759415DEST_PATH_IMAGE004
(2)
(2) in the formula,
Figure 428294DEST_PATH_IMAGE005
respectively representing the horizontal and vertical gradients of the pixel; 0.5 is added to the denominator to prevent lowTextured areas produce large MI responses;
a3, further defining an edge descriptor based on the description concept defined in A2, wherein the edge descriptor is specifically defined as:
Figure 969128DEST_PATH_IMAGE006
(3)
taking the formula (3) as a second term of the matching cost calculation;
a4, simultaneous equations (1) and (3), the complete matching cost calculation function can be defined as:
Figure 989036DEST_PATH_IMAGE007
(4)
(4) in the formula,
Figure 257207DEST_PATH_IMAGE008
is an adjustment parameter for balancing the influence of the principal component item and the edge information item;
a5, calculating the matching cost between the main component item and the edge information item by using the formula (4) obtained in A4, completing the stereo matching work between the micro-needle hole view and the corresponding central view, and acquiring view difference data;
s2, image block pairing: searching all image blocks in each view according to the view difference obtained in the S1, calculating the similarity, and obtaining the image blocks matched with the similarity to form a training pair;
the image block pairs mentioned in S2 specifically include the following:
b1, based on the characteristic that the microneedle hole view is closer to the center position and the image noise content is smaller, finding an image block similar to the reference image block in the center view, and accordingly providing a new similarity function, specifically:
Figure 46171DEST_PATH_IMAGE014
(5)
(5) in the formula,P i 、P j 、P i-d 、P js is a pixeli、j、i-d、jsAn image block centered;P X andP C respectively represent arbitrary viewing anglesXAnd central viewing angleCImage blocks in two noise views;
Figure 741595DEST_PATH_IMAGE010
representing image blocks mapped from an arbitrary view to a center view according to structural dependencies. Here, the structural correlation refers to a spatial relationship between image blocks. The meaning of this function is: when a suspected corresponding block of the reference image block is found in the central view, calculating the similarity between the suspected corresponding block and the central corresponding block, and if the suspected corresponding block is confirmed to be similar to the central corresponding block, determining that the reference image block finds the matched central corresponding block;
b2, adding a principal component analysis method on the basis of the similarity function provided in B1, improving the accuracy of the similarity measurement of the image block subjected to double-view denoising, and further converting the similarity function (5) provided in B1S PCA The method specifically comprises the following steps:
Figure 983220DEST_PATH_IMAGE015
(6)
(6) in the formula,
Figure 805814DEST_PATH_IMAGE012
is a projection function;
b3, completing image block pairing work according to the optimized similarity function provided in B2 to form an image block training pair;
s3, image training: inputting the image block training pair obtained in the S2 into a CARE denoising network for training, and obtaining the network weight of the image block after training;
the CARE denoising network mentioned in S3 is constructed and completed based on a CSBDeep framework, a U-Net framework is used, and batch processing normalization is added before each activation function;
s4, denoising result prediction: and processing the test image based on the network weight obtained in the step S3 to predict the denoising result of the image.
In summary, the invention provides a new edge information descriptor, which is combined with the principal component to serve as a new matching cost PCMI, and is more suitable for parallax calculation of a microscopic view, so that the result of stereo matching is more accurate; the prior microscopic image denoising is limited to single image denoising, and the information contained in a single image is limited, so the denoising effect almost reaches the limit of the single image denoising and the information domain is expanded to a dual-view image by combining the angle information of a view, so the microscopic image denoising breaks the original limit and enters a new field; the existing denoising method combining stereo matching is based on traditional calculation, but the invention provides the combination of stereo matching and deep learning, the denoising process is faster, and the denoising effect is better
Example 2:
referring to fig. 3 and 5-7, based on embodiment 1 but with the difference that,
with reference to the contents described in embodiment 1, the novelty, creativity, and rationality of the contents of stereo matching, image block pairing, and image training are further described, which specifically include the following contents:
1) stereo matching
Stereo matching algorithms almost all rely on the assumption that the intensities are identical, i.e. that two corresponding points in the two views have the same intensity. However, this assumption is difficult to guarantee in practice, since there are many factors that influence the imaging process. Most efforts to deal with the problem of intensity disparity have focused on illumination variations between the two views, and the problem of noise interference has not been adequately studied. However, microscopic views are often corrupted by noise, and noise-resistant stereo matching becomes an unavoidable problem.
The matching cost in the disparity calculation is a function that measures the similarity between pixels (or image blocks). The similarity between two image blocks, measured by the euclidean distance, is not robust to noisy images, as shown in fig. 3. Therefore, the idea of Principal Component Analysis (PCA) is first considered to reduce the effect of noise on the matching. PCA is a traditional decorrelation method widely usedThe field of dimensionality reduction and image denoising. When the raw data is projected into the PCA domain, the signal and noise can be separated, and by preserving the principal components, the noise can be eliminated to some extent. These coefficients can be used to describe the similarity of the image blocks, since clean information can be well preserved in the principal component. Therefore, we replace the euclidean distance over the noisy image block by the distance between two vectors of size C
Figure 714864DEST_PATH_IMAGE001
WhereinCIs the number of principal components, r2Is the image block size. Thus, the first term that defines the matching cost is:
Figure 315610DEST_PATH_IMAGE002
(1)
whereinP i Is a pixeliAn image block centered;P X andP C respectively represent arbitrary viewing anglesXAnd central viewing angleCImage blocks in two noise views;drepresents the estimated disparity value of the image of the object,
Figure 310110DEST_PATH_IMAGE016
the method is a projection function and has the function of acquiring the coefficient corresponding to the image block after dimension reduction.
In addition to the principal components, another important similarity information in the parallax calculation is edge similarity, and experiments prove that the edge similarity has a good effect on stereo matching at different intensities. However, when a picture is corrupted by noise, the original edge information (gradient) is unstable. Through experiments, the horizontal gradient and the vertical gradient of the microscopic image contain different information, the horizontal gradient information eliminates the influence of noise, and the vertical gradient information describes the strength of local structures, so that the microscopic image is robust to the noise. Therefore, we propose to introduce a concept describing edge information into the blur kernel estimation, which is defined as:
Figure 654504DEST_PATH_IMAGE004
(2)
wherein
Figure 683640DEST_PATH_IMAGE005
Respectively represent pixels
Figure 455287DEST_PATH_IMAGE017
The addition of 0.5 to the denominator is to prevent large MI responses in low texture regions. FIG. 5 demonstrates the effectiveness of this edge information descriptor, where FIG. 5 (a) is the horizontal gradient information of a noisy image, eliminating the effect of noise; FIG. 5 (b) is vertical gradient information for a noisy image, depicting the intensity of local structures; fig. 5 (c) processes the noise image result using the edge information descriptor MI, and can detect the characteristic edge well. We take this describing function as the second term of our matching cost. And the edge descriptor is defined as:
Figure 687816DEST_PATH_IMAGE006
(3)
combining the two cost items, the complete matching cost is defined as:
Figure 835901DEST_PATH_IMAGE007
(4)
wherein
Figure 719543DEST_PATH_IMAGE008
Is an adjustment parameter for balancing the influence of the principal component item and the edge information item. Fig. 6 demonstrates the effectiveness of PCMI, where fig. 6 (a) is the disparity estimated for a noise image using the matching cost AD (absolute value of pixel difference); fig. 6(b) is a parallax estimated for a noise image using the matching cost PCMI. For simplicity, aggregation of the matching cost is done by BoxFilter, without refinement.
2) Image block pairing
Unlike a single image, a pair of images providesWith one additional view, the image block selection can be extended to both views. A similarity function is provided in an NLM algorithm and used for a de-noising device under an ideal condition. Giving a pair of images of different viewing anglesI L AndI R we denote the image block centered on pixel i asP i Image blocks around j are represented asP j . If the picture isI L In (1)P i And withP j Similarly, then they are in the figureI R In the corresponding image blockP i-di AndP j-dj should also be similar to each other, whereind i Representing the disparity at pixel i. Based on the above assumptions, the similarity adaptive to the ideal case proposed in the NLM algorithm is defined as:
Figure 927671DEST_PATH_IMAGE018
(5)
whereind i Andd j the true disparity at the i and j pixels, respectively.
This ideal situation requires two conditions to be met, the dual-view image pair being a clean image and the used disparity being true disparity. Both the reference image blocks and the similar image blocks in the left view are mapped into the right view according to disparity, which is called "full mapping". Most multi-view based denoising methods use full mapping to search for similar image blocks. However, in practice, the disparity calculated from a noisy image pair is inaccurate (as shown in fig. 6), there are a large number of erroneous values in the disparity, and many similar image blocks are mapped to erroneous positions. This wrong mapping results in a worse grouping since the purpose of introducing another view is to improve the accuracy of the image block pairing. As shown in fig. 7(a), both reference image blocks (circle blocks) and similar image blocks (square blocks) in the left view are mapped into the right view according to disparity, which we call "full mapping". To solve this problem, a new image block grouping standard based on the structural correlation between noise image pairs has been proposed.
Pixel in left viewiImage block with center
Figure 631184DEST_PATH_IMAGE019
As a reference image block, its similar image block is denoted
Figure 317381DEST_PATH_IMAGE020
. First, a reference image block is mapped to a right view according to disparity, resulting in a mapped reference image block
Figure 68911DEST_PATH_IMAGE021
. However, unlike NLM algorithms, this processing method proposes to map the "pattern" of the reference image block to the right view (NLMM). Here, "mode" refers to a spatial relationship between the reference image block and the similar image block. Once a similar image block is identified in the left image, its relationship to the reference image block is maintained in the right image during the search. Thus, the spatial structure of similar image blocks in the left view as the reference image block is maintained and copied into the right view. Fig. 7(b) is an example. The similarity between two image blocks is defined asS R The method specifically comprises the following steps:
Figure 447940DEST_PATH_IMAGE022
(6)
wherein,
Figure 638750DEST_PATH_IMAGE023
representing image blocks mapped from a left view to a right view according to structural dependencies.
When the method is particularly applied to the micro-pinhole image block matching problem, the characteristics of a pinhole view are considered: the closer to the center position the less noise content of the image, so we need to find the image block that is most similar to the reference image block in the center view. Thereby providing a new similarity functionS
Figure 394217DEST_PATH_IMAGE024
(7)
The meaning of this function is: when a suspected corresponding block of the reference image block is found in the central view, we calculate the similarity of the similar block of the reference image block to its central corresponding block, and when they are all similar, we assume that the reference image block found its central corresponding block. Fig. 7(c) demonstrates the effectiveness of equation (7).
Although the use of spatial structure information solves the problem with low quality disparity, the similarity between two image blocks determined by the euclidean distance is not robust to noisy images, as shown in fig. 3. Furthermore, the accuracy of image block similarity measurement of double-view denoising is improved by adding a PCA method. The similarity in equation (7) further evolves intoS PCA
Figure 986872DEST_PATH_IMAGE025
(8)
Wherein
Figure 553114DEST_PATH_IMAGE003
Is a projection function.
3) Image training
By the method, the user can view in any view according to the similarityI X And central viewI C Pairs of image blocks are obtained. Inputting a training set of a neural network, and using the image blocks to train the CARE network for denoising the microscopic image. We use the CARE tool provided in the CSBDeep framework as the basis for the implementation, and according to the settings of the CSBDeep, a U-Net architecture is used, with batch normalization added before each activation function.
Example 3:
referring to fig. 8-10, based on embodiment 1-2 but with the difference,
in this embodiment, qualitative and quantitative evaluation is performed on the microneedle hole view denoising method combining stereo matching and deep learning proposed by the present invention through a design experiment. The work focus of the invention is a microneedle hole view, the experiment is carried out based on a micro substance data set Celegant established in the existing design, and the movement of the nematode is shot by the Celegant, and the total number of frames is 150. Each frame contains 169 views, the view of the central view being (7, 7). The data was collected using a microscope with a 125 μm pitch f/20 microlens array and a 60 x 1.4NA objective in this design. The denoising performance of the proposed method without a reference image is evaluated.
Since a true-value image of a microscopic image cannot be obtained, the conventional peak signal-to-noise ratio (PSNR) cannot be used for measuring the noise level of the image. In quality evaluation of a reference-free image, the sharpness of the image is often used as a measure of the image quality, and it is generally considered that the quality is worse when the image is blurred, and vice versa. However, the gradient information amount of the microscope image with high noise degree is mixed, the image quality cannot be judged by using the gradient information, and the general quality evaluation method without the reference image is not applicable, so that the quality of the microscope image is considered to be measured by using the single-image noise level estimation proposed by Masayuki Tanaka. The method comprises the steps of detecting and selecting a weak texture block based on the texture strength of an image block gradient matrix, and estimating a Noise Level Function (NLF) parameter of signal-dependent noise (SDN) in an image by using the mean value and the variance of the weak texture block selected by a Maximum Likelihood (ML) estimator.
Furthermore, since the noise content is higher in the more angularly skewed views, we constructed 5 test sets representing different noise levels at different distances of view from the central view, these images being from the Celegant dataset. Fig. 8 shows an example image of these 5 test sets.
Setting parameters
In the experiment, the image block size r and the search window size S were set to 8 and 13, respectively. The number of similar image blocks for image block matching between the two views is set to 16. The number of principal components in formula (6) in image block matching in embodiment 1 is set to 3. Using the same network architecture as proposed in CSBDeep, i.e., U-Net with depth 3, kernel size 5, batch norm 32, input channel 1, the last layer is a linear activation function, containing 1 output channel, yielding only one prediction. The batch size of training is 16, the initial learning rate is 0.0004, the size of the input image block is 64 pixels, 200 rounds are trained, and a standard learning rate scheduler is used. But differs in that the network is trained using the extracted image blocks in the center view as ground truth data and standard mean square error loss. All parameters were adjusted on the Celegant data set and applied to all 5 test sets.
The number of principal components C in the formula (6) in the image block matching in embodiment 1 is set by experiment. Fig. 9 illustrates the relationship between the denoising performance and the number of principal components C, showing the denoising performance of an image based on the Celegant dataset.
② evaluation of denoising Performance
Since the proposed method combines stereo information with neural networks, we mainly compare it with the state-of-the-art non-single image denoising method, which includes using dual-view joint image denoising JID and TID based on target external database denoising, and blind spot micro denoising neural network method, when evaluating denoising performance. The blind spot micro denoising neural network method comprises N2S for improving the network structure of the mesh points]And PPN2V for blind-spot denoising networks using a joint noise model. In addition to the above denoising method, since our algorithm is inspired by NLMM, we also compare it with NLMM.
The comparison results are shown in table 1.
Table 1: noise level comparison using various algorithms on 5 test sets representing different noise levels
(*102) noisy NLMM JID TID N2S PPN2V proposed
Test set 1 28.0644 27.1259 19.7395 26.8451 26.2873 21.3705 15.6469
Test set 2 37.2735 34.9264 25.3373 34.2824 35.1045 29.3854 21.4598
Test set 3 49.7404 39.8691 36.3993 42.4927 39.9884 35.6907 30.0136
Test set 4 59.2772 55.7283 45.1639 53.7050 53.0738 45.9814 39.6362
Test set 5 70.2210 64.0461 47.9505 71.3235 57.6076 50.5998 44.2325
Average 48.9153 44.3392 34.9181 45.7297 42.4123 36.6056 30.1978
From table 1 we can see that the proposed method is clearly superior to NLMM at both low and high noise levels, with a reduction in noise level of over 9. In addition, compared with the method provided by the invention, the presentation of the joint image denoiser JID and TID is slightly inferior. Especially TID, its performance in the original text is significantly better than the single image denoising method, provided that only one image is corrupted by noise, in our experiments, its performance is significantly degraded when both views are corrupted by noise. In table 1 (right two and three columns), the representation of the blind-spot denoising network method is shown, and the result shows that the proposed method is superior to all other denoising methods participating in comparison. Specifically, it is on average 12.2 and 6.4 higher in noise reduction level than N2S and PPN2V, respectively. With the increase of the noise level, compared with other denoising methods, the method provided by the invention performs better. For example, for the fifth test set, the proposed method of the present invention is 13.4 higher than N2S and 6.4 higher than PPN 2V. This indicates that the proposed method is efficient and robust to high noise levels.
Fig. 10 shows a qualitative comparison of the above denoising methods. The denoising results of the test set 1 and the test set 5 are shown. The selected areas (white and black boxes) are enlarged for comparison. We can see that the proposed method reduces the noise considerably and retains more details than other methods. In particular, in test set 1, the proposed method reconstructs the internal structure of the nematode cells well, while others either smooth them or have edge artifacts or apparently still carry high levels of noise. The method proposed by the present invention also performed better on the margins of the nematodes. The test set 5 proves that, from the visual result, the proposed method can reconstruct the structure and the details even under large noise, and in conclusion, the experiment shows that the proposed method has good performance in the pinhole microscopic image denoising.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art should be able to cover the technical scope of the present invention and the equivalent alternatives or modifications according to the technical solution and the inventive concept of the present invention within the technical scope of the present invention.

Claims (4)

1. A micro needle hole view noise reduction method combining stereo matching and deep learning is characterized by specifically comprising the following steps:
s1, stereo matching: inputting the micro-needle hole views captured under different visual angles in the same scene and the corresponding central view into a stereo matching module together for stereo matching work to obtain a view difference between the micro-needle hole views and the central view;
s2, pairing image blocks: searching all image blocks in each view according to the view difference obtained in the step S1, calculating the similarity, and obtaining the image blocks matched with the similarity to form a training pair;
s3, image training: inputting the image block training pair obtained in the S2 into a CARE denoising network for training, and obtaining the network weight of the image block after training;
s4, denoising result prediction: and processing the test image based on the network weight obtained in the step S3, and predicting the denoising result of the image.
2. The method for reducing noise of a micro-pinhole view by combining stereo matching and deep learning according to claim 1, wherein the view difference calculation based on the stereo matching algorithm mentioned in S1 relates to a main component item and edge item matching cost calculation, which specifically includes the following:
a1, based on the principle component analysis idea, replacing Euclidean distance on the image block of the noise image by the distance between two vectors with the size of C,
Figure 759174DEST_PATH_IMAGE001
wherein C is the number of main components, r2Is the image block size; thus, the first term defining the matching cost calculation is:
Figure 587189DEST_PATH_IMAGE002
(1)
(1) in the formula,P i an image block centered on a pixel i;P X andP C respectively representing image blocks in two noise views of an arbitrary view angle X and a central view angle C;drepresenting the estimated disparity value;
Figure 727183DEST_PATH_IMAGE003
the projection function is used for acquiring the coefficient corresponding to the image block after dimension reduction;
a2, a concept of describing edge information is defined and introduced into fuzzy core calculation, and the concept is specifically defined as follows:
Figure 166255DEST_PATH_IMAGE004
(2)
(2) in the formula,
Figure 442516DEST_PATH_IMAGE005
respectively representing the horizontal and vertical gradients of the pixel; the denominator is added with 0.5 to prevent large MI response in low texture areas;
a3, further defining an edge descriptor based on the description concept defined in the A2, wherein the edge descriptor is specifically defined as:
Figure 676051DEST_PATH_IMAGE006
(3)
taking the formula (3) as a second term of the matching cost calculation;
a4, simultaneous equations (1) and (3), the complete matching cost calculation function can be defined as:
Figure 37762DEST_PATH_IMAGE007
(4)
(4) in the formula,
Figure 31257DEST_PATH_IMAGE008
is an adjustment parameter for balancing the influence of the principal component item and the edge information item;
a5, calculating the matching cost between the main component item and the edge information item by using the formula (4) obtained in the A4, completing the stereo matching work between the micro-pinhole view and the corresponding central view, and acquiring view difference data.
3. The method for reducing noise of a micro pinhole view by combining stereo matching and deep learning according to claim 1, wherein the image block pairs mentioned in S2 specifically include the following contents:
b1, based on the characteristic that the microneedle hole view is closer to the central position and the image noise content is smaller, searching an image block which is similar to the reference image block in the central view, and accordingly providing a new similarity functionSThe method specifically comprises the following steps:
Figure 162024DEST_PATH_IMAGE009
(5)
(5) in the formula,P i 、P j 、P i-d 、P js is a pixeli、j、i-d、jsAn image block centered;P X andP C respectively represent arbitrary viewing anglesXAnd central viewing angleCImage blocks in two noise views;
Figure 566460DEST_PATH_IMAGE010
representing image blocks mapped from an arbitrary view to a center view according to structural dependencies; wherein, the structural correlation refers to the spatial relationship between the image blocks; the meaning of this function is: when a suspected corresponding block of the reference image block is found in the central view, calculating the similarity between the suspected corresponding block and the central corresponding block, and if the suspected corresponding block is confirmed to be similar to the central corresponding block, determining that the reference image block finds the matched central corresponding block;
b2, adding a principal component analysis method on the basis of the similarity function provided in B1, and improving the accuracy of the similarity measurement of the image blocks subjected to double-view denoising, so that the similarity function (5) provided in B1 is further converted into:
Figure 415468DEST_PATH_IMAGE011
(6)
(6) in the formula,
Figure 196342DEST_PATH_IMAGE012
is a projection function;
b3, completing image block pairing work according to the optimized similarity function proposed in the B2, and forming an image block training pair.
4. The method for reducing noise of a micro pinhole view by combining stereo matching and deep learning according to claim 1, wherein the CARE denoising network mentioned in S3 is constructed based on a CSBDeep framework, and batch normalization is added before each activation function by using a U-Net architecture.
CN202210483567.3A 2022-05-06 2022-05-06 Micro-needle hole view noise reduction method combining stereo matching and deep learning Active CN114581910B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210483567.3A CN114581910B (en) 2022-05-06 2022-05-06 Micro-needle hole view noise reduction method combining stereo matching and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210483567.3A CN114581910B (en) 2022-05-06 2022-05-06 Micro-needle hole view noise reduction method combining stereo matching and deep learning

Publications (2)

Publication Number Publication Date
CN114581910A true CN114581910A (en) 2022-06-03
CN114581910B CN114581910B (en) 2022-07-12

Family

ID=81779045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210483567.3A Active CN114581910B (en) 2022-05-06 2022-05-06 Micro-needle hole view noise reduction method combining stereo matching and deep learning

Country Status (1)

Country Link
CN (1) CN114581910B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657947A (en) * 2015-02-06 2015-05-27 哈尔滨工业大学深圳研究生院 Noise reducing method for basic group image
CN104732500A (en) * 2015-04-10 2015-06-24 天水师范学院 Traditional Chinese medicinal material microscopic image noise filtering system and method adopting pulse coupling neural network
CN105761216A (en) * 2016-01-25 2016-07-13 西北大学 Image de-noising processing method and device
CN112435175A (en) * 2020-10-30 2021-03-02 西安交通大学 Metallographic image denoising method and system
CN112700389A (en) * 2021-01-13 2021-04-23 安徽工业大学 Active sludge microorganism color microscopic image denoising method
CN112819739A (en) * 2021-01-28 2021-05-18 浙江祺跃科技有限公司 Scanning electron microscope image processing method and system
CN114170092A (en) * 2020-09-10 2022-03-11 Imec 非营利协会 Method for denoising electron microscope images

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657947A (en) * 2015-02-06 2015-05-27 哈尔滨工业大学深圳研究生院 Noise reducing method for basic group image
CN104732500A (en) * 2015-04-10 2015-06-24 天水师范学院 Traditional Chinese medicinal material microscopic image noise filtering system and method adopting pulse coupling neural network
CN105761216A (en) * 2016-01-25 2016-07-13 西北大学 Image de-noising processing method and device
CN114170092A (en) * 2020-09-10 2022-03-11 Imec 非营利协会 Method for denoising electron microscope images
CN112435175A (en) * 2020-10-30 2021-03-02 西安交通大学 Metallographic image denoising method and system
CN112700389A (en) * 2021-01-13 2021-04-23 安徽工业大学 Active sludge microorganism color microscopic image denoising method
CN112819739A (en) * 2021-01-28 2021-05-18 浙江祺跃科技有限公司 Scanning electron microscope image processing method and system

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
CHEN LI等: "A survey for the applications of content-based microscopic image analysis in microorganism classification domains", 《SPRINGER》 *
HONGQIANG YU等: "Phase coherent noise reduction in digital holographic microscopy based on adaptive total variation", 《OPTICS AND LASERS IN ENGINEERING》 *
JOSEPH JACHINOWSKI等: "Beam Profiling With Noise Reduction From Computer Vision and Principal Component Analysis for the MAGIS-100 Experiment", 《ARXIV:2203.03380V1 [PHYSICS.INS-DET]》 *
RAJEEV RANJAN等: "Noises investigations and image denoising in femtosecond stimulated Raman scattering microscopy", 《JOURNAL OF BIOPHOTONICS》 *
SHIRO IHARA等: "Deep learning-based noise ¦ltering toward millisecond order imaging by using scanning transmission electron microscopy", 《RESEARCH SQUARE》 *
SHIVESH CHAUDHARY等: "Fast, Efficient, and Accurate Neuro-Imaging Denoising via Deep Learning", 《BIORXIV》 *
TIM-OLIVER BUCHHOLZ等: "CRYO-CARE: CONTENT-AWARE IMAGE RESTORATION FOR CRYO-TRANSMISSION ELECTRON MICROSCOPY DATA", 《ARXIV:1810.05420V2 [CS.CV]》 *
YUNUS ENGIN GÖKDAG等: "Image denoising using 2-D wavelet algorithm for Gaussian-corrupted confocal microscopy images", 《BIOMEDICAL SIGNAL PROCESSING AND CONTROL》 *

Also Published As

Publication number Publication date
CN114581910B (en) 2022-07-12

Similar Documents

Publication Publication Date Title
CN110111366B (en) End-to-end optical flow estimation method based on multistage loss
CN112001960B (en) Monocular image depth estimation method based on multi-scale residual error pyramid attention network model
CN104079827B (en) A kind of optical field imaging weighs focusing method automatically
CN117408890B (en) Video image transmission quality enhancement method and system
WO2018171008A1 (en) Specular highlight area restoration method based on light field image
CN103080979B (en) From the system and method for photo synthesis portrait sketch
CN105279772B (en) A kind of trackability method of discrimination of infrared sequence image
CN110176023B (en) Optical flow estimation method based on pyramid structure
CN111784620B (en) Light field camera full-focusing image fusion algorithm for guiding angle information by space information
CN112734915A (en) Multi-view stereoscopic vision three-dimensional scene reconstruction method based on deep learning
CN107767358B (en) Method and device for determining ambiguity of object in image
CN107194948B (en) Video significance detection method based on integrated prediction and time-space domain propagation
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN109671031B (en) Multispectral image inversion method based on residual learning convolutional neural network
CN111626927A (en) Binocular image super-resolution method, system and device adopting parallax constraint
CN116309757A (en) Binocular stereo matching method based on machine vision
CN112651945A (en) Multi-feature-based multi-exposure image perception quality evaluation method
CN116563916A (en) Attention fusion-based cyclic face super-resolution method and system
Lequyer et al. Noise2Fast: fast self-supervised single image blind denoising
Kong et al. No-reference image quality assessment for image auto-denoising
Babu et al. An efficient image dahazing using Googlenet based convolution neural networks
CN112329662B (en) Multi-view saliency estimation method based on unsupervised learning
CN111369435B (en) Color image depth up-sampling method and system based on self-adaptive stable model
CN116385401B (en) High-precision visual detection method for textile defects
CN114581910B (en) Micro-needle hole view noise reduction method combining stereo matching and deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant