CN109345469B - Speckle denoising method in OCT imaging based on conditional generative adversarial network - Google Patents

Speckle denoising method in OCT imaging based on conditional generative adversarial network


Publication number
CN109345469B
CN109345469B (granted from application CN201811042548.7A)
Authority
CN
China
Prior art keywords
image
images
oct
training
speckle
Prior art date
Legal status
Active
Application number
CN201811042548.7A
Other languages
Chinese (zh)
Other versions
CN109345469A (en)
Inventor
陈新建
石霏
马煜辉
朱伟芳
Current Assignee
Suzhou University
Original Assignee
Suzhou University
Priority date
Filing date
Publication date
Application filed by Suzhou University filed Critical Suzhou University
Priority to CN201811042548.7A priority Critical patent/CN109345469B/en
Publication of CN109345469A publication Critical patent/CN109345469A/en
Application granted granted Critical
Publication of CN109345469B publication Critical patent/CN109345469B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10101 Optical tomography; Optical coherence tomography [OCT]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic


Abstract

The invention discloses a speckle denoising method for OCT imaging based on a conditional generative adversarial network (cGAN), comprising the following steps: acquiring training images, preprocessing the training images, augmenting the data, training a model, and applying the model. Using the cGAN framework, a mapping model from speckle-corrupted OCT images to noise-free OCT images is obtained through training, and this mapping model is then used to remove speckle noise from retinal OCT images. A constraint that preserves edge details is introduced into the cGAN architecture during training, yielding an OCT speckle denoising model that is sensitive to edge information, so that the model can effectively remove speckle noise while better preserving image detail.

Description

Speckle denoising method in OCT imaging based on conditional generative adversarial network
Technical Field
The invention belongs to the technical field of retinal image denoising, and in particular relates to a speckle denoising method for OCT imaging based on a conditional generative adversarial network.
Background
Optical coherence tomography (OCT) is a broadband optical scanning tomography technique developed in recent years. It exploits the low coherence of a broadband light source to achieve high-resolution, non-invasive optical tomography; current OCT systems typically reach resolutions of tens of microns, and the best reach a few microns.
OCT can rapidly acquire cross-sectional images of ocular tissue at micron-scale resolution; it is currently an essential tool for retinal imaging and assists ophthalmologists in clinical diagnosis. Speckle noise, caused by multiple forward and backward scattering of the light waves, is a major cause of OCT image degradation: it often masks fine but important morphological details, hindering the observation of retinopathy, and it also degrades the performance of automatic analysis methods intended for objective, accurate quantification. Although the resolution, speed, and depth of OCT imaging have improved greatly over the last two decades, speckle noise remains an inherent, unsolved problem of the technology.
The patent with application number 201210242543.5 discloses an OCT image speckle reduction algorithm based on adaptive bilateral filtering: a speckle noise model of the original OCT image is established, a spatial function is constructed from this model according to the Rayleigh criterion, and a formula for adaptively correcting the filtering weight coefficients by the spatial function F is derived by analyzing the characteristics of that function. The method can reduce speckle noise in OCT images, reduce the mean square error, improve the peak signal-to-noise ratio, largely preserve edge information, and improve edge contrast, yielding clearer edge details. However, current retinal OCT speckle denoising algorithms still have the following shortcomings: (1) general-purpose image denoising algorithms cannot effectively remove speckle noise, which has its own distinctive characteristics; (2) some traditional denoising algorithms cause a degree of edge distortion and contrast reduction; (3) most denoising algorithms struggle to remove speckle noise while retaining image detail, and thus tend to over-smooth the image; (4) some methods are too complex and time-consuming to implement, and have difficulty adapting to images acquired by different types of OCT scanners.
Disclosure of Invention
The invention aims to provide a speckle denoising method for OCT imaging based on a conditional generative adversarial network (cGAN). Using the cGAN framework, a mapping model from speckle-corrupted OCT images to noise-free OCT images is obtained through training, and this model is then used to remove speckle noise from retinal OCT images.
To achieve this aim, the invention adopts the following technical scheme:
A speckle denoising method in OCT imaging based on a conditional generative adversarial network comprises the following steps:
S1, acquiring training images: repeatedly acquiring, for the same eye, three-dimensional images each containing a plurality of B-scan images;
S2, preprocessing the training images: registering B-scan images at nearby positions within the three-dimensional images, averaging the registered images and stretching their contrast to obtain noise-free OCT images, and pairing each noise-free OCT image with the original, speckle-corrupted B-scan image at the corresponding position to form a training image pair;
S3, augmenting the preprocessed training image pairs by random scaling, horizontal flipping, rotation, and non-rigid transformation to obtain the final training data set;
S4, training the model: using the training data set and a conditional generative adversarial network framework, introducing a constraint that preserves edge details, and obtaining an edge-sensitive OCT speckle denoising model through end-to-end training;
S5, applying the model: feeding a speckle-corrupted OCT image into the trained OCT speckle denoising model to obtain a noise-free OCT image.
Specifically, in step S2, registering the B-scan images at nearby positions within the three-dimensional images comprises the following steps:
S21, randomly selecting one of the three-dimensional images as the target image;
S22, taking the ith B-scan image of the target image as reference, placing into a set all B-scan images, across all three-dimensional images, whose positions are close to that of the ith B-scan image;
S23, registering, by affine transformation, all B-scan images in the set other than the ith B-scan image, with the ith B-scan image as reference.
Further, in step S2, averaging the registered images and stretching their contrast comprises the following steps:
S24, selecting, from the registered images, the several images with the highest mean structural similarity index, and averaging them together with the ith B-scan image to obtain the reference denoised image corresponding to the ith B-scan image;
S25, performing a piecewise linear gray-level stretching transformation on the reference denoised image to obtain a contrast-enhanced standard denoised image: gray levels below the mean of the background region are mapped to 0, and the remaining gray levels are linearly stretched to [0, 255].
Further, in step S24, the mean structural similarity index is obtained from the following formula:

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+C_1)(2\sigma_{xy}+C_2)}{(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)}$$

where x and y are two windows of size W at corresponding positions in the two images, $\mu_x$ and $\mu_y$ are the means of the pixel gray levels in the two windows, $\sigma_x^2$ and $\sigma_y^2$ are the variances of the pixel gray levels in the two windows, and $\sigma_{xy}$ is the covariance between the x and y windows; the constant $C_1$ is 2.55 and the constant $C_2$ is 7.65. The mean structural similarity index is the average of SSIM(x, y) over all window positions.
Specifically, in step S3:
the random scaling uses different scaling factors to simulate images acquired by OCT instruments of different resolutions, so that a model trained on the augmented data set can be tested on images acquired by different types of OCT scanners;
the horizontal flipping simulates the symmetry between the right and left eyes, so that the model trained on the augmented data set can handle both left and right eyes;
the rotation simulates different tilts of the retina in the OCT image, with rotation angles in the range of -30 to 30 degrees, improving the robustness of the trained model to retinal OCT images with different degrees of tilt;
the non-rigid transformation simulates deformation differences caused by different pathologies, so that the trained model can process OCT images of different pathologies.
Specifically, in step S4, the conditional generative adversarial network comprises a generator and a discriminator;
the conditional GAN conditions the generated image on the input image;
through training, the generator learns to produce images that the discriminator finds hard to distinguish from real ones, while the discriminator learns to improve its discriminating ability.
Further, the objective function of the conditional generative adversarial network is:

$$\mathcal{L}_{cGAN}(G,D)=\mathbb{E}_{x,y\sim P_{data}(x,y)}[\log D(x,y)]+\mathbb{E}_{x\sim P_{data}(x),\,z\sim P_z(z)}[\log(1-D(x,G(x,z)))]$$

where $P_{data}(x,y)$ is the joint probability density function of x and y, $P_{data}(x)$ is the probability density function of x, and $P_z(z)$ is the probability density function of z; G is the generator and D is the discriminator. The input of the generator is a B-scan image x from the target image together with a random noise vector z, and its output is the generated image G(x, z) corresponding to x. The input of the discriminator is either a real data pair (x, y), composed of a B-scan image x from the target image and the corresponding gold standard y, or a generated data pair (x, G(x, z)), composed of the B-scan image x and the generated image G(x, z); its output is the probability that the data pair is judged to be real.
in the training process, the goal of the discriminator is to maximize the objective function, the goal of the generator is to minimize the objective function, and then the optimized objective function is:
Figure GDA0003187180090000032
To make the generated image closer to the gold standard, an L1 distance constraint is introduced into the objective function:

$$\mathcal{L}_{L1}(G)=\mathbb{E}_{x,y\sim P_{data}(x,y),\,z\sim P_z(z)}\big[\lVert y-G(x,z)\rVert_{1}\big]$$
To address the difficulty of removing speckle noise while keeping edges sharp, an edge loss sensitive to edge information is introduced into the objective function:

$$\mathcal{L}_{edge}(G)=\mathbb{E}_{x,y,z}\Big[\textstyle\sum_{i,j}\big|\,\lvert y_{i+1,j}-y_{i,j}\rvert-\lvert G(x,z)_{i+1,j}-G(x,z)_{i,j}\rvert\,\big|\Big]$$

where i and j denote the longitudinal and transverse coordinates in the image.
The final optimization objective of the conditional generative adversarial network is:

$$G^{*}=\arg\min_{G}\max_{D}\;\mathcal{L}_{cGAN}(G,D)+\lambda_{1}\mathcal{L}_{L1}(G)+\lambda_{2}\mathcal{L}_{edge}(G)$$

where $\lambda_1$ and $\lambda_2$ are the weighting coefficients of the L1 distance and the edge loss, respectively.
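The non-adversarial terms of the generator objective can be sketched numerically as follows. The exact form of the edge loss is rendered as an image in the source; the vertical-gradient L1 form below is an assumption by analogy with the EPI metric defined later in the document, and the default weights lam1=100, lam2=1 are illustrative values from within the stated ranges.

```python
import numpy as np

def l1_loss(y, g):
    """L1 distance term ||y - G(x,z)||_1, taken as a mean over pixels."""
    return float(np.mean(np.abs(y - g)))

def edge_loss(y, g):
    """Edge-sensitive term: L1 difference between the vertical gradient
    magnitudes of the gold standard and of the generated image
    (a reconstruction; the patent's exact formula is not recoverable)."""
    gy = np.abs(np.diff(y, axis=0))
    gg = np.abs(np.diff(g, axis=0))
    return float(np.mean(np.abs(gy - gg)))

def total_generator_loss(adv, y, g, lam1=100.0, lam2=1.0):
    """Weighted sum of adversarial, L1, and edge terms; lam1 in 80-120 and
    lam2 in 0.8-1.2 keep the two terms at the same order of magnitude."""
    return adv + lam1 * l1_loss(y, g) + lam2 * edge_loss(y, g)

# toy gold standard y and generated image g shifted by a constant:
# the L1 term sees the offset, the edge term does not
y = np.array([[0.0, 0.0], [2.0, 2.0]])
g = y + 1.0
```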
Compared with the prior art, the invention has the following beneficial effects: (1) three-dimensional images, each containing a plurality of B-scan images, are repeatedly acquired for the same eye, and B-scan images at nearby positions are registered, averaged, and contrast-stretched, so the resulting training images are of higher quality; (2) in the augmentation of the training data, random scaling ensures that the trained model can be tested on images acquired by different types of OCT scanners; horizontal flipping ensures that the model can handle both left and right eyes; rotation improves the robustness of the model to retinal OCT images with different degrees of tilt; and non-rigid transformation enables the model to process OCT images of different pathologies; (3) a constraint that preserves edge details is introduced into the conditional GAN architecture during training, producing an edge-sensitive OCT speckle denoising model that effectively removes speckle noise while better preserving image detail.
Drawings
FIG. 1 is a flow chart of the speckle denoising method in OCT imaging based on a conditional generative adversarial network according to the present invention;
FIG. 2a is a B-scan image of the target image in example 1;
FIG. 2B is the original B-scan image after registration and averaging in example 1;
FIG. 2c is a standard de-noised image with enhanced contrast corresponding to the original B-scan image in example 1;
FIG. 3 is a schematic diagram showing a U-Net structure of a generator according to embodiment 2;
FIG. 4 is a schematic diagram of the PatchGAN model structure of the discriminator in embodiment 2;
FIG. 5 is an image of a background region and three signal regions manually demarcated in example 3;
FIG. 6a is a comparison graph of the effect of the OCT image after denoising in the denoising model in embodiment 3;
FIG. 6b is a comparison graph of the effect of the OCT image after denoising in the denoising model in embodiment 3;
FIG. 6c is a comparison graph of the effect of the OCT image after denoising in the denoising model in embodiment 3;
fig. 6d is a comparison graph of the effect of the OCT image after being denoised by the denoising model in embodiment 3.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
As shown in fig. 1, this embodiment provides a speckle denoising method in OCT imaging based on a conditional generative adversarial network, comprising the following steps:
S1, acquiring training images: repeatedly acquiring K three-dimensional OCT images of the same normal eye, avoiding eye movement as far as possible during acquisition;
S2, preprocessing the training images: registering B-scan images at nearby positions within the three-dimensional images, averaging the registered images and stretching their contrast to obtain noise-free OCT images, and pairing each noise-free OCT image with the original, speckle-corrupted B-scan image at the corresponding position to form a training image pair;
S3, augmenting the preprocessed training image pairs by random scaling, horizontal flipping, rotation, and non-rigid transformation to obtain the final training data set;
S4, training the model: using the training data set and a conditional generative adversarial network framework, introducing a constraint that preserves edge details, and obtaining an edge-sensitive OCT speckle denoising model through end-to-end training;
S5, applying the model: feeding a speckle-corrupted OCT image into the trained OCT speckle denoising model to obtain a noise-free OCT image.
Specifically, in step S2, registering the B-scan images at nearby positions within the three-dimensional images comprises the following steps:
S21, randomly selecting one of the K three-dimensional images as the target image, denoted $V_1$; the other K-1 three-dimensional images are denoted $V_2,\dots,V_K$, and the jth B-scan image of $V_m$ is denoted $B_j^m$;
S22, taking the ith B-scan image $B_i^1$ of the target image as reference, placing into a set the 2P+1 B-scan images with indices close to i from each of the K three-dimensional images, i.e. all B-scan images with index j satisfying $|j-i|\le P$, excluding the reference itself, giving (2P+1)K-1 images:

$$S_i=\{B_j^m \mid |j-i|\le P,\ (m,j)\neq(1,i)\};$$

S23, registering, by affine transformation, all B-scan images in the set to the reference $B_i^1$.
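Construction of the neighbour set for step S22 can be sketched in plain Python. One-based volume and slice indices and an interior reference index i (so the window is not truncated at the volume boundary) are assumptions of this sketch.

```python
def neighbour_set(i, K, P, n_bscans):
    """Indices (m, j) of the B-scans registered to the reference B_i^1:
    every B-scan whose index j is within P of i, over all K volumes,
    excluding the reference itself. For an interior i, the set has
    (2P+1)*K - 1 members."""
    return [(m, j)
            for m in range(1, K + 1)
            for j in range(max(1, i - P), min(n_bscans, i + P) + 1)
            if not (m == 1 and j == i)]

# example with K=10 volumes, window half-width P=3, 128 B-scans per volume
s = neighbour_set(i=50, K=10, P=3, n_bscans=128)
```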
Further, in step S2, averaging the registered images and stretching their contrast comprises the following steps:
S24, selecting, from the (2P+1)K-1 registered images, the Q images with the highest mean structural similarity index with respect to $B_i^1$, and averaging them together with $B_i^1$ to obtain the reference denoised image corresponding to $B_i^1$. Repeating this operation for all B-scan images in the target image yields a set of reference denoised images at the different positions of the retina. The original B-scan image is shown in fig. 2a; it is a normal retinal image centered on the macula, acquired with a Topcon DRI-1 scanner. The resulting reference denoised image is shown in fig. 2b;
S25, performing a piecewise linear gray-level stretching transformation on the reference denoised image to obtain a contrast-enhanced standard denoised image: gray levels below the mean of the background region are mapped to 0, and the remaining gray levels are linearly stretched to [0, 255]; the standard denoised image is shown in fig. 2c.
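The select-and-average operation of step S24 can be sketched as follows. To keep the sketch short, the MSSIM ranking is replaced by a negative-MSE similarity stand-in (an assumption, not the patent's metric).

```python
import numpy as np

def reference_denoised(ref, candidates, Q):
    """Average the reference B-scan with the Q registered candidates most
    similar to it. Similarity here is negative mean squared error, a
    simple stand-in for the MSSIM used in the patent."""
    scores = [-np.mean((c - ref) ** 2) for c in candidates]
    top = np.argsort(scores)[::-1][:Q]          # indices of the Q best matches
    stack = [ref] + [candidates[t] for t in top]
    return np.mean(stack, axis=0)

# toy reference and three registered candidates; the outlier (50.0) is dropped
ref = np.full((2, 2), 10.0)
cands = [np.full((2, 2), v) for v in (10.0, 11.0, 50.0)]
out = reference_denoised(ref, cands, Q=2)
```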
Further, in step S24, the mean structural similarity index is obtained from the following formula:

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+C_1)(2\sigma_{xy}+C_2)}{(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)}$$

where x and y are two windows of size W at corresponding positions in the two images, $\mu_x$ and $\mu_y$ are the means of the pixel gray levels in the two windows, $\sigma_x^2$ and $\sigma_y^2$ are the variances of the pixel gray levels in the two windows, and $\sigma_{xy}$ is the covariance between the x and y windows; the constant $C_1$ is 2.55 and the constant $C_2$ is 7.65. The mean structural similarity index is the average of SSIM(x, y) over all window positions.
In the present embodiment, K = 10-20, P = 3-5, Q = 20-70, and W = 3 or 5.
Specifically, in step S3:
the random scaling uses different scaling factors to simulate images acquired by OCT instruments of different resolutions, so that a model trained on the augmented data set can be tested on images acquired by different types of OCT scanners;
the horizontal flipping simulates the symmetry between the right and left eyes, so that the model trained on the augmented data set can handle both left and right eyes;
the rotation simulates different tilts of the retina in the OCT image, with rotation angles in the range of -30 to 30 degrees, improving the robustness of the trained model to retinal OCT images with different degrees of tilt;
the non-rigid transformation simulates deformation differences caused by different pathologies, so that the trained model can process OCT images of different pathologies.
Specifically, in step S4, the conditional generative adversarial network comprises a generator (G) and a discriminator (D). The generator aims to produce images that are as realistic as possible, while the discriminator aims to judge as accurately as possible whether an input image is real or produced by the generator; the model training process is a game between the two. Through training, the generator learns to produce images the discriminator finds hard to distinguish, and the discriminator learns to improve its discriminating ability. Unlike an ordinary generative adversarial network (GAN), the conditional GAN in this embodiment conditions the generated image on the input image.
Further, the objective function of the conditional generative adversarial network is:

$$\mathcal{L}_{cGAN}(G,D)=\mathbb{E}_{x,y\sim P_{data}(x,y)}[\log D(x,y)]+\mathbb{E}_{x\sim P_{data}(x),\,z\sim P_z(z)}[\log(1-D(x,G(x,z)))]$$

where $P_{data}(x,y)$ is the joint probability density function of x and y, $P_{data}(x)$ is the probability density function of x, and $P_z(z)$ is the probability density function of z. The input of the generator is a B-scan image x from the target image together with a random noise vector z, and its output is the generated image G(x, z) corresponding to x. The input of the discriminator is either a real data pair (x, y), composed of a B-scan image x from the target image and the corresponding gold standard y, or a generated data pair (x, G(x, z)), composed of the B-scan image x and the generated image G(x, z); its output is the probability that the data pair is judged to be real.

In the training process, the discriminator aims to maximize the objective function while the generator aims to minimize it, so the optimization objective is:

$$G^{*}=\arg\min_{G}\max_{D}\;\mathcal{L}_{cGAN}(G,D)$$

To make the generated image closer to the gold standard, an L1 distance constraint is introduced into the objective function:

$$\mathcal{L}_{L1}(G)=\mathbb{E}_{x,y\sim P_{data}(x,y),\,z\sim P_z(z)}\big[\lVert y-G(x,z)\rVert_{1}\big]$$

To address the difficulty of removing speckle noise while keeping edges sharp, an edge loss sensitive to edge information is introduced into the objective function:

$$\mathcal{L}_{edge}(G)=\mathbb{E}_{x,y,z}\Big[\textstyle\sum_{i,j}\big|\,\lvert y_{i+1,j}-y_{i,j}\rvert-\lvert G(x,z)_{i+1,j}-G(x,z)_{i,j}\rvert\,\big|\Big]$$

where i and j denote the longitudinal and transverse coordinates in the image.

The final optimization objective of the conditional generative adversarial network is:

$$G^{*}=\arg\min_{G}\max_{D}\;\mathcal{L}_{cGAN}(G,D)+\lambda_{1}\mathcal{L}_{L1}(G)+\lambda_{2}\mathcal{L}_{edge}(G)$$

where $\lambda_1$ and $\lambda_2$ are the weighting coefficients of the L1 distance and the edge loss, respectively. Experimental tests show that values of $\lambda_1$ in the range 80-120 and of $\lambda_2$ in the range 0.8-1.2 keep the L1 distance and the edge loss at the same order of magnitude, so that the optimization is stable and convergent.
Example 2
As shown in fig. 3 and 4, this embodiment provides a conditional generative adversarial network for speckle denoising in OCT imaging, comprising a generator and a discriminator. The generator adopts a U-Net convolutional neural network to produce images with better detail; it is an encoder-decoder structure with symmetric skip connections, which preserve feature-map detail at different resolutions in the encoder so that the decoder can better restore target details, making the generated image closer to the gold standard. The discriminator adopts a PatchGAN model to judge the authenticity of the generated image: it determines whether each N x N patch in the image is real or fake, treating the image as a Markov random field under the assumption that pixels in different patches are independent. Experimental tests set the patch size N to 70, which gives the discriminator fewer parameters and faster operation while still producing high-quality results.
Specifically, as shown in fig. 3, in the generator all convolutional and deconvolutional layers use 4 x 4 kernels with a stride of 2, and every layer except the first convolutional layer of the encoder uses batch normalization. All ReLU activations in the encoder are leaky ReLUs with a slope of 0.2, while the activations in the decoder are ordinary ReLUs. A dropout rate of 0.5 is introduced in the first three layers of the decoder as a form of the random noise vector z, which also effectively prevents overfitting during training; a hyperbolic tangent function is used as the activation of the last decoder layer.
specifically, as shown in fig. 4, in the discriminator, PatchGAN inputs a real data pair or a generated data pair to generate a corresponding output, which has 5 convolution layers, wherein the first three layers use convolution kernels with a sliding step size of 2 and a shape of 4 × 4, and the last two layers use convolution kernels with a sliding step size of 1 and a shape of 4 × 4; the middle three layers adopt batch standardization; all the activating functions ReLU in the first four layers are leak ReLU, the slope is 0.2, and the last layer adopts a Sigmoid function, so that the aim of identification is achieved; in the final 62 x 62 image, each pixel represents the probability that the corresponding 70 x 70 patch in the input was recognized as true.
Example 3
In the training of this embodiment, the 512 prepared data pairs are used as the training set; the generator and discriminator are optimized alternately with the Adam algorithm, using an initial learning rate of 2e-4 and a momentum of 0.5. The batch size is set to 1 and the number of training epochs to 100. After training, only the trained generator is used to test the OCT images to be despeckled. The 9 groups of test OCT images were collected from four different types of OCT scanners and include both normal and pathological eyes, as shown in table 1.
Table 1. OCT scanners from which the test OCT images were collected.
For speckle denoising of retinal OCT images, the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL), and edge preservation index (EPI) are used as objective evaluation metrics. To calculate these metrics, regions of interest (ROIs) and layer boundaries were manually delineated on the image, as shown in fig. 5: a background region, three signal regions (located in the retinal nerve fiber layer (RNFL), the inner retina, and the retinal pigment epithelium (RPE) complex, respectively), and three boundaries (from top to bottom: the upper boundary of the RNFL, the boundary between the inner and outer retina, and the lower boundary of the RPE, used as the positions for calculating the EPI). The image is a normal retinal image centered on the macula, acquired with a Topcon DRI-1 scanner. The performance metrics are introduced as follows:
(a) Signal-to-noise ratio (SNR)
SNR is a suitable criterion for reflecting the noise level in an image, defined as follows:

$$\mathrm{SNR}=10\log_{10}\!\left(\frac{\max(I)^{2}}{\sigma_b^{2}}\right)$$

where max(I) denotes the maximum gray value of the image I and $\sigma_b$ is the standard deviation of the background region.
(b) Contrast-to-noise ratio (CNR)

$$\mathrm{CNR}_i=10\log_{10}\!\left(\frac{\mu_i-\mu_b}{\sqrt{\sigma_i^{2}+\sigma_b^{2}}}\right)$$

where $\mu_i$ and $\sigma_i$ denote the mean and standard deviation of the ith signal region in the image, and $\mu_b$ and $\sigma_b$ denote the mean and standard deviation of the background region.
In the present embodiment, the average CNR is calculated over the 3 signal ROIs.
(c) Equivalent number of looks (ENL)
ENL is typically used to measure the smoothness of homogeneous areas in images. The ENL of the ith ROI in the image is calculated as:

$$\mathrm{ENL}_i=\frac{\mu_i^{2}}{\sigma_i^{2}}$$

where $\mu_i$ and $\sigma_i$ denote the mean and standard deviation of the ith signal ROI in the image.
In the present embodiment, the average ENL is calculated over the 3 signal ROIs.
(d) Edge preservation index (EPI)
EPI is a metric reflecting how much edge detail an image retains after denoising. The longitudinal EPI is defined as:

$$\mathrm{EPI}=\frac{\sum_{i,j}\left|I_d(i+1,j)-I_d(i,j)\right|}{\sum_{i,j}\left|I_o(i+1,j)-I_o(i,j)\right|}$$

where $I_o$ and $I_d$ denote the noisy and denoised images, and i and j denote the longitudinal and transverse coordinates in the image. Computed over the entire image, this coefficient is not an accurate indicator of edge preservation, because gradients in homogeneous regions become smaller after denoising; we therefore compute it in the neighborhood of the layer boundaries. In our experiment, the boundary neighborhood is set to a band 7 pixels high, centered on the boundary as shown in fig. 5.
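The four metrics above can be sketched in numpy as follows. The SNR and CNR formulas are rendered as images in the source, so the exact forms below are reconstructions consistent with the surrounding prose; the EPI here is computed over whole arrays rather than the 7-pixel boundary bands.

```python
import numpy as np

def snr(img, bg):
    """SNR = 10*log10(max(I)^2 / sigma_b^2), sigma_b from the background ROI."""
    return 10 * np.log10(img.max() ** 2 / bg.std() ** 2)

def cnr(roi, bg):
    """CNR of one signal ROI against the background ROI."""
    return 10 * np.log10(abs(roi.mean() - bg.mean()) /
                         np.sqrt(roi.var() + bg.var()))

def enl(roi):
    """Equivalent number of looks: mu^2 / sigma^2 of a homogeneous ROI."""
    return roi.mean() ** 2 / roi.var()

def epi(orig, den):
    """Longitudinal EPI: ratio of summed vertical gradient magnitudes
    after vs. before denoising."""
    return (np.abs(np.diff(den, axis=0)).sum() /
            np.abs(np.diff(orig, axis=0)).sum())

# toy image and manually chosen ROIs
img = np.array([[100.0, 50.0], [25.0, 10.0]])
bg = np.array([[1.0, 3.0], [3.0, 1.0]])      # mean 2, std 1
roi = np.array([[10.0, 12.0], [12.0, 10.0]])  # mean 11, var 1
```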
Table 2 compares the average performance metrics of the original B-scan images with those of the images processed by the denoising model; the improvement is substantial.
Table 2. Comparison of average performance metrics before and after speckle denoising of OCT images with the denoising model of this embodiment.
As can be seen from table 2, after speckle denoising of the OCT images with the denoising model of this embodiment, all four metrics improve substantially. As shown in figs. 6a, 6b, 6c, and 6d, the model removes speckle noise while retaining edge details to the greatest extent, and denoises well on images collected by different types of OCT scanners: fig. 6a is a normal retinal image centered on the optic papilla, acquired with a Topcon 2000 scanner; fig. 6b is a central serous chorioretinopathy image centered on the macula, acquired with a Topcon DRI-1 scanner; fig. 6c is a normal retinal image centered on the macula, acquired with a Topcon DRI-1 scanner; and fig. 6d is a pathological myopia retinal image centered on the macula, acquired with a Zeiss 4000 scanner.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. A speckle denoising method in OCT imaging based on a condition generation countermeasure network is characterized by comprising the following steps:
s1, acquiring a training image, and acquiring a three-dimensional image containing a plurality of B scanning images for the same eye for multiple times;
s2, preprocessing a training image, registering B scanning images at close positions in the three-dimensional image, averaging a plurality of registered images, stretching contrast to obtain a noise-free OCT image, and forming a training image pair by the noise-free OCT image and an original B scanning image containing speckle noise at a corresponding position;
s3, performing data amplification on the preprocessed training image pair through random scaling, horizontal turning, rotation and non-rigid transformation to obtain a final training data set;
s4, training a model, namely generating a confrontation network framework by using a training data set and adopting conditions, introducing constraint for keeping edge details, and obtaining an OCT image speckle denoising model sensitive to edge information through end-to-end training;
s5, using the model, sending the OCT image containing speckle noise into the trained OCT image speckle denoising model for calculation, and obtaining a noise-free OCT image;
the objective function of the conditional generation countermeasure network is:

L_{cGAN}(G, D) = E_{x,y \sim P_{data}(x,y)}[\log D(x, y)] + E_{x \sim P_{data}(x),\, z \sim P_z(z)}[\log(1 - D(x, G(x, z)))]

wherein P_data(x, y) is the joint probability density function of x and y, P_data(x) is the probability density function of x, and P_z(z) is the probability density function of z; G is the generator and D is the discriminator; the inputs of the generator are a B-scan image x in the target image and a random noise vector z, and its output is the generated image G(x, z) corresponding to x; the input of the discriminator is either a true data pair (x, y), composed of a B-scan image x in the target image and the corresponding gold standard y, or a generated data pair (x, G(x, z)), composed of the B-scan image x and the generated image G(x, z); its output is the probability that the data pair is judged to be true;
in the training process, the goal of the discriminator is to maximize the objective function, the goal of the generator is to minimize the objective function, and then the optimized objective function is:
G^* = \arg\min_G \max_D L_{cGAN}(G, D)
to make the generated image closer to the gold standard, an L1 distance constraint was introduced in the objective function:
L_{L1}(G) = E_{x,y,z}\left[ \| y - G(x, z) \|_1 \right]
in order to address the difficulty of removing speckle noise while clearly preserving edges, an edge loss sensitive to edge information is introduced into the objective function:

L_{edge}(G) = \left| 1 - \frac{\sum_{i,j} |G(x,z)_{i+1,j} - G(x,z)_{i,j}|}{\sum_{i,j} |y_{i+1,j} - y_{i,j}|} \right|
wherein i and j represent the coordinates of the longitudinal and transverse directions in the image;
the final optimization objective function of the conditional generation countermeasure network is:
G^* = \arg\min_G \max_D L_{cGAN}(G, D) + \lambda_1 L_{L1}(G) + \lambda_2 L_{edge}(G)

wherein \lambda_1 and \lambda_2 are the weighting coefficients of the L1 distance and the edge loss, respectively.
2. The method for denoising speckle in OCT imaging based on conditional generation countermeasure network of claim 1, wherein the step S2 of registering the B-scan images of the close positions in the three-dimensional image comprises the following steps:
s21, randomly selecting one of the three-dimensional images as a target image;
s22, based on the ith B scanning image in the target image, placing all B scanning images in the three-dimensional images, the positions of which are close to the ith B scanning image, in a set;
and S23, registering all the B-scan images except the ith B-scan image in the set by using affine transformation, wherein the B-scan images except the ith B-scan image are used as reference.
3. The method for denoising the speckle in the OCT imaging based on the conditional generation countermeasure network of claim 1, wherein the step S2 of averaging the registered images and performing contrast stretching comprises the following steps:
s24, selecting a plurality of images with the highest average structural similarity index from the registered images, and averaging the images with the ith B-scan image to obtain a reference de-noised image corresponding to the ith B-scan image;
s25, performing piecewise linear gray scale stretching transformation on the reference denoised image, wherein the gray scale smaller than the average value of the background area is mapped to 0, and the rest gray scales are expanded to [0,255] through linear stretching.
4. The method for denoising speckle in OCT imaging based on conditional generation countermeasure network of claim 3, wherein in step S24, the average structural similarity index is obtained by the following formula:
SSIM(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}

where x and y are two windows of size W at corresponding positions in the two images, \mu_x and \mu_y are the averages of the pixel gray levels in the two windows, \sigma_x^2 and \sigma_y^2 are the variances of the pixel gray levels in the two windows, and \sigma_{xy} is the covariance of the x and y windows; the constant C_1 is 2.55 and the constant C_2 is 7.65.
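The per-window SSIM of claim 4 can be sketched directly from the formula. The averaging over non-overlapping windows below is our own choice; the claim does not fix the window stride.

```python
import numpy as np

def ssim_window(x, y, C1=2.55, C2=7.65):
    """SSIM for one pair of windows, with the constants from the claim."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)
    den = (mu_x**2 + mu_y**2 + C1) * (var_x + var_y + C2)
    return num / den

def mean_ssim(a, b, w=8):
    """Average SSIM over non-overlapping w x w windows (stride is our choice)."""
    h, width = a.shape
    vals = [ssim_window(a[i:i + w, j:j + w], b[i:i + w, j:j + w])
            for i in range(0, h - w + 1, w)
            for j in range(0, width - w + 1, w)]
    return float(np.mean(vals))
```

Identical images score 1; in step S24 this index ranks the registered images so that only the most structurally similar ones are averaged.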
5. The speckle denoising method in OCT imaging based on conditional generation countermeasure network of claim 1, wherein in step S3:
the random scaling adopts different scaling factors to simulate images acquired by OCT instruments with different resolutions;
the horizontal flipping is used to simulate the symmetry of the right and left eyes;
the rotation is used to simulate different tilts of the retina in the OCT image, and the rotation angle range is -30 degrees;
the non-rigid transformation is used to model the difference in deformation caused by different pathologies.
6. The speckle denoising method in OCT imaging based on the conditional generation countermeasure network of claim 1, wherein in step S4, the conditional generation countermeasure network comprises a generator and a discriminator;
the conditional generation countermeasure network constrains the generated image with the input image as a condition;
through training and learning, the generator produces images that the discriminator finds difficult to distinguish from real ones, and the discriminator in turn improves its ability to tell them apart.
CN201811042548.7A 2018-09-07 2018-09-07 Speckle denoising method in OCT imaging based on condition generation countermeasure network Active CN109345469B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811042548.7A CN109345469B (en) 2018-09-07 2018-09-07 Speckle denoising method in OCT imaging based on condition generation countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811042548.7A CN109345469B (en) 2018-09-07 2018-09-07 Speckle denoising method in OCT imaging based on condition generation countermeasure network

Publications (2)

Publication Number Publication Date
CN109345469A CN109345469A (en) 2019-02-15
CN109345469B true CN109345469B (en) 2021-10-22

Family

ID=65304548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811042548.7A Active CN109345469B (en) 2018-09-07 2018-09-07 Speckle denoising method in OCT imaging based on condition generation countermeasure network

Country Status (1)

Country Link
CN (1) CN109345469B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110223254A (en) * 2019-06-10 2019-09-10 大连民族大学 A kind of image de-noising method generating network based on confrontation
CN110390647A (en) * 2019-06-14 2019-10-29 平安科技(深圳)有限公司 The OCT image denoising method and device for generating network are fought based on annular
CN110390650B (en) * 2019-07-23 2022-02-11 中南大学 OCT image denoising method based on dense connection and generation countermeasure network
CN110428377B (en) * 2019-07-26 2023-06-30 北京康夫子健康技术有限公司 Data expansion method, device, equipment and medium
CN110516201B (en) * 2019-08-20 2023-03-28 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN110516577B (en) * 2019-08-20 2022-07-12 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111462012A (en) * 2020-04-02 2020-07-28 武汉大学 SAR image simulation method for generating countermeasure network based on conditions
CN111402174A (en) * 2020-04-03 2020-07-10 北京图湃影像科技有限公司 Single OCT B-scan image denoising method and device
CN112085734B (en) * 2020-09-25 2022-02-01 西安交通大学 GAN-based image restoration defect detection method
CN112288657A (en) * 2020-11-16 2021-01-29 北京小米松果电子有限公司 Image processing method, image processing apparatus, and storage medium
CN112150341B (en) * 2020-11-26 2021-05-28 南京理工大学 Physical constraint and data drive-based dual-stage scatter imaging method
CN112215784B (en) * 2020-12-03 2021-04-06 江西博微新技术有限公司 Image decontamination method, image decontamination device, readable storage medium and computer equipment
CN112700390B (en) * 2021-01-14 2022-04-26 汕头大学 Cataract OCT image repairing method and system based on machine learning
CN112819867A (en) * 2021-02-05 2021-05-18 苏州大学 Fundus image registration method based on key point matching network
CN112801998B (en) * 2021-02-05 2022-09-23 展讯通信(上海)有限公司 Printed circuit board detection method and device, computer equipment and storage medium
CN113096169B (en) * 2021-03-31 2022-05-20 华中科技大学 Non-rigid multimode medical image registration model establishing method and application thereof
CN113269092A (en) * 2021-05-26 2021-08-17 中国石油大学(华东) Offshore oil spill detection method based on multi-scale condition countermeasure network
CN113240669A (en) * 2021-06-11 2021-08-10 上海市第一人民医院 Vertebra image processing method based on nuclear magnetic image
CN113283848B (en) * 2021-07-21 2021-09-28 湖北浩蓝智造科技有限公司 Goods warehousing detection method, warehousing system and storage medium
CN113687352A (en) * 2021-08-05 2021-11-23 南京航空航天大学 Inversion method for down-track interferometric synthetic aperture radar sea surface flow field
CN113780444B (en) * 2021-09-16 2023-07-25 平安科技(深圳)有限公司 Training method of tongue fur image classification model based on progressive learning
CN114841878A (en) * 2022-04-27 2022-08-02 广东博迈医疗科技股份有限公司 Speckle denoising method and device for optical coherence tomography image and electronic equipment
CN117706514B (en) * 2024-02-04 2024-04-30 中南大学 Clutter elimination method, system and equipment based on generation countermeasure network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319932A (en) * 2018-03-12 2018-07-24 中山大学 A kind of method and device for the more image faces alignment fighting network based on production
US10043261B2 (en) * 2016-01-11 2018-08-07 Kla-Tencor Corp. Generating simulated output for a specimen

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10043261B2 (en) * 2016-01-11 2018-08-07 Kla-Tencor Corp. Generating simulated output for a specimen
CN108319932A (en) * 2018-03-12 2018-07-24 中山大学 A kind of method and device for the more image faces alignment fighting network based on production

Also Published As

Publication number Publication date
CN109345469A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN109345469B (en) Speckle denoising method in OCT imaging based on condition generation countermeasure network
CN109493954B (en) SD-OCT image retinopathy detection system based on category distinguishing and positioning
Chen et al. DN-GAN: Denoising generative adversarial networks for speckle noise reduction in optical coherence tomography images
CN110517235B (en) OCT image choroid automatic segmentation method based on GCS-Net
JP7193343B2 (en) Method and apparatus for reducing artifacts in OCT angiography using machine learning techniques
US9418423B2 (en) Motion correction and normalization of features in optical coherence tomography
CN111292338A (en) Method and system for segmenting choroidal neovascularization from fundus OCT image
US11854199B2 (en) Methods and systems for ocular imaging, diagnosis and prognosis
JP2022520708A (en) Segmentation and classification of map-like atrophy patterns in patients with age-related macular degeneration in wide-angle spontaneous fluorescence images
CN108618749B (en) Retina blood vessel three-dimensional reconstruction method based on portable digital fundus camera
CN109377472B (en) Fundus image quality evaluation method
WO2020137678A1 (en) Image processing device, image processing method, and program
CN110575132A (en) Method for calculating degree of strabismus based on eccentric photography
CN115443480A (en) OCT orthopathology segmentation using channel-encoded slabs
Naz et al. Glaucoma detection in color fundus images using cup to disc ratio
Garcia-Marin et al. Patch-based CNN for corneal segmentation of AS-OCT images: Effect of the number of classes and image quality upon performance
CN110033496B (en) Motion artifact correction method for time sequence three-dimensional retina SD-OCT image
CN111292285B (en) Automatic screening method for diabetes mellitus based on naive Bayes and support vector machine
EP2693397B1 (en) Method and apparatus for noise reduction in an imaging system
Laaksonen Spectral retinal image processing and analysis for ophthalmology
JP2017512627A (en) Method for analyzing image data representing a three-dimensional volume of biological tissue
Avetisov et al. Calculation of anisotropy and symmetry coefficients of corneal nerve orientation based on automated recognition of digital confocal images
Syga et al. Fully automated detection of lamina cribrosa in optical coherence tomography: Framework and illustrative examples
Kucukgoz et al. Evaluation of 2D and 3D deep learning approaches for predicting visual acuity following surgery for idiopathic full-thickness macular holes in spectral domain optical coherence tomography images
Sinha et al. Low Quality Retinal Blood Vessel Image Boosting Using Fuzzified Clustering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant