CN114332283A - Training method based on double-domain neural network and photoacoustic image reconstruction method


Info

Publication number
CN114332283A
Authority
CN
China
Prior art keywords: net network, submodule, image, domain, module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111683421.5A
Other languages
Chinese (zh)
Inventor
田超
沈康
刘松德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Zhejiang Lab
Original Assignee
University of Science and Technology of China USTC
Zhejiang Lab
Application filed by University of Science and Technology of China USTC and Zhejiang Lab
Priority to CN202111683421.5A
Publication of CN114332283A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a training method based on a dual-domain neural network and a corresponding photoacoustic image reconstruction method, comprising the following steps: constructing a DI-net network model, wherein the DI-net network model comprises a data domain D-net network, an image domain I-net network, and a back-projection layer between the data domain D-net network and the image domain I-net network; acquiring a training sample data set, wherein the training sample data set comprises photoacoustic signals and photoacoustic images; and training the DI-net network model on the training sample data set to obtain a trained DI-net network model. A sparse-view photoacoustic signal is input into the trained DI-net network model to obtain a reconstructed image. Performing image reconstruction with the trained DI-net network model suppresses the streak artifacts caused by sparse view angles and improves image quality.

Description

Training method based on double-domain neural network and photoacoustic image reconstruction method
Technical Field
The invention relates to the field of medical image reconstruction, and in particular to a training method based on a dual-domain neural network and a photoacoustic image reconstruction method using that training method.
Background
Photoacoustic computed tomography (PACT) is a noninvasive biomedical imaging modality that combines the high contrast of optical imaging with the penetration depth of ultrasound imaging, giving it unique value in biomedicine. To obtain a high-quality photoacoustic image, the signal acquisition apparatus of the imaging system must perform dense spatial sampling over the view angle with a high-density array probe. In practice, however, constraints such as economic cost, fabrication technology and imaging time often force a sparse detector arrangement, so only spatially undersampled photoacoustic signals can be acquired and the data-completeness condition cannot be met. Severe streak artifacts then appear in the reconstructed image, degrading its readability and quantitative accuracy. To improve imaging quality, the imaging system therefore needs a reconstruction algorithm that can recover a high-quality photoacoustic image from sparse photoacoustic data.
Traditional analytic reconstruction algorithms are derived from rigorous mathematical-physical models, and stable reconstruction with them requires data completeness; they are therefore unsuitable for sparse-view PACT image reconstruction.
Iterative reconstruction algorithms based on compressed sensing can reconstruct high-quality photoacoustic images from sparse photoacoustic data, but they must repeatedly evaluate the photoacoustic forward and backward processes, so reconstruction efficiency is low and their application scenarios are limited.
In recent years, researchers have begun to apply deep learning to sparse-view PACT image reconstruction. In 2018, Hauptmann et al. proposed an iterative reconstruction algorithm based on deep learning; its basic idea is to unroll the solution process of a traditional iterative algorithm into a cascaded neural network, so that the network automatically learns the regularization term and the optimization process, improving reconstruction speed and image quality. However, like traditional iterative algorithms, it must repeatedly evaluate the photoacoustic forward and backward processes, so reconstruction efficiency is low. Moreover, under end-to-end training the algorithm demands a large amount of GPU memory and is unsuitable for reconstructing high-resolution images. In 2019, Davoudi et al. proposed a PACT image post-processing technique (Post-Unet) based on a convolutional neural network, whose main principle is to have a U-Net learn the mapping between low-quality and high-quality images. The main drawback of such methods is that when the input image quality is low, structures obscured by lost details or artifacts remain difficult to recover.
Disclosure of Invention
In view of the above, the main objective of the present invention is to disclose a training method based on a dual-domain neural network and a photoacoustic image reconstruction method, so as to at least partially solve the technical problems described above.
To achieve this object, one aspect of the present invention discloses a training method based on a dual-domain neural network, comprising:
acquiring a training sample data set, wherein the training sample data set comprises a photoacoustic signal and a photoacoustic image;
training a DI-net network model on the training sample data set to obtain a trained DI-net network model, wherein the DI-net network model comprises a data domain D-net network, an image domain I-net network, and a back-projection layer between the data domain D-net network and the image domain I-net network.
According to one embodiment of the invention, the data domain D-net network comprises a first encoder and a first decoder:
a skip connection exists between the first encoder and the first decoder; the skip connection is used to reduce the data loss caused by the down-sampling process of the data domain D-net network, the down-sampling being an operation performed in the first encoder;
wherein the first encoder comprises:
a first encoding submodule, a second encoding submodule, a third encoding submodule and a fourth encoding submodule which are cascaded, each comprising two convolutional layers and one maximum pooling layer;
within each of the first to fourth encoding submodules, the two convolutional layers have the same number of convolution kernels, and the number of convolution kernels doubles from each encoding submodule in the cascade to the next; and
the first decoder of the data domain D-net network comprises:
a fourth decoding submodule, a third decoding submodule, a second decoding submodule and a first decoding submodule which are cascaded, each comprising two convolutional layers and one up-sampling layer;
within each of the fourth to first decoding submodules, the two convolutional layers have the same number of convolution kernels, and the number of convolution kernels halves from each decoding submodule in the cascade to the next;
wherein submodules of the first encoder and the first decoder of the data domain D-net network that have the same number of convolution kernels are same-layer submodules.
According to one embodiment of the invention, the image domain I-net network comprises a second encoder and a second decoder:
a skip connection exists between the second encoder and the second decoder; the skip connection is used to reduce the data loss caused by the down-sampling process of the image domain I-net network, the down-sampling being an operation performed in the second encoder;
wherein the second encoder comprises:
a fifth encoding submodule, a sixth encoding submodule, a seventh encoding submodule and an eighth encoding submodule which are cascaded, each comprising two convolutional layers and one maximum pooling layer;
within each of the fifth to eighth encoding submodules, the two convolutional layers have the same number of convolution kernels, and the number of convolution kernels doubles from each encoding submodule in the cascade to the next; and
the second decoder of the image domain I-net network comprises:
an eighth decoding submodule, a seventh decoding submodule, a sixth decoding submodule and a fifth decoding submodule which are cascaded, each comprising two convolutional layers and one up-sampling layer;
within each of the eighth to fifth decoding submodules, the two convolutional layers have the same number of convolution kernels, and the number of convolution kernels halves from each decoding submodule in the cascade to the next;
wherein submodules of the second encoder and the second decoder of the image domain I-net network that have the same number of convolution kernels are same-layer submodules.
According to an embodiment of the invention, the data domain D-net network and the image domain I-net network further comprise:
skip connections established between the same-layer submodules of the data domain D-net network and between the same-layer submodules of the image domain I-net network;
a skip connection established between the first-layer and last-layer neural networks of the data domain D-net network, and a skip connection established between the first-layer and last-layer neural networks of the image domain I-net network; and
a normalization layer and a nonlinear activation function layer added after each convolutional layer of the first encoder and the second encoder.
According to one embodiment of the invention, the data domain D-net network and the image domain I-net network are connected by a back-projection layer, represented in matrix form as follows:
P=BY;
where P is a one-dimensional column vector containing the information of the image to be reconstructed, which is rearranged into the image matrix to be reconstructed and input to the image domain I-net network; B is the back-projection matrix, through which the data domain D-net network and the image domain I-net network are connected; and Y is the matrix of back-projection terms, generated based on the data domain D-net network.
According to an embodiment of the invention, the training sample data set comprises:
a spatially undersampled photoacoustic signal, a spatially fully sampled photoacoustic signal, and the photoacoustic image corresponding to the spatially fully sampled photoacoustic signal, the photoacoustic image being obtained from the spatially fully sampled photoacoustic signal by an image reconstruction algorithm; the spatially undersampled photoacoustic signal and the spatially fully sampled photoacoustic signal form a first data-set pair, and the spatially undersampled photoacoustic signal and the photoacoustic image corresponding to the fully sampled photoacoustic signal form a second data-set pair.
According to an embodiment of the present invention, the spatially undersampled photoacoustic signal and the spatially fully sampled photoacoustic signal are obtained by filtering the photoacoustic signals with different channel numbers collected by the detector apparatus, represented in matrix form as follows:
Y0=FX;
where F is a filter matrix; X is the photoacoustic signal matrix with different channel numbers acquired by the detector apparatus; and Y0 is the spatially undersampled or spatially fully sampled photoacoustic signal matrix.
According to an embodiment of the invention, training the DI-net network comprises:
training the data domain D-net network alone to obtain a first optimal network structure of the data domain D-net network;
after the first optimal network structure has been obtained and the first weight parameters of the data domain D-net network have been fixed, training the image domain I-net network end to end within the DI-net network model, thereby obtaining a second optimal network structure of the image domain I-net network and fixing its second weight parameters;
restoring the trainability of the first weight parameters of the data domain D-net network and performing end-to-end fine-tuning of the DI-net network model to obtain the final DI-net network model;
wherein the fixing of the first and second weight parameters is supervised by a squared-error loss function.
According to an embodiment of the invention, the Adam algorithm is used for optimization during training; a learning-rate decay strategy is also used, in which the decay of the learning rate is controlled by the decrease of the validation-set loss.
In another aspect of the present invention, a photoacoustic image reconstruction method is disclosed, comprising:
inputting a sparse-view photoacoustic signal into a DI-net network model to obtain a reconstructed image, wherein the DI-net network model is trained by the training method described above.
According to the training method based on the dual-domain neural network and the photoacoustic image reconstruction method using it, a sparse-view photoacoustic signal is input into the DI-net network model to obtain a reconstructed image. Performing image reconstruction with the trained DI-net network model suppresses the streak artifacts caused by sparse view angles and improves image quality.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of the embodiments of the present invention with reference to the accompanying drawings, in which:
FIG. 1(a) is a schematic diagram illustrating a DI-net network model architecture of an embodiment of the present invention;
FIG. 1(b) is a diagram schematically illustrating a data domain D-net network architecture according to an embodiment of the present invention;
FIG. 2 schematically illustrates a flow diagram for training a DI-net network in accordance with an embodiment of the present invention;
FIG. 3 schematically shows a schematic diagram of an experimental setup according to an embodiment of the present invention;
FIG. 4 is a graph schematically illustrating the reconstruction of mouse tomographic data under 128 detection view angles according to an embodiment of the present invention;
FIG. 5 is a graph schematically illustrating the reconstruction of mouse tomographic data under 256 detection view angles according to an embodiment of the present invention;
fig. 6 schematically shows the statistical results of the quantitative evaluation of one embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings in combination with the embodiments.
The terminology used herein is for the purpose of describing embodiments only and is not intended to be limiting of the invention. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
In one aspect, the invention discloses a training method for a dual-domain neural network, comprising the following steps:
acquiring a training sample data set, wherein the training sample data set comprises photoacoustic signals and photoacoustic images;
constructing a DI-net network model and training it on the training sample data set to obtain a trained DI-net network model, wherein the DI-net network model comprises a data domain D-net network, an image domain I-net network, and a back-projection layer between the data domain D-net network and the image domain I-net network. Fig. 1(a) schematically shows the DI-net network model architecture of an embodiment of the present invention.
According to an exemplary embodiment of the present invention, as shown in fig. 1(a), the DI-net network model includes a data domain D-net network, an image domain I-net network, and a back-projection layer between the data domain D-net network and the image domain I-net network.
More specifically, during training the data domain D-net network extracts features from the input photoacoustic signals and processes them to learn the mapping from the sparse photoacoustic signals of the training sample data set to its dense photoacoustic signals; the image domain I-net network extracts features from the input photoacoustic image and processes them to learn the mapping from the photoacoustic image produced after data domain D-net processing to the high-quality photoacoustic image corresponding to the dense photoacoustic signals of the training sample data set; and the back-projection layer connects the data domain D-net network and the image domain I-net network, yielding the predicted high-quality photoacoustic image.
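The data flow just described can be sketched in a few lines. This is a minimal illustrative sketch in Python/NumPy, not the patent's implementation: `d_net` and `i_net` are placeholder functions standing in for the trained networks, the back-projection matrix `B` is random rather than derived from the imaging geometry, and all sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 8, 12          # detector channels x time samples (toy sizes)
H, W = 6, 6           # reconstructed image size (toy)

def d_net(signal):
    # placeholder for the data domain D-net: maps a sparse signal
    # toward a dense signal of the same M x N shape
    return signal  # identity stand-in

def i_net(image):
    # placeholder for the image domain I-net: refines the
    # back-projected image, same H x W shape
    return image  # identity stand-in

# fixed (non-trainable) back-projection matrix connecting the two domains
B = rng.standard_normal((H * W, M * N))

def di_net(sparse_signal):
    y = d_net(sparse_signal)       # data-domain processing
    p = B @ y.reshape(-1)          # back projection: P = B Y
    img = p.reshape(H, W)          # rearrange column vector into image matrix
    return i_net(img)              # image-domain refinement

out = di_net(rng.standard_normal((M, N)))
print(out.shape)  # (6, 6)
```

The point of the sketch is only the composition: data-domain network, then a fixed linear back-projection, then the image-domain network.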
Fig. 1(b) schematically shows a data domain D-net network structure diagram of an embodiment of the present invention.
According to an exemplary embodiment of the present invention, as shown in fig. 1(b), the left dashed box schematically shows the first encoder of the data domain D-net network. The first encoder is composed of a first, second, third and fourth encoding submodule in cascade, each comprising two convolutional layers and one maximum pooling layer. Within each encoding submodule the two convolutional layers have the same number of convolution kernels, and the number of convolution kernels doubles from each encoding submodule to the next. The right dashed box schematically shows the first decoder of the data domain D-net network. The first decoder is composed of a fourth, third, second and first decoding submodule in cascade, each comprising two convolutional layers and one up-sampling layer. Within each decoding submodule the two convolutional layers have the same number of convolution kernels, and the number of convolution kernels halves from each decoding submodule to the next.
More specifically, a photoacoustic signal of size M × N enters the data domain D-net network and first passes into the first encoding submodule, whose initial channel number is k; the value of k is determined by the initial filter number of the data domain D-net network. Feature extraction is performed by the two convolutional layers of the first encoding submodule, for example by a 3 × 3 convolution kernel followed by a normalization layer and a nonlinear activation function layer, after which maximum-pooling down-sampling is applied. After passing through the first encoding submodule, the spatial size of the input data is halved and the number of channels is doubled. The data output from the first encoding submodule is then processed by the cascaded second, third and fourth encoding submodules in the same or a similar way, which is not repeated here; the fourth encoding submodule finally outputs data of size M/16 × N/16 with 16k channels. This data is input to the first decoder, where it enters the fourth decoding submodule and is up-sampled, doubling the size of the data matrix and halving the number of channels. The cascaded third, second and first decoding submodules operate in the same or a similar way, which is not repeated here; the first decoding submodule finally outputs an image of size M × N.
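The size and channel bookkeeping above can be checked with a short helper. This is a sketch of the stated architecture only (four encoding submodules, each halving the spatial size and doubling the channels, and four decoding submodules reversing this); the function names are illustrative, not from the patent.

```python
def encoder_shapes(M, N, k, stages=4):
    """Track (height, width, channels) through the cascaded encoding
    submodules: each halves the spatial size and doubles the channels."""
    h, w, c = M, N, k
    shapes = [(h, w, c)]
    for _ in range(stages):
        h, w, c = h // 2, w // 2, c * 2
        shapes.append((h, w, c))
    return shapes

def decoder_shapes(h, w, c, stages=4):
    """Each decoding submodule doubles the spatial size and halves the
    channels, mirroring the encoder."""
    shapes = [(h, w, c)]
    for _ in range(stages):
        h, w, c = h * 2, w * 2, c // 2
        shapes.append((h, w, c))
    return shapes

# M x N input with initial channel number k = 16
enc = encoder_shapes(512, 768, 16)
print(enc[-1])   # (32, 48, 256): M/16 x N/16 with 16k channels
dec = decoder_shapes(*enc[-1])
print(dec[-1])   # (512, 768, 16): back to size M x N
```

With M = 512, N = 768 and k = 16 (the values used later in the experiments), the encoder bottleneck is 32 × 48 with 256 channels, matching the M/16 × N/16, 16k-channel figure in the text.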
According to an exemplary embodiment of the present invention, each convolutional layer of each submodule of the first encoder of the data domain D-net network is further followed by a normalization layer and a nonlinear activation function layer.
More specifically, the normalization layer and nonlinear activation function layer attached to each convolutional layer of the first encoder improve the performance and training stability of the data domain D-net network.
According to an exemplary embodiment of the present invention, the sub-modules having the same number of convolution kernels in the first encoder and the first decoder of the data domain D-net network are the same layer sub-modules.
More specifically, skip connections are established between same-layer submodules of the data domain D-net network. When a decoding submodule processes the photoacoustic signal, the skip connection allows it to use not only the features produced by up-sampling but also the image features of the same-layer encoding submodule, which counteracts the loss of spatial resolution that repeated down-sampling in the first encoder may cause and prevents image details from being lost. A skip connection is also established between the first-layer and last-layer neural networks of the data domain D-net network, so that the network only has to learn the difference matrix between the input and output photoacoustic signal matrices, reducing the learning difficulty of the data domain D-net network.
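Both kinds of skip connection can be illustrated with toy arrays. A hedged NumPy sketch: the array names and shapes are made up for illustration, and the residual connection is shown on a stand-in function rather than a real network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same-layer skip connection: the decoding submodule concatenates its
# upsampled features with the same-layer encoder features, so detail
# lost to downsampling can be reused.
upsampled    = rng.standard_normal((32, 16, 16))  # (channels, H, W) after up-sampling
encoder_feat = rng.standard_normal((32, 16, 16))  # same-layer encoder output
merged = np.concatenate([upsampled, encoder_feat], axis=0)
print(merged.shape)  # (64, 16, 16): channel dimension doubled

# First-to-last-layer residual connection: the network only has to
# learn the difference between its input and the target.
def residual_block(x, learned_difference):
    return x + learned_difference(x)

out = residual_block(np.ones((4, 4)), lambda x: 0.1 * x)
print(out[0, 0])  # 1.1
```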
According to an exemplary embodiment of the present invention, the structure of the image domain I-net network, including its second encoder and second decoder, is the same as or similar to that of the data domain D-net network, except that the image domain I-net network needs to extract and process finer features of its input data: its initial channel number is set to twice that of the data domain D-net network, and the number of convolution kernels of each of its submodules is likewise twice that of the corresponding D-net submodule. The remaining parts are not repeated here.
According to an exemplary embodiment of the present invention, the data domain D-net network and the image domain I-net network of an embodiment of the present invention are connected by a back-projection layer, represented in matrix form as follows:
P=BY; (1)
where P is a one-dimensional column vector containing the information of the image to be reconstructed, which after rearrangement forms the image matrix to be reconstructed and is input to the image domain I-net network; B is the back-projection matrix, through which the data domain D-net network and the image domain I-net network are connected; and Y is the matrix of back-projection terms, generated based on the data domain D-net network.
More specifically, the data domain D-net network extracts and processes photoacoustic signal features from the input photoacoustic signals, the image domain I-net network extracts and processes photoacoustic image features from the input image matrix to be reconstructed, and the connection through the back-projection matrix B of the back-projection layer strengthens the mapping between the two domains, yielding the predicted high-quality photoacoustic image.
According to an exemplary embodiment of the present invention, a training sample data set of an embodiment of the invention comprises:
a spatially undersampled photoacoustic signal, a spatially fully sampled photoacoustic signal, and the photoacoustic image obtained from the spatially fully sampled photoacoustic signal by a corresponding image reconstruction algorithm; the spatially undersampled and spatially fully sampled photoacoustic signals form a first data-set pair, and the spatially undersampled photoacoustic signal together with the photoacoustic image corresponding to the fully sampled signal forms a second data-set pair.
More specifically, down-sampling the photoacoustic signals collected by an existing acquisition device by different factors yields undersampled photoacoustic signals with different channel numbers, meeting different training requirements. The spatially undersampled and spatially fully sampled photoacoustic signals are obtained after processing by a filter matrix, represented in matrix form as follows:
Y0=FX; (2)
where F is a filter matrix; X is the original photoacoustic signal matrix with different channel numbers acquired by the detector apparatus; and Y0 is the spatially undersampled or spatially fully sampled photoacoustic signal matrix.
Fig. 2 schematically illustrates a flow diagram for training a DI-net network in accordance with an embodiment of the present invention.
According to an exemplary embodiment of the present invention, as shown in fig. 2, training the DI-net network comprises:
S1: training the data domain D-net network alone to obtain a first optimal network structure of the data domain D-net network;
S2: after the first optimal network structure of the data domain D-net network has been obtained and its first weight parameters fixed, training the image domain I-net network end to end within the DI-net network model to obtain a second optimal network structure of the image domain I-net network and fix its second weight parameters;
S3: restoring the trainability of the first weight parameters of the data domain D-net network and performing end-to-end fine-tuning of the DI-net network model to obtain the final DI-net network model;
where, when the squared-error loss indicates that the data domain D-net network or the image domain I-net network has reached its optimal network structure, the current weight parameters are fixed, giving the first and second weight parameters respectively.
According to an exemplary embodiment of the invention, the Adam algorithm is used for optimization during training; a learning-rate decay strategy is also used, in which the decay of the learning rate is controlled by the decrease of the validation-set loss.
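The three stages S1–S3 can be mimicked on a toy problem. A minimal sketch, not the patent's training code: the "networks" here are single scalar gains a (playing the role of the D-net) and b (playing the role of the I-net), trained by plain gradient descent on a squared-error loss, with a frozen during stage S2 exactly as the weight-fixing step prescribes. The patent itself uses Adam with validation-driven learning-rate decay, omitted here for brevity.

```python
# toy model: "D-net" y = a*x, "I-net" z = b*y; targets are a = 2, b = 3
a, b = 0.5, 0.5
lr = 0.05
x, y_true, z_true = 1.0, 2.0, 6.0   # input, dense-signal target, image target

# S1: train the data-domain part alone on (sparse, dense) signal pairs
for _ in range(200):
    grad_a = 2 * (a * x - y_true) * x     # gradient of the squared-error loss
    a -= lr * grad_a

# S2: fix a, train the image-domain part end to end against the final target
for _ in range(200):
    z = b * (a * x)
    grad_b = 2 * (z - z_true) * (a * x)
    b -= lr * grad_b

# S3: restore trainability of a and fine-tune both parameters jointly
for _ in range(200):
    z = b * (a * x)
    err = 2 * (z - z_true)
    a -= lr * err * b * x
    b -= lr * err * a * x

print(round(a * b * x, 3))  # close to the end-to-end target 6.0
```

The staging matters: S1 gives the data-domain part a sensible starting point, S2 lets the image-domain part adapt to it without disturbing it, and S3 jointly fine-tunes both.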
The invention also discloses a photoacoustic image reconstruction method, comprising:
inputting a sparse-view photoacoustic signal into a DI-net network model to obtain a reconstructed image, wherein the DI-net network model is trained using the training method based on the dual-domain neural network described above. The reconstructed image obtained with the trained DI-net network model suppresses image artifacts and recovers more image details.
Fig. 3 schematically shows a schematic diagram of an experimental setup according to an embodiment of the present invention.
As shown in fig. 3, the acquisition device may use an annular detector array comprising 512 individual detection units; the detection radius and imaging-area size of the annular detector array may be similar to those required in the simulation, so as to facilitate the processing of the data.
According to an embodiment of the invention, experimental tests show that for the data domain D-net network the initial input size is M = 512 by N = 768 pixels and the initial channel number k is 16. Each of the two convolution layers comprises a convolution kernel with a size of 3 × 3 and a stride of 1, a normalization layer, and a nonlinear activation function layer; the result then enters a maximum pooling layer (kernel size 2 × 2, stride 2) for downsampling before being output. The initial channel number k of the image domain I-net network is 32. These settings effectively reduce the amount of parameter computation while ensuring the reconstruction performance of the DI-net network model.
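The shape bookkeeping implied by these settings can be sketched as follows, assuming 'same'-padded 3 × 3 convolutions (which keep the spatial size) and the stated 2 × 2 / stride-2 pooling; the function names are illustrative, not from the patent.

```python
import numpy as np


def maxpool2x2(x):
    """2 x 2 max pooling with a stride of 2, as used for downsampling
    in each encoder sub-module (single-channel 2-D input for clarity)."""
    h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
    x = x[:h, :w]  # drop any odd remainder row/column
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))


def encoder_shapes(h, w, k, levels=4):
    """Trace (height, width, channels) through the cascaded encoder
    sub-modules: the 3 x 3 / stride-1 'same' convolutions keep the
    spatial size, each 2 x 2 / stride-2 max pooling halves it, and the
    channel count doubles from one sub-module to the next."""
    shapes = []
    for i in range(levels):
        shapes.append((h, w, k * 2 ** i))  # after the two conv layers
        h, w = h // 2, w // 2              # after the max pooling layer
    return shapes
```

For the D-net input of 512 × 768 pixels with k = 16, `encoder_shapes(512, 768, 16)` traces the four cascaded encoding sub-modules down to 64 × 96 with 128 channels.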
More specifically, the encoder module of the DI-net network model may be a contraction path and the decoder module an expansion path. The transposed convolution layers in the decoder module perform upsampling operations to restore the image resolution to its original size, since the multiple downsampling operations in the encoder module cause a loss of spatial resolution. In order to prevent the loss of image details, skip connections are introduced between the encoder and decoder modules of the data domain D-net network and of the image domain I-net network; a skip connection enables a decoding submodule to use not only the features produced by the upsampling processing but also the image features of the same-layer encoding submodule. A normalization layer (InstanceNorm) and a nonlinear activation function layer (Leaky ReLU) are added to the convolution layers of the first encoder and the second encoder of the DI-net network model, which improves the performance and training stability of the DI-net network model.
According to an embodiment of the invention, two groups of sparse-view photoacoustic imaging training sample data sets of living mice may be constructed, with 128 and 256 detection views respectively. The 512-channel photoacoustic signals collected by the device shown in fig. 3 are downsampled by factors of 2 and 4 to obtain the 256-channel and 128-channel photoacoustic signals; the size of each photoacoustic signal is 512 × 768 pixels. The 512-channel photoacoustic signals and their corresponding reconstructed images serve as references for network training.
More specifically, tomographic imaging was performed on 14 mice, and photoacoustic signals of 9074 mouse slices were obtained in total (the number of slices acquired from different mice is not exactly the same). Of these, 12 mice were used for training (7500 slices), 1 for validation (787 slices), and the remaining 1 for testing (787 slices).
The reference photoacoustic image is reconstructed by a filtered back-projection algorithm, and the reconstruction process can be represented as follows:
p0(rs) = ∫Ω0 b(rd, t = |rs − rd|/c) dΩ/Ω0; (3)
in the formula, the back-projection term is given by
b(rd, t) = 2p(rd, t) − 2t ∂p(rd, t)/∂t; (4)
wherein p0(rs) is the reconstructed photoacoustic image, b(rd, t) is the back-projection term, rs is the position of the photoacoustic source, rd is the position of the detector, Ω0 is the solid angle subtended by the detection surface (Ω0 = 2π for an infinite planar geometry, Ω0 = 4π for spherical and cylindrical geometries), dΩ is the solid angle corresponding to the area dσ of a single detection unit, p(rd, t) is the photoacoustic signal obtained by the detector, t is time, and c is the speed of sound. dΩ can be expressed as:
dΩ = (dσ/|rs − rd|²) · [nd · (rs − rd)/|rs − rd|]; (5)
wherein nd is a unit normal vector of the detector surface pointing to the region of interest.
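A minimal numerical sketch of the discretized back-projection described above may look as follows. It assumes an annular array of point-like detection elements with inward-pointing normals and a uniform time axis; the function names and the nearest-sample time lookup are illustrative assumptions, not the patent's implementation.

```python
import numpy as np


def solid_angle_weights(r_s, det_pos, normals, d_sigma):
    """Discrete solid-angle weights for each detection element, following
    the dOmega expression above:
    dOmega = d_sigma / |r_s - r_d|^2 * [n_d . (r_s - r_d) / |r_s - r_d|]."""
    diff = r_s - det_pos                        # (n_det, dim)
    dist = np.linalg.norm(diff, axis=1)
    cos_term = np.sum(normals * diff, axis=1) / dist
    return d_sigma / dist ** 2 * cos_term


def backproject(b, det_pos, normals, d_sigma, r_s, t_axis, c=1500.0):
    """Discretized back-projection integral: p0(r_s) is the weighted sum
    of the back-projection terms b(r_d, t) sampled at the travel time
    t = |r_s - r_d| / c, normalized by the total solid angle."""
    w = solid_angle_weights(r_s, det_pos, normals, d_sigma)
    dist = np.linalg.norm(r_s - det_pos, axis=1)
    dt = t_axis[1] - t_axis[0]
    # nearest time sample for each detector (clipped to the valid range)
    idx = np.clip(np.round((dist / c - t_axis[0]) / dt).astype(int),
                  0, b.shape[1] - 1)
    return float(np.sum(b[np.arange(len(det_pos)), idx] * w) / np.sum(w))
```

For a spatially constant back-projection term, every pixel reconstructs to that constant, which provides a simple sanity check of the weighting.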
The collected 512-channel photoacoustic signals are downsampled by factors of 2 and 4 to obtain sparse 256-channel and 128-channel photoacoustic signals, respectively. In order to simplify the application of the DI-Net network model and accelerate network training, the sparse photoacoustic signals are filtered, and the preprocessed photoacoustic signals are used as the input of the network. The preprocessing process is as follows: first, the sparse 128- or 256-channel photoacoustic signals are interpolated into low-quality 512-channel photoacoustic signals using linear interpolation; the interpolated signals are then filtered using equation (4) above to obtain the preprocessed photoacoustic signals, which serve as the input of the network. The filtering process can be expressed in the form of a matrix as follows:
Y0=FX; (2)
wherein F is a filter matrix; X is the original photoacoustic signal matrix with a given number of channels acquired by the detector device; and the obtained Y0 is the spatially undersampled or spatially fully sampled photoacoustic signal matrix used for network training.
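The downsampling and linear-interpolation steps of this preprocessing can be sketched as follows. The wrap-around handling at the channel axis is an assumption made here because the detector array is annular, and the filtering step Y0 = FX is omitted; the function name and array layout (rows = channels, columns = time samples) are illustrative.

```python
import numpy as np


def preprocess(x_full, factor):
    """Sketch of the signal preprocessing described above.

    1) Downsample the 512 detector channels by `factor` (2 or 4),
       giving the sparse 256- or 128-channel signal.
    2) Linearly interpolate back to 512 channels along the channel
       axis (with wrap-around, since the array is annular), giving
       the low-quality dense signal fed to the network.
    """
    n_ch, n_t = x_full.shape
    sparse = x_full[::factor]                        # kept channels
    src = np.arange(0, n_ch, factor, dtype=float)    # their positions
    dst = np.arange(n_ch, dtype=float)
    dense = np.empty_like(x_full)
    for t in range(n_t):
        # append the first kept channel at position n_ch for wrap-around
        dense[:, t] = np.interp(dst,
                                np.append(src, float(n_ch)),
                                np.append(sparse[:, t], sparse[0, t]))
    return sparse, dense
```

Applying `preprocess` with factors 2 and 4 to a 512 × 768 signal matrix yields the two sparse-view training inputs described in the text.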
According to one embodiment of the present invention, after Y0 obtained from formula (2) above passes through the data domain D-net network, the back-projection term matrix Y in formula (1) can be obtained.
The data domain D-net network and the image domain I-net network are connected by a back-projection layer, which is represented in matrix form as follows:
P=BY; (1)
wherein, P is a one-dimensional column vector containing the information of the image to be reconstructed, and the image matrix to be reconstructed is formed after rearrangement and is input to the image domain I-net network; b is a back projection matrix, and the data domain D-net network and the image domain I-net network are connected through the back projection matrix B; y is a matrix of back-projected terms, generated based on the data domain D-net network.
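In code, the back-projection layer is just a fixed (non-trainable) matrix multiplication followed by a rearrangement; the following is a minimal numpy sketch, where the matrix B would be precomputed from the detector geometry rather than constructed as shown in the test.

```python
import numpy as np


def backprojection_layer(Y, B, image_shape):
    """Back-projection layer connecting the two networks: P = B Y maps
    the flattened back-projection term matrix produced by the D-net to
    the one-dimensional column vector P, which is then rearranged into
    the image matrix fed to the image domain I-net (e.g. 256 x 256)."""
    P = B @ Y.reshape(-1)           # one-dimensional column vector P
    return P.reshape(image_shape)   # rearranged image matrix
```

In a deep-learning framework, this layer would be implemented with a constant (non-trainable) weight so that gradients still flow through it from the I-net back into the D-net during end-to-end training.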
More specifically, the data domain D-net network takes the photoacoustic signal as input and extracts photoacoustic signal features for processing. The image domain I-net network operates on the input image matrix to be reconstructed, which is obtained by rearranging the one-dimensional column vector P containing the information of the image to be reconstructed; the size of the image matrix depends on the specific configuration and may be, for example, 256 × 256 pixels. The I-net network extracts photoacoustic image features for processing, and the predicted high-quality photoacoustic image is obtained by connecting the data domain D-net network and the image domain I-net network through the back-projection matrix B in the back-projection layer.
According to one embodiment of the invention, training a DI-net network includes the steps of:
s1: training the data domain D-net network to obtain a first optimal network structure of the data domain D-net network, wherein through the training at this stage, the data domain D-net network can learn the mapping relation between the space under-sampling photoacoustic data and the space full-sampling photoacoustic data;
s2: training the image domain I-net network, after obtaining a first optimal network structure of the data domain D-net network and fixing a first weight parameter of the data domain D-net network, performing end-to-end training based on a DI-net network model to obtain a second optimal network structure of the image domain I-net network and fixing a second weight parameter of the image domain I-net network, wherein the image domain I-net network can map the photoacoustic image output by the back projection layer into a high-quality photoacoustic image through the training at this stage;
s3: recovering the trainability of the first weight parameter of the data domain D-net network, and performing end-to-end fine tuning training on the DI-net network model to obtain the DI-net network model;
and in the training process, the Adam algorithm is adopted for optimization, with a batch size of 8. The initial learning rate in steps S1 and S2 is set to 5 × 10^-4, training 100 epochs; the initial learning rate of the fine-tuning training in step S3 is set to 1 × 10^-5, training 30 epochs. A learning rate decay strategy is used during training: when the loss value on the validation set does not decrease within 3 epochs, the learning rate automatically decays to 0.8 times its original value. All training is completed under the TensorFlow 2.0 framework; the computer used for training is configured with an Intel Xeon Gold 6226R CPU, an NVIDIA TITAN RTX GPU (24 GB video memory), and an Ubuntu operating system.
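The staged procedure above can be captured as data plus a small freeze/unfreeze helper. This is a framework-neutral sketch; the dictionary layout is an illustrative assumption, while the learning rates, epoch counts, and which sub-network trains in each stage come from the text.

```python
def three_stage_schedule():
    """The staged training procedure described above, as data: which
    sub-network is trainable in each stage, with the initial learning
    rates and epoch counts given in the text (Adam optimizer, batch
    size 8)."""
    return [
        {"stage": "S1", "trainable": {"D-net"},          "lr": 5e-4, "epochs": 100},
        {"stage": "S2", "trainable": {"I-net"},          "lr": 5e-4, "epochs": 100},
        {"stage": "S3", "trainable": {"D-net", "I-net"}, "lr": 1e-5, "epochs": 30},
    ]


def set_trainable(weights, stage):
    """Freeze or unfreeze each sub-network's weights for the given stage."""
    for name in weights:
        weights[name]["trainable"] = name in stage["trainable"]
    return weights
```

Stage S2 trains the I-net end-to-end with the D-net weights frozen, and stage S3 unfreezes the D-net for joint fine-tuning, exactly mirroring steps S1 to S3.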
The image quality of the reconstructed image obtained by the trained DI-net network model is superior to that of reconstructed images obtained from sparse-view photoacoustic signals by other reconstruction methods.
Fig. 4 is a diagram schematically illustrating the reconstruction result of mouse tomographic data under 128 detection viewing angles according to an embodiment of the present invention.
As shown in fig. 4, part a of fig. 4 is the reference image under the 128-detection-view condition; parts b, c, and d of fig. 4 are, in sequence, the FBP, Post-Unet, and DI-Net reconstruction results under the 128-detection-view condition; parts e, f, and g of fig. 4 are, in sequence, the difference maps between the reference image and the FBP, Post-Unet, and DI-Net reconstructed images; and part h of fig. 4 gives the quantitative evaluation results of MSE, PSNR, and SSIM under the 128-detection-view condition. As indicated by the arrows in parts b, c, and d of fig. 4, severe streak artifacts appear in the image reconstructed by the FBP algorithm due to the spatial-domain undersampling; the artifacts obscure the real photoacoustic structures, so many details are lost in the reconstructed image. Compared with the FBP algorithm, the Post-Unet algorithm suppresses the artifacts well, but the reconstructed image still suffers from detail loss. The DI-Net algorithm achieves artifact-free sparse-view photoacoustic image reconstruction, and the details in the reconstructed image are more complete and clearer. The difference maps and the quantitative evaluation results shown in part h of fig. 4 indicate that DI-Net achieves the best reconstruction result, with a reconstructed image having lower MSE and higher PSNR and SSIM.
Fig. 5 is a graph schematically showing the reconstruction result of mouse tomographic data under 256 detection viewing angles according to an embodiment of the present invention.
As shown in fig. 5, part a of fig. 5 is the reference image under the 256-detection-view condition; parts b, c, and d of fig. 5 are, in sequence, the FBP, Post-Unet, and DI-Net reconstruction results under the 256-detection-view condition; parts e, f, and g of fig. 5 are, in sequence, the difference maps between the reference image and the FBP, Post-Unet, and DI-Net reconstructed images; and part h of fig. 5 gives the quantitative evaluation results of MSE, PSNR, and SSIM under the 256-detection-view condition. As can be seen from part b of fig. 5, the quality of the image reconstructed by the FBP algorithm improves significantly as the number of detection views increases, but obvious streak artifacts still remain in the image, resulting in low image contrast and poor overall visual quality. These artifacts are well suppressed by the Post-Unet and DI-Net algorithms shown in parts c and d of fig. 5, which yield artifact-free high-quality images. The difference maps and the quantitative evaluation results shown in part h of fig. 5 show that under the 256-detection-view condition Post-Unet can reconstruct an image of visual quality comparable to DI-Net, but DI-Net has higher reconstruction accuracy: its reconstructed image is closer to the reference image, with lower MSE and higher PSNR and SSIM.
Fig. 6 schematically shows the statistical results of the quantitative evaluation of one embodiment of the present invention.
As shown in fig. 6, parts a, b, and c of fig. 6 give the quantitative evaluation results of the MSE, PSNR, and SSIM indexes, respectively, for the FBP, Post-Unet, and DI-Net reconstructed images (from left to right in each part) under the 128-detection-view condition; parts d, e, and f of fig. 6 give the corresponding MSE, PSNR, and SSIM results under the 256-detection-view condition. Compared with FBP and Post-Unet reconstruction, DI-Net has higher reconstruction accuracy, and its reconstructed images are closer to the reference images, with lower MSE and higher PSNR and SSIM.
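The three evaluation indexes can be computed as follows. These are standard definitions rather than the patent's implementation; the `data_range` parameter is an assumption, and the single-window SSIM shown here is a simplification of the full windowed SSIM index.

```python
import numpy as np


def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((x - y) ** 2))


def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    m = mse(x, y)
    return float("inf") if m == 0 else float(10.0 * np.log10(data_range ** 2 / m))


def ssim_global(x, y, data_range=1.0):
    """Single-window (global) SSIM statistic; the full SSIM index
    averages this statistic over local sliding windows."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2)
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Lower MSE and higher PSNR/SSIM against the 512-view reference image correspond to the ranking reported in figs. 4 to 6.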
The invention also discloses a training device, which can comprise:
the acquisition module is used for acquiring a training sample data set, wherein the training sample data set comprises a photoacoustic signal and a photoacoustic image;
the training module is used for training the DI-net network model based on the training sample data set acquired by the acquisition module to obtain the trained DI-net network model;
wherein, the training module includes:
the first sub-module is used for processing the extracted photoacoustic signal characteristics to obtain a mapping relation between a sparse photoacoustic signal of a training sample data set and a dense photoacoustic signal of the training sample data set;
the second sub-module is used for processing the extracted photoacoustic image characteristics to obtain a mapping relation between a low-quality photoacoustic image obtained by training a sparse photoacoustic signal of the sample data set and a high-quality photoacoustic image obtained by training a dense photoacoustic signal of the sample data set;
and the third sub-module is used for connecting the first sub-module and the second sub-module, so that the algorithm can enhance the photoacoustic signals and the photoacoustic images in the data domain and the image domain at the same time.
The present invention also discloses a photoacoustic image reconstruction apparatus, which may include:
the acquisition module is used for acquiring a sparse visual angle photoacoustic imaging image;
and the reconstructed image module is used for obtaining a reconstructed image by utilizing the DI-net network model trained by the above training device.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It will be appreciated by a person skilled in the art that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present invention are possible, even if such combinations or combinations are not explicitly recited in the present invention. In particular, various combinations and/or combinations of the features recited in the various embodiments and/or claims of the present invention may be made without departing from the spirit or teaching of the invention. All such combinations and/or associations fall within the scope of the present invention.
The embodiments of the present invention have been described above. However, the examples are for illustrative purposes only and are not intended to limit the scope of the present invention. The scope of the invention is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the invention, and these alternatives and modifications are intended to fall within the scope of the invention.

Claims (10)

1. A training method based on a two-domain neural network comprises the following steps:
acquiring a training sample data set, wherein the training sample data set comprises a photoacoustic signal and a photoacoustic image;
training the DI-net network model based on the training sample data set to obtain the trained DI-net network model, wherein the DI-net network model comprises a data domain D-net network, an image domain I-net network and a back projection layer between the data domain D-net network and the image domain I-net network.
2. The method of claim 1, wherein the data domain D-net network comprises a first encoder and a first decoder:
a jump connection exists between the first encoder and the first decoder; the hopping connection is used for reducing data loss caused by a data domain D-net network in a down-sampling process, wherein the down-sampling process is an operation performed in the first encoder;
wherein the first encoder includes:
the first coding submodule, the second coding submodule, the third coding submodule and the fourth coding submodule are cascaded, and each of the first coding submodule, the second coding submodule, the third coding submodule and the fourth coding submodule comprises two convolutional layers and a maximum pooling layer;
in the first coding sub-module, the second coding sub-module, the third coding sub-module and the fourth coding sub-module, the number of convolution kernels of the two convolution layers contained in each coding sub-module is the same, and the number of convolution kernels in the first coding sub-module, the second coding sub-module, the third coding sub-module and the fourth coding sub-module which are cascaded is doubled in sequence; and
the first decoder of the data domain D-net network comprises:
the decoding device comprises a fourth decoding submodule, a third decoding submodule, a second decoding submodule and a first decoding submodule which are cascaded, wherein the fourth decoding submodule, the third decoding submodule, the second decoding submodule and the first decoding submodule respectively comprise two convolution layers and a maximum pooling layer;
the number of convolution kernels of the two convolution layers contained in each of the fourth decoding submodule, the third decoding submodule, the second decoding submodule and the first decoding submodule is the same, and the number of convolution kernels of the cascaded fourth decoding submodule, third decoding submodule, second decoding submodule and first decoding submodule is reduced by half in sequence;
wherein the submodules in the first encoder and the first decoder of the data domain D-net network, which have the same number of convolution kernels, are same-layer submodules.
3. The method of claim 2, wherein the image domain I-net network comprises a second encoder and a second decoder:
a skip connection exists between the second encoder and the second decoder; the skip connection is used to reduce data loss caused by a down-sampling process in the image domain I-net network, the down-sampling being an operation performed in the second encoder;
wherein the second encoder comprises:
the device comprises a fifth coding submodule, a sixth coding submodule, a seventh coding submodule and an eighth coding submodule which are cascaded, wherein the fifth coding submodule, the sixth coding submodule, the seventh coding submodule and the eighth coding submodule respectively comprise two convolutional layers and a maximum pooling layer;
in the fifth encoding sub-module, the sixth encoding sub-module, the seventh encoding sub-module, and the eighth encoding sub-module, the number of convolution kernels of the two convolution layers included in each of the fifth encoding sub-module, the sixth encoding sub-module, the seventh encoding sub-module, and the eighth encoding sub-module is the same, and the number of convolution kernels in the fifth encoding sub-module, the sixth encoding sub-module, the seventh encoding sub-module, and the eighth encoding sub-module that are cascaded is doubled in sequence; and
the second decoder of the image domain I-net network comprises:
the decoding device comprises an eighth decoding submodule, a seventh decoding submodule, a sixth decoding submodule and a fifth decoding submodule which are cascaded, wherein the eighth decoding submodule, the seventh decoding submodule, the sixth decoding submodule and the fifth decoding submodule respectively comprise two convolution layers and a maximum pooling layer;
in the eighth decoding submodule, the seventh decoding submodule, the sixth decoding submodule and the fifth decoding submodule, the number of convolution kernels of the two convolution layers respectively contained in the eighth decoding submodule, the seventh decoding submodule, the sixth decoding submodule and the fifth decoding submodule are the same, and the number of convolution kernels in the cascaded eighth decoding submodule, the seventh decoding submodule, the sixth decoding submodule and the fifth decoding submodule is reduced by half in sequence;
wherein the sub-modules in the second encoder and the second decoder of the image domain I-net network having the same number of convolution kernels are same-layer sub-modules.
4. The method of claim 3, wherein the data domain D-net network and the image domain I-net network further comprise:
establishing a hopping connection between the first encoder and the same-tier sub-modules of the first decoder of the data domain D-net network, a hopping connection established between a first-tier neural network and a last-tier neural network of the data domain D-net network;
establishing a hopping connection between the second encoder of the image domain I-net network and the same layer sub-module of the second decoder; a hopping connection established between a first layer neural network and a last layer neural network of the image domain I-net network; and
adding a normalization layer and a nonlinear activation function layer to each convolution layer of the first encoder and the second encoder.
5. The method of claim 4, wherein the data domain D-net network and the image domain I-net network are connected by a backprojection layer, represented in matrix form as follows:
P=BY;
the P is a one-dimensional column vector containing image information to be reconstructed, and is rearranged to form an image matrix to be reconstructed and input to the image domain I-net network; b is a back projection matrix, through which the data domain D-net network and the image domain I-net network are connected; y is a matrix of back-projected items, generated based on the data domain D-net network.
6. The method of claim 5, wherein the training sample data set comprises:
the space undersampling photoacoustic signal and the space full-sampling photoacoustic signal, and a photoacoustic image corresponding to the full-sampling photoacoustic signal obtained by the space full-sampling photoacoustic signal through an image reconstruction algorithm; the space undersampled photoacoustic signal and the space fully-sampled photoacoustic signal form a first data set pair, and the space undersampled photoacoustic signal and the photoacoustic image corresponding to the space fully-sampled photoacoustic signal form a second data set pair.
7. The method of claim 6, wherein the spatially undersampled photoacoustic signal and the spatially fully sampled photoacoustic signal are obtained by filtering the photoacoustic signals collected by the detector apparatus with different channel numbers, and are represented in a matrix form as follows:
Y0=FX;
wherein F is a filter matrix; X is a photoacoustic signal matrix with different channel numbers acquired by a detector device; and Y0 is the spatial undersampled photoacoustic signal matrix or the spatial fully sampled photoacoustic signal matrix.
8. The method of claim 7, wherein the training comprises:
training the data domain D-net network independently to obtain a first optimal network structure of the data domain D-net network;
training the image domain I-net network, after obtaining a first optimal network structure of the data domain D-net network and fixing a first weight parameter of the data domain D-net network, performing end-to-end training based on the DI-net network model to obtain a second optimal network structure of the image domain I-net network and fixing a second weight parameter of the image domain I-net network;
recovering the trainability of the first weight parameter of the data domain D-net network, and performing end-to-end fine tuning training on the DI-net network model to obtain the DI-net network model;
wherein the fixing of the first and second weight parameters is monitored based on a squared error loss function.
9. The method of claim 8, wherein the training process is optimized using an Adam algorithm; in the training process, a learning rate decay strategy is used, the learning rate decay strategy comprising controlling the decay of the learning rate based on the decrease of the loss value on the validation set.
10. A photoacoustic image reconstruction method comprising:
inputting the sparse visual angle photoacoustic signal into a DI-net network model to obtain a reconstructed image;
wherein the DI-net network model is trained using the method according to one of claims 1-9.
CN202111683421.5A 2021-12-31 2021-12-31 Training method based on double-domain neural network and photoacoustic image reconstruction method Pending CN114332283A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111683421.5A CN114332283A (en) 2021-12-31 2021-12-31 Training method based on double-domain neural network and photoacoustic image reconstruction method


Publications (1)

Publication Number Publication Date
CN114332283A true CN114332283A (en) 2022-04-12

Family

ID=81022156


Country Status (1)

Country Link
CN (1) CN114332283A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114548191A (en) * 2022-04-27 2022-05-27 之江实验室 Photoacoustic imaging annular sparse array signal prediction method and device
CN115619889A (en) * 2022-11-09 2023-01-17 哈尔滨工业大学(威海) Multi-feature fusion photoacoustic image reconstruction method suitable for annular array
CN118154656A (en) * 2024-05-09 2024-06-07 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Ultra-fast photoacoustic image reconstruction method based on filtering back projection



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination