CN111127532B - Medical image deformation registration method and system based on deep learning characteristic optical flow - Google Patents
Medical image deformation registration method and system based on deep learning characteristic optical flow
- Publication number
- CN111127532B (grant) · CN201911413634.9A (application)
- Authority
- CN
- China
- Prior art keywords
- optical flow
- image
- registration
- deep learning
- network
- Prior art date
- Legal status (assumed, not a legal conclusion)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Abstract
The invention belongs to the technical field of image processing, and discloses a medical image deformation registration method and system based on deep-learning feature optical flow. The method extracts strong feature descriptors, the Siamese features, which improve the matching precision of pixel points. Through the mapping learned by the contrastive loss function, points of the same class that lie far apart in the high-dimensional space are drawn closer together in the feature space, while points of different classes that lie close together are pushed farther apart. The Siamese features extracted by a Siamese convolutional neural network trained with the contrastive loss function are more discriminative and more stable than SIFT features and generic deep-learning features, making them better suited to difference computation and yielding more accurate results.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a medical image deformation registration method and system based on deep-learning feature optical flow.
Background
Currently, the closest prior art: the optical flow method (Optical Flow) performs registration by computing an optical flow field between a reference image and an image to be registered, and is an effective deformation registration approach. Existing optical flow methods solve the flow field either with the Demons algorithm, based on gray-value differences between pixels of the two images, or with the SIFT Flow method, based on SIFT feature differences. Neither can handle large deformations or faithfully describe the deformation field, so neither can produce an accurate registration result.
The Demons algorithm in the prior art computes the optical flow from pixel gray-value differences between images. Gray values alone are not sufficient to estimate the optical flow field accurately; moreover, because every pixel of the floating image can move freely, all pixels sharing a particular gray value in the floating image may be mapped to the same pixel of the reference image, producing registration errors.
The SIFT Flow method solves the optical flow field with a dual-layer belief propagation (Dual-Layer Belief Propagation) algorithm and a coarse-to-fine feature-matching strategy. Because SIFT features are built from image gradient information, mismatches occur easily where the image gradient varies little. Moreover, SIFT features cannot represent the higher-level abstract features of the image, so a more accurate optical flow field estimate is out of reach.
In summary, the problems of the prior art are as follows: (1) Existing optical flow methods solve the optical flow field from pixel gray values or SIFT feature differences; they cannot handle large deformations or faithfully describe the deformation field, and cannot obtain an accurate registration result. To address this, the invention uses a deep convolutional neural network to extract more accurate, more discriminative, and more stable features, leading to a more accurate registration result.
(2) The Demons algorithm cannot accurately estimate the optical flow field from gray values alone, and one-to-many mappings easily arise between pixels of the floating image and the reference image, causing erroneous registration. To address this, the mapping is learned through a contrastive loss function, yielding a more accurate registration result.
(3) SIFT Flow solves the optical flow field with a dual-layer belief propagation (Dual-Layer Belief Propagation) algorithm and a coarse-to-fine feature-matching strategy. Where the image gradient varies little, SIFT features easily cause mismatches, and they cannot represent higher-level abstract image features, preventing more accurate flow estimation. To address this, the optical flow field is solved from the Siamese features, which are more stable, more discriminative, and well suited to pixel-wise difference computation; experimental results show the resulting optical flow field is more accurate and robust.
The difficulty of solving these technical problems is as follows: to overcome the limits of optical flow methods based on gray values or SIFT feature differences, which cannot handle large deformations, faithfully describe the deformation field, or obtain an accurate registration result, a Siamese convolutional neural network is designed to extract deep-learning features of the image block around each pixel. The Siamese features are more discriminative than SIFT features and can resolve the mismatches that easily arise in medical images with inconspicuous changes.
To address the erroneous optical flow fields that the Demons algorithm produces through one-to-many mappings, the method computes a more accurate optical flow field from Siamese features that are extracted by a Siamese convolutional neural network trained with the contrastive loss function and are better suited to difference computation.
The significance of solving these technical problems is as follows: where the Demons algorithm cannot accurately estimate the optical flow field from gray values alone, and one-to-many pixel mappings between the floating image and the reference image make accurate registration hard to obtain, the invention provides highly discriminative and stable Siamese features from which the optical flow field can be computed accurately.
The Siamese Flow algorithm, being based on deep learning, characterizes the differences between features more precisely and also overcomes the inability of the SIFT Flow method to represent higher-level abstract image features, improving the matching precision of pixel points and yielding a more accurate deformation field.
The algorithm resolves the mismatching problem of gradient-based SIFT features and can handle medical images with weak contrast and complex structure.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a medical image deformation registration method and system based on deep learning feature optical flow.
The invention is realized as follows: a medical image deformation registration method based on deep-learning feature optical flow comprises the following steps:
Step one: densely extract the image blocks around each pixel (feature extraction).
Step two: extract the deep-learning features of the pixels with the trained Siamese network (optical flow field estimation).
Step three: solve the optical flow field based on those features (transformation interpolation).
Further, in step one, the Siamese features of the image block around each pixel are extracted by a Siamese convolutional neural network. The network takes 8×8 image blocks as input and outputs 128-dimensional Siamese feature vectors; all convolutional layers use the ReLU activation function, and Dropout is added after the fully connected layer to prevent overfitting.
Further, in step two, training the Siamese network involves the following network settings:
1) The loss function of the network is the contrastive loss, defined as:
L = \frac{1}{2N} \sum_{i=1}^{N} \left[ y_i d_i^2 + (1 - y_i)\, \max(\mathrm{margin} - d_i,\, 0)^2 \right]
where N is the number of input sample pairs; d_i = \|x_{i1} - x_{i2}\|_2 is the Euclidean distance of each pair; margin is a constant set to 1; and y_i is the label of the input pair, with positive and negative samples denoted 1 and 0 respectively. For same-class pairs the optimization drives d_i toward zero; for different-class pairs it drives \max(\mathrm{margin} - d_i, 0)^2 toward zero, i.e. pushes d_i beyond the margin.
2) An energy loss function based on the Siamese-feature optical flow field is proposed; its data term computes the sum of absolute errors between the Siamese features of point p in the reference image and point q in the image to be registered.
Further, training the Siamese network in step two includes:
Network framework: Keras, with Epochs set to 50 and Batch Size set to 512.
Network optimization algorithm: the RMSprop (Root Mean Square Propagation) optimizer, with the initial learning rate set to 0.001 and the momentum factor set to 0.9.
Training environment: training is performed on a GPU.
Further, in step three, the optical flow field is solved with a dual-layer belief propagation algorithm and a coarse-to-fine feature-matching strategy.
Further, after step three the images are registered: once the optical flow field is obtained, the image to be registered is transformed and interpolated with a cubic spline interpolation algorithm to obtain the registered image.
It is another object of the present invention to provide a computer program product stored on a computer readable medium, comprising a computer readable program for providing a user input interface to implement the method for deformable registration of medical images based on deep-learned feature optical flow when executed on an electronic device.
Another object of the present invention is to provide a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to execute the method for deformable registration of medical images based on deep-learning feature optical flow.
Another object of the present invention is to provide a medical image deformation registration system based on deep-learning feature optical flow that implements the above method, the system comprising:
the feature extraction module, used for densely extracting the image blocks around each pixel;
the optical flow field estimation module, used for extracting the deep-learning features of the pixels with the trained Siamese network;
and the transformation interpolation module, used for solving the optical flow field based on those features.
Further, the medical image deformation registration system based on deep-learning feature optical flow further comprises:
and the image registration module is used for performing transformation interpolation on the image to be registered by adopting a cubic spline interpolation algorithm to obtain a registration image after solving the optical flow field.
In summary, the advantages and positive effects of the invention are: the method densely extracts the image blocks around each pixel (feature extraction), extracts the deep-learning features of the pixels with the trained Siamese network (optical flow field estimation), and finally solves the optical flow field from those features (transformation interpolation). Thanks to the strong representational power of the deep convolutional neural network, the optical flow field obtained by the algorithm is closer to the true deformation field. Its registration performance exceeds that of the Demons algorithm, the SIFT Flow method, and the Elastix software, being accurate, robust, and able to handle large deformations.
The method targets the Demons problem in which every pixel of the image to be registered can move freely, so that all pixels with a particular gray value are mapped onto the same pixel of the reference image and produce an erroneous registration result. Strong feature descriptors, the Siamese features, raise the matching precision of pixel points and yield a more accurate and robust deformation field. Through the mapping learned by the contrastive loss function, points of the same class that lie far apart in the high-dimensional space are drawn closer together in the feature space, while points of different classes that lie close together are pushed farther apart. The Siamese features extracted by the Siamese convolutional neural network trained with the contrastive loss function are more discriminative and more stable than SIFT features and generic deep-learning features, better suited to difference computation, and give more accurate results.
Drawings
Fig. 1 is a flowchart of a medical image deformation registration method based on deep learning feature optical flow according to an embodiment of the present invention.
Fig. 2 is a technical roadmap of the medical image deformation registration method based on deep-learning feature optical flow according to an embodiment of the present invention.
Fig. 3 is a network structure diagram provided in the embodiment of the present invention.
FIG. 4 is a diagram of an example of a medical image deformation registration system based on deep learning feature optical flow according to an embodiment of the present invention.
FIG. 5 is a schematic diagram of a system for deformable registration of medical images based on deep learning feature optical flow according to an embodiment of the present invention.
In the figure: 1. a feature extraction module; 2. an optical flow field estimation module; 3. a transformation interpolation module; 4. an image registration module.
Fig. 6 is a BrainWeb difference diagram provided by an embodiment of the present invention (deformation parameter λ = 150). (a) reference image; (b) image to be registered; (c) difference map of the reference image and the image to be registered; (d) Demons registration result; (e) SIFT Flow registration result; (f) Elastix registration result; (g) Siamese Flow registration result; (h) Demons registration result difference map; (i) SIFT Flow registration result difference map; (j) Elastix registration result difference map; (k) Siamese Flow registration result difference map.
Fig. 7 is an ACDC difference diagram provided in an embodiment of the present invention (deformation parameter λ = 150). (a) reference image; (b) image to be registered; (c) difference map of the reference image and the image to be registered; (d) Demons registration result; (e) SIFT Flow registration result; (f) Elastix registration result; (g) Siamese Flow registration result; (h) Demons registration result difference map; (i) SIFT Flow registration result difference map; (j) Elastix registration result difference map; (k) Siamese Flow registration result difference map.
Fig. 8 is an EMPIRE10 difference diagram provided by an embodiment of the present invention (deformation parameter λ = 250). (a) reference image; (b) image to be registered; (c) difference map of the reference image and the image to be registered; (d) Demons registration result; (e) SIFT Flow registration result; (f) Elastix registration result; (g) Siamese Flow registration result; (h) Demons registration result difference map; (i) SIFT Flow registration result difference map; (j) Elastix registration result difference map; (k) Siamese Flow registration result difference map.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Existing optical flow methods solve the optical flow field from pixel gray values or SIFT feature differences; they cannot handle large deformations or faithfully describe the deformation field, and cannot obtain an accurate registration result. The Demons algorithm cannot accurately estimate the optical flow field from gray values alone, and one-to-many mappings easily arise between pixels of the floating image and the reference image, making an accurate registration result hard to obtain. Moreover, SIFT features cannot represent the higher-level abstract features of the image, so more accurate optical flow field estimation is not possible.
Aiming at the problems in the prior art, the invention provides a medical image deformation registration method and system based on deep learning feature optical flow, and the invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, a method for registering a deformation of a medical image based on deep learning feature optical flow according to an embodiment of the present invention includes:
and S101, densely extracting image blocks where pixels are located (feature extraction).
And S102, extracting the deep learning characteristics (optical flow field estimation) of the pixels by using the trained Simese network.
S103, an optical flow field is solved based on the feature (transform interpolation).
In step S101, Siamese feature extraction is performed. A Siamese convolutional neural network is designed to extract the Siamese features of the image block around each pixel; the network structure is shown in fig. 3. The network takes 8×8 image blocks as input and outputs 128-dimensional Siamese feature vectors; all convolutional layers use the ReLU activation function, and a Dropout layer is added after fully connected layer 1 (Fc1) to prevent overfitting.
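As a rough sketch, one branch of such a Siamese network might look as follows in Keras. The number of convolutional layers, the filter counts, the Fc1 width, and the Dropout rate are assumptions; the description fixes only the 8×8 input patch, the 128-dimensional output, ReLU on the convolutional layers, and Dropout after the fully connected layer.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_branch():
    # 8x8 grayscale image patch in, 128-d Siamese feature vector out.
    inp = keras.Input(shape=(8, 8, 1))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)  # ReLU on all convs
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)    # fully connected layer 1 (Fc1)
    x = layers.Dropout(0.5)(x)                     # Dropout after Fc1 against overfitting
    out = layers.Dense(128)(x)                     # 128-d Siamese feature
    return keras.Model(inp, out)

# Both Siamese branches share weights: the same model is applied
# to the two patches of each input pair.
branch = build_branch()
```

Weight sharing is what makes the two branches "Siamese": a single model embeds both patches, so their features live in one comparable space.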
In step S102, the deep-learning features of the pixels are extracted with the trained Siamese network (optical flow field estimation); the Siamese network is configured as follows:
1) The loss function of the network is the contrastive loss (Contrastive Loss), defined as shown in equation (1):
L = \frac{1}{2N} \sum_{i=1}^{N} \left[ y_i d_i^2 + (1 - y_i)\, \max(\mathrm{margin} - d_i,\, 0)^2 \right]    (1)
where N is the number of input sample pairs; d_i = \|x_{i1} - x_{i2}\|_2 is the Euclidean distance of each pair; margin is a constant set to 1; and y_i is the label of the input pair, with positive and negative samples denoted 1 and 0 respectively. For same-class pairs the optimization drives d_i toward zero; for different-class pairs it drives \max(\mathrm{margin} - d_i, 0)^2 toward zero, i.e. pushes d_i beyond the margin.
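The contrastive loss can be restated directly in NumPy; this is a plain transcription of the formula for illustration, not the authors' training code.

```python
import numpy as np

def contrastive_loss(x1, x2, y, margin=1.0):
    """x1, x2: (N, D) feature pairs; y: (N,) labels, 1 = same class, 0 = different."""
    d = np.linalg.norm(x1 - x2, axis=1)               # d_i = ||x_i1 - x_i2||_2
    same = y * d ** 2                                  # same class: minimize d_i
    diff = (1 - y) * np.maximum(margin - d, 0) ** 2    # different: push d_i past margin
    return np.mean(same + diff) / 2.0
```

A same-class pair at distance 0 and a different-class pair beyond the margin both contribute zero loss, which is exactly the mapping behavior described above.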
2) An energy loss function based on the Siamese-feature optical flow field is proposed.
Equation (2) is the data term; it computes the sum of absolute errors between the Siamese features of point p in the reference image and point q in the image to be registered. Equations (3) and (4) are regularization terms.
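A minimal sketch of such a data term, assuming dense per-pixel Siamese feature maps and an integer-valued flow; the clamping of displaced coordinates at the image border is an assumption made for the sketch.

```python
import numpy as np

def data_term(feat_ref, feat_mov, flow):
    """Sum of absolute Siamese-feature errors between each point p in the
    reference image and its flow-displaced point q = p + w(p) in the image
    to be registered. feat_*: (H, W, C) feature maps; flow: (2, H, W) integer."""
    h, w, _ = feat_ref.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    qy = np.clip(yy + flow[0], 0, h - 1)   # clamp displaced rows to the image
    qx = np.clip(xx + flow[1], 0, w - 1)   # clamp displaced columns
    return np.abs(feat_ref - feat_mov[qy, qx]).sum()
```

With identical feature maps and zero flow, the data term vanishes, as the energy formulation requires.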
The training of the Siamese network in step S102 includes:
network framework: using Keras (Tensorflow backend), Epochs was set to 50 and Batch Size was set to 512.
And (3) network optimization algorithm: using the RMSprop optimization algorithm, the initial learning rate was set to 0.001 and the momentum factor was set to 0.9.
Training environment: training is performed on a GPU.
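Under those settings, compiling and fitting the network in Keras could look like this sketch; `model`, `contrastive_loss_fn`, `pairs`, and `labels` are placeholders, and the pair-generation code is omitted.

```python
from tensorflow.keras.optimizers import RMSprop

# Initial learning rate 0.001 and momentum factor 0.9, per the description.
optimizer = RMSprop(learning_rate=0.001, momentum=0.9)

# model.compile(optimizer=optimizer, loss=contrastive_loss_fn)
# model.fit(pairs, labels, epochs=50, batch_size=512)
```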
In step S103 the optical flow field is solved, using a dual-layer belief propagation algorithm and a coarse-to-fine feature-matching strategy.
After step S103 the images are registered: once the optical flow field is obtained, the image to be registered is transformed and interpolated with a cubic spline interpolation algorithm to obtain the registered image.
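A sketch of this final transformation-interpolation step with SciPy, where cubic spline interpolation corresponds to `order=3` in `scipy.ndimage.map_coordinates`; the border handling via `mode="nearest"` is an assumption of the sketch.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving, flow):
    """Resample `moving` at p + w(p) with cubic spline interpolation.
    moving: (H, W) image; flow: (2, H, W) with flow[0] = dy, flow[1] = dx."""
    h, w = moving.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([yy + flow[0], xx + flow[1]])   # sampling coordinates
    return map_coordinates(moving, coords, order=3, mode="nearest")
```

With a zero flow field the warp reproduces the input image, which is a useful sanity check before registering real data.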
Fig. 2 is a technical roadmap of the medical image deformation registration method based on deep-learning feature optical flow according to an embodiment of the present invention.
As shown in fig. 4-5, the medical image deformation registration system based on deep learning feature optical flow provided by the present invention includes:
and the feature extraction module 1 is used for extracting the image blocks where the pixels are densely located.
And the optical flow field estimation module 2 is used for extracting the deep learning characteristics of the pixels by using the trained Siemese network.
And the transformation interpolation module 3 is used for solving the optical flow field based on the characteristics.
And the image registration module 4 is used for performing transformation interpolation on the image to be registered by adopting a cubic spline interpolation algorithm to obtain a registration image after the optical flow field is obtained.
The invention is further described below in connection with specific experiments.
I. Experimental parameter settings
The invention is compared with the Demons algorithm, the SIFT Flow method, and the open-source software Elastix. The main parameters of the comparison algorithms are set as follows: in the Demons algorithm, the number of histogram levels is set to 1024 and the number of iterations to 50; in SIFT Flow, the regularization coefficient η is set to 0.005, α to 2, and the number of iterations to 200; in Elastix, the transformation type is set to "BSPLINE" and the number of iterations to 500.
II. Experimental evaluation metrics
A. RMSD (Root Mean Squared Difference). The RMSD reflects the difference between two images and is calculated as:
\mathrm{RMSD}(I_1, I_2) = \sqrt{ \frac{1}{|\Omega_I|} \sum_{x_i \in \Omega_I} \left( I_1(x_i) - I_2(x_i) \right)^2 }
where I_1 and I_2 are the two images being compared, I_1(x_i) and I_2(x_i) are the gray values at the same position x_i, \Omega_I is the common image domain of I_1 and I_2, and |\Omega_I| is the number of pixels in I_1 (equal to the number in I_2). The similarity of the two images is measured by the root mean square error of their gray values: the smaller the RMSD between the fixed image and the registered image, the closer the gray values at corresponding positions, the more similar the images as a whole, and the better the registration.
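In NumPy the metric reduces to a few lines; this is a direct restatement of the formula, not the authors' evaluation code.

```python
import numpy as np

def rmsd(i1, i2):
    """Root mean squared difference of gray values over the shared image domain."""
    d = i1.astype(float) - i2.astype(float)
    return float(np.sqrt(np.mean(d ** 2)))
```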
B. DICE coefficient. The DICE coefficient evaluates the registration accuracy of a region of interest (ROI) by measuring the overlap of the ROIs:
\mathrm{DICE}(X, Y) = \frac{2\,|X \cap Y|}{|X| + |Y|}
where X and Y are the two ROI regions and X \cap Y is their overlapping part. The larger the DICE value between the ROIs of the fixed image and the registered image, the higher the overlap and the better the registration.
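The same overlap measure for binary ROI masks, again as a plain transcription of the formula:

```python
import numpy as np

def dice(x, y):
    """DICE = 2|X ∩ Y| / (|X| + |Y|) for binary ROI masks."""
    x, y = x.astype(bool), y.astype(bool)
    return 2.0 * np.logical_and(x, y).sum() / (x.sum() + y.sum())
```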
C. Difference map. The difference map is one of the most common techniques for visually assessing registration quality. The absolute value of the difference between two images is mapped into gray space: where the absolute difference is small the map is dark, and where it is large the map is bright.
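A minimal sketch of that mapping; normalizing by the maximum difference to fill the 8-bit range is an assumption of the sketch, not a detail given in the text.

```python
import numpy as np

def difference_map(i1, i2):
    """Map |i1 - i2| into an 8-bit gray image: dark where the images agree,
    bright where they differ."""
    d = np.abs(i1.astype(float) - i2.astype(float))
    if d.max() == 0:
        return np.zeros(d.shape, dtype=np.uint8)   # identical images: all black
    return (255 * d / d.max()).astype(np.uint8)
```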
III. Experimental results
A. Registration results
Registration-accuracy comparison experiments were carried out between the proposed method and the three comparison methods on the BrainWeb and EMPIRE10 data sets under different deformation strengths λ (50, 100, 150, and 200 in turn). From BrainWeb and EMPIRE10, 20 pairs of reference images and images to be registered (with the same deformation strength) were randomly drawn, and the experimental results were averaged. Table 1 gives the RMSD averages between the reference image and the image to be registered (deformed image) and between the reference image and the registered images obtained by Demons, SIFT Flow, Elastix, and the method of the present invention, Siamese Flow. Table 2 gives the DICE averages between the mask of the reference image and the masks of the image to be registered and of the corresponding registered images.
TABLE 1 RMSD mean before and after registration of BrainWeb and EMPIRE10

| Data set | Before registration | Demons | SIFT Flow | Elastix | Siamese Flow |
| --- | --- | --- | --- | --- | --- |
| BrainWeb (λ=50) | 13.4995 | 1.6058 | 1.8556 | 1.5575 | 1.4993 |
| BrainWeb (λ=100) | 24.0700 | 5.5993 | 6.2845 | 4.6100 | 4.1537 |
| BrainWeb (λ=150) | 33.1581 | 11.7212 | 8.4916 | 7.3635 | 5.8927 |
| BrainWeb (λ=200) | 35.8655 | 15.6968 | 12.2182 | 10.5017 | 6.8428 |
| EMPIRE10 (λ=50) | 11.0737 | 6.8969 | 5.1111 | 5.9442 | 2.7277 |
| EMPIRE10 (λ=100) | 15.9267 | 8.9559 | 6.0131 | 7.3422 | 3.3316 |
| EMPIRE10 (λ=150) | 21.2234 | 11.7477 | 7.3080 | 8.5241 | 4.1567 |
| EMPIRE10 (λ=200) | 22.4466 | 14.1875 | 8.4884 | 9.6205 | 5.0380 |
Table 2 DICE mean before and after EMPIRE10 registration

| Data set | Before registration | Demons | SIFT Flow | Elastix | Siamese Flow |
| --- | --- | --- | --- | --- | --- |
| EMPIRE10 (λ=50) | 0.9822 | 0.9970 | 0.9953 | 0.9993 | 0.9997 |
| EMPIRE10 (λ=100) | 0.9643 | 0.9961 | 0.9938 | 0.9989 | 0.9991 |
| EMPIRE10 (λ=150) | 0.9297 | 0.9853 | 0.9912 | 0.9962 | 0.9967 |
| EMPIRE10 (λ=200) | 0.9199 | 0.9788 | 0.9884 | 0.9940 | 0.9961 |
Fig. 6 shows the BrainWeb difference maps (deformation parameter λ = 150).
As can be seen from the data in tables 1 and 2, Siamese Flow achieves the lowest RMSD value and the highest DICE value compared with Demons, SIFT Flow, and Elastix, i.e. the highest registration accuracy. Meanwhile, fig. 6 shows the difference maps of Demons, SIFT Flow, Elastix, and Siamese Flow for BrainWeb image registration: compared with fig. 6(a)-(f), the difference map in fig. 6(g) is the darkest in color, indicating that the difference is small and the registration is best. The difference maps further indicate visually that Siamese Flow registration is the most accurate.
B. Generalization ability
To evaluate the generalization ability of the algorithm, the experiment uses the ACDC and NPC data sets, which did not take part in network training, and compares the proposed method with the other three methods under different deformation strengths λ (50, 100, 150, and 200 in turn). From ACDC and NPC, 20 pairs of reference images and images to be registered (with the same deformation strength) were randomly drawn, and the experimental results were averaged. Table 3 gives the RMSD averages between the reference image and the image to be registered (deformed image) and the registered images obtained by Demons, SIFT Flow, Elastix, and Siamese Flow. Table 4 gives the DICE averages between the mask of the reference image and the masks of the image to be registered and of the corresponding registered images.
TABLE 3 RMSD mean before and after ACDC and NPC registration
Table 4 DICE mean before and after ACDC registration
| Data set | Before registration | Demons | SIFT Flow | Elastix | Siamese Flow |
| --- | --- | --- | --- | --- | --- |
| ACDC (λ=50) | 0.9889 | 0.9986 | 0.9966 | 0.9991 | 0.9995 |
| ACDC (λ=100) | 0.9682 | 0.9956 | 0.9929 | 0.9979 | 0.9987 |
| ACDC (λ=150) | 0.9516 | 0.9822 | 0.9908 | 0.9946 | 0.9964 |
| ACDC (λ=200) | 0.9364 | 0.9752 | 0.9871 | 0.9921 | 0.9935 |
Fig. 7 shows the ACDC difference maps (deformation parameter λ = 150).
As can be seen from the data in tables 3 and 4, the Siamese Flow of the present invention still achieves the lowest RMSD value and the highest DICE value on data sets that did not take part in training, with registration accuracy higher than Demons, SIFT Flow, and Elastix. Meanwhile, fig. 7 shows the difference maps of Demons, SIFT Flow, Elastix, and Siamese Flow for ACDC image registration: compared with fig. 7(a)-(f), the difference map in fig. 7(g) is the darkest in color, indicating that the difference is small and the registration is best. The difference maps show that Siamese Flow is robust, can perform deformation registration on other similar medical images, and achieves higher registration accuracy than the other methods.
C. Large deformation
Because the lungs and heart often undergo large deformations due to physiological activity (respiration, heartbeat, etc.), the experiment uses a large deformation parameter λ = 250 to evaluate the ability of Siamese Flow to handle large deformations. Comparative experiments between the method of the invention and the other three methods are carried out on the EMPIRE10 and ACDC data sets. From EMPIRE10 and ACDC, 20 pairs of reference images and images to be registered (with the same deformation strength) are randomly taken, and the experimental results are averaged. Table 5 gives the mean RMSD between the reference image and the registered images obtained by Demons, SIFT Flow, Elastix and Siamese Flow from the image to be registered (deformed image). Table 6 gives the mean DICE between the reference image masks and the masks corresponding to the registered images obtained by Demons, SIFT Flow, Elastix and Siamese Flow.
TABLE 5 RMSD mean before and after EMPIRE10 and ACDC registration
TABLE 6 DICE mean before and after EMPIRE10 and ACDC registration
Fig. 8 shows the EMPIRE10 difference maps (deformation parameter λ = 250). (a) reference image; (b) image to be registered; (c) difference map between the reference image and the image to be registered; (d) Demons registration result; (e) SIFT Flow registration result; (f) Elastix registration result; (g) Siamese Flow registration result; (h) Demons registration difference map; (i) SIFT Flow registration difference map; (j) Elastix registration difference map; (k) Siamese Flow registration difference map.
As the data in Tables 5 and 6 show, even at the large deformation level λ = 250, Siamese Flow still achieves the lowest RMSD and the highest DICE, giving the highest registration accuracy compared with Demons, SIFT Flow and Elastix. Meanwhile, Fig. 8 shows the difference maps of Demons, SIFT Flow, Elastix and Siamese Flow for EMPIRE10 image registration: compared with Figs. 8(h)-(j), the Siamese Flow difference map in Fig. 8(k) is the darkest, indicating that the residual difference is smallest and the registration is best. The visual effect of the difference maps shows that Siamese Flow is clearly superior to the other methods and is able to handle large deformations with higher registration accuracy. The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalents and improvements made within the spirit and principles of the present invention are intended to be included within its scope.
Claims (7)
1. A medical image deformation registration method based on deep learning characteristic optical flow is characterized by comprising the following steps:
Step one: densely extract the image blocks in which the pixels are located, and extract the Siamese feature of each block through a Siamese convolutional neural network; the input of the network is an 8×8 image block and the output is a 128-dimensional Siamese feature vector; all convolutional layers of the network use the ReLU activation function, and Dropout is added after the fully connected layer to prevent overfitting;
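One branch of the Siamese network in step one can be sketched in Keras (the framework named in claim 2). The patent fixes only the 8×8 input, the 128-dimensional output, ReLU convolutions, and Dropout after the fully connected layer; the filter counts (32, 64), the 256-unit dense layer and the dropout rate are our illustrative assumptions:

```python
import tensorflow as tf

def build_branch():
    """One branch of the Siamese network: an 8x8 image block in,
    a 128-dimensional Siamese feature vector out. Filter counts
    and dense width are illustrative guesses, not from the patent."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(8, 8, 1)),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.5),  # Dropout after the FC layer to prevent overfitting
        tf.keras.layers.Dense(128),    # 128-dimensional Siamese feature vector
    ])
```

The two branches share weights: the same `build_branch()` model is applied to both image blocks of a training pair.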
Step two: extract the deep learning features of the pixels using the trained Siamese network; during training of the Siamese network, the following settings are used:
1) the loss function of the network is:

L = (1/(2n)) Σ_{i=1}^{n} [ y_i · d_i^2 + (1 − y_i) · max(margin − d_i, 0)^2 ]

where n is the number of input sample pairs; d_i = ||x_i1 − x_i2||_2 is the Euclidean distance of each sample pair; margin is a constant set to 1; y_i is the label of the input sample pair, with positive and negative pairs denoted by 1 and 0 respectively; when a pair belongs to the same class, optimization decreases d_i, and when it belongs to different classes, optimization decreases max(margin − d_i, 0)^2;
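This contrastive loss can be sketched in NumPy (the function name is ours; the margin defaults to the constant 1 given above):

```python
import numpy as np

def contrastive_loss(f1, f2, y, margin=1.0):
    """Contrastive loss over n feature pairs: y=1 pulls same-class
    pairs together (shrinks d_i), y=0 pushes different-class pairs
    at least `margin` apart."""
    d = np.linalg.norm(f1 - f2, axis=1)              # d_i = ||x_i1 - x_i2||_2
    pos = y * d ** 2                                 # same-class term
    neg = (1 - y) * np.maximum(margin - d, 0) ** 2   # different-class term
    return np.mean(pos + neg) / 2.0                  # 1/(2n) * sum
```

In training, `f1` and `f2` would be the 128-dimensional Siamese features of the two blocks of each pair.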
2) an energy loss function based on the Siamese feature optical flow field is provided; the function consists of three parts, in which the first part is a data term and the second and third parts are regularization terms; the data term computes the sum of absolute errors between the Siamese features of a point p in the reference image and of the corresponding point q in the image to be registered;
Step three: solve the optical flow field based on the features.
2. The method for deformable registration of medical images based on deep learning feature optical flow according to claim 1, wherein the Siamese network trained in step two comprises:
network framework: Keras is used, with Epochs set to 50 and Batch Size set to 512;
network optimization algorithm: the RMSprop optimization algorithm is used, with an initial learning rate of 0.001 and a momentum factor of 0.9;
training environment: a GPU is used for training.
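The training settings of claim 2 translate directly into Keras (a sketch; the model and the pair data are assumed to be built elsewhere):

```python
import tensorflow as tf

def make_optimizer():
    """Optimizer per claim 2: RMSprop with initial learning rate 0.001
    and momentum factor 0.9."""
    return tf.keras.optimizers.RMSprop(learning_rate=0.001, momentum=0.9)

EPOCHS = 50       # claim 2: Epochs set to 50
BATCH_SIZE = 512  # claim 2: Batch Size set to 512

# A Siamese model built elsewhere would then be trained with, e.g.:
# model.compile(optimizer=make_optimizer(), loss=contrastive_loss_fn)
# model.fit(pairs, labels, epochs=EPOCHS, batch_size=BATCH_SIZE)
```

Training on a GPU, as the claim specifies, requires no code change in Keras; the framework uses an available GPU automatically.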
3. The method for deformable registration of medical images based on deep learning feature optical flow according to claim 1, wherein in step three, the optical flow field is solved using a two-layer belief propagation algorithm and a coarse-to-fine feature matching strategy.
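The coarse-to-fine strategy of claim 3 can be illustrated with a two-level sketch. Note the hedge: the patent's solver is two-layer belief propagation, which we replace here with exhaustive nearest-neighbour search over a small window purely to show the pyramid structure; function names and the (dy, dx) flow layout are ours:

```python
import numpy as np

def match_flow(f_ref, f_mov, radius, init=None):
    """Brute-force 1-NN feature matching within a search window
    (stand-in for belief propagation). Flow is stored as (dy, dx)."""
    h, w, _ = f_ref.shape
    flow = np.zeros((h, w, 2))
    for y in range(h):
        for x in range(w):
            oy, ox = (init[y, x] if init is not None else (0, 0))
            best, arg = np.inf, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = int(y + oy + dy), int(x + ox + dx)
                    if 0 <= qy < h and 0 <= qx < w:
                        cost = np.abs(f_ref[y, x] - f_mov[qy, qx]).sum()
                        if cost < best:
                            best, arg = cost, (oy + dy, ox + dx)
            flow[y, x] = arg
    return flow

def coarse_to_fine_flow(f_ref, f_mov, radius=2):
    """Two-level coarse-to-fine estimation: match on 2x-downsampled
    features, upsample the flow (displacements doubled), then refine
    at full resolution in a small window around the initialization."""
    coarse = match_flow(f_ref[::2, ::2], f_mov[::2, ::2], radius)
    up = 2.0 * np.kron(coarse, np.ones((2, 2, 1)))
    up = up[:f_ref.shape[0], :f_ref.shape[1]]
    return match_flow(f_ref, f_mov, radius=1, init=up)
```

The coarse pass keeps the search window small while still covering large displacements, which is the point of the coarse-to-fine design.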
4. The method for deformable registration of medical images based on deep learning feature optical flow according to claim 1, wherein after step three, image registration is performed: once the optical flow field is solved, the image to be registered is transformed and interpolated by a cubic spline interpolation algorithm to obtain the registered image.
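The transformation-interpolation step of claim 4 can be realized with SciPy's cubic (order-3) spline interpolation; the flow layout (x displacement in channel 0, y in channel 1) is our convention:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving, flow):
    """Warp the image to be registered with the solved optical flow
    field using cubic spline interpolation (order=3), yielding the
    registered image. Out-of-range samples are clamped to the edge."""
    h, w = moving.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    coords = [ys + flow[..., 1], xs + flow[..., 0]]
    return map_coordinates(moving, coords, order=3, mode="nearest")
```

With a zero flow field the warp is an identity, which is a convenient sanity check for the sampling convention.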
5. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the medical image deformation registration method based on deep learning feature optical flow of any one of claims 1-4.
6. A medical image deformation registration system based on deep learning feature optical flow, implementing the medical image deformation registration method based on deep learning feature optical flow of any one of claims 1-4, wherein the system comprises:
a feature extraction module for densely extracting the image blocks in which the pixels are located;
an optical flow field estimation module for extracting the deep learning features of the pixels using the trained Siamese network;
a transformation interpolation module for solving the optical flow field based on the features.
7. The medical image deformation registration system based on deep learning feature optical flow according to claim 6, further comprising:
an image registration module for, after the optical flow field is solved, performing transformation interpolation on the image to be registered by a cubic spline interpolation algorithm to obtain the registered image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911413634.9A CN111127532B (en) | 2019-12-31 | 2019-12-31 | Medical image deformation registration method and system based on deep learning characteristic optical flow |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111127532A CN111127532A (en) | 2020-05-08 |
CN111127532B true CN111127532B (en) | 2020-12-22 |
Family
ID=70506581
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911413634.9A Expired - Fee Related CN111127532B (en) | 2019-12-31 | 2019-12-31 | Medical image deformation registration method and system based on deep learning characteristic optical flow |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111127532B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113242547B (en) * | 2021-04-02 | 2022-10-04 | 浙江大学 | Method and system for filtering user behavior privacy in wireless signal based on deep learning and wireless signal receiving and transmitting device |
CN114485417B (en) * | 2022-01-07 | 2022-12-13 | 哈尔滨工业大学 | Structural vibration displacement identification method and system |
CN116363175A (en) * | 2022-12-21 | 2023-06-30 | 北京化工大学 | Polarized SAR image registration method based on attention mechanism |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101297321A (en) * | 2005-10-25 | 2008-10-29 | 布拉科成像S.P.A.公司 | Method of registering images, algorithm for carrying out the method of registering images, a program for registering images using the said algorithm and a method of treating biomedical images to reduc |
CN103700101A (en) * | 2013-12-19 | 2014-04-02 | 华东师范大学 | Non-rigid brain image registration method |
CN109409263A (en) * | 2018-10-12 | 2019-03-01 | 武汉大学 | A kind of remote sensing image city feature variation detection method based on Siamese convolutional network |
CN110390351A (en) * | 2019-06-24 | 2019-10-29 | 浙江大学 | A kind of Epileptic focus three-dimensional automatic station-keeping system based on deep learning |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10204299B2 (en) * | 2015-11-04 | 2019-02-12 | Nec Corporation | Unsupervised matching in fine-grained datasets for single-view object reconstruction |
US10467767B2 (en) * | 2016-12-23 | 2019-11-05 | International Business Machines Corporation | 3D segmentation reconstruction from 2D slices |
CN109711316B (en) * | 2018-12-21 | 2022-10-21 | 广东工业大学 | Pedestrian re-identification method, device, equipment and storage medium |
CN109801314B (en) * | 2019-01-17 | 2020-10-02 | 同济大学 | Binocular dynamic vision sensor stereo matching method based on deep learning |
CN110136175A (en) * | 2019-05-21 | 2019-08-16 | 杭州电子科技大学 | A kind of indoor typical scene matching locating method neural network based |
CN110490881A (en) * | 2019-08-19 | 2019-11-22 | 腾讯科技(深圳)有限公司 | Medical image dividing method, device, computer equipment and readable storage medium storing program for executing |
CN110619655B (en) * | 2019-08-23 | 2022-03-29 | 深圳大学 | Target tracking method and device integrating optical flow information and Simese framework |
Non-Patent Citations (3)
Title |
---|
Efficient MRF deformation model for non-rigid image matching;Alexander Shekhovtsov et al.;《Computer Vision and Image Understanding》;20080723;Vol. 112(No. 1);pp. 91-99 *
Modern Techniques and Applications for Real-Time Non-rigid Registration;Sofien Bouaziz et al.;《SA '16: SIGGRAPH ASIA 2016 Courses》;20161108;pp. 1-25 *
Robust deformable registration of medical images based on optical flow and multilevel B-spline free-form deformation;Wang Minyou et al.;《Journal of Shanghai Jiao Tong University》;20081028;Vol. 42(No. 10);pp. 1660-1664 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108492272B (en) | Cardiovascular vulnerable plaque identification method and system based on attention model and multitask neural network | |
CN111127532B (en) | Medical image deformation registration method and system based on deep learning characteristic optical flow | |
CN107610087B (en) | Tongue coating automatic segmentation method based on deep learning | |
CN109272512B (en) | Method for automatically segmenting left ventricle inner and outer membranes | |
CN106530341B (en) | Point registration algorithm for keeping local topology invariance | |
CN113361542B (en) | Local feature extraction method based on deep learning | |
CN111260701B (en) | Multi-mode retina fundus image registration method and device | |
CN107301643B (en) | Well-marked target detection method based on robust rarefaction representation Yu Laplce's regular terms | |
CN107862680B (en) | Target tracking optimization method based on correlation filter | |
CN111652317A (en) | Hyper-parameter image segmentation method based on Bayesian deep learning | |
CN103679720A (en) | Fast image registration method based on wavelet decomposition and Harris corner detection | |
CN111325750A (en) | Medical image segmentation method based on multi-scale fusion U-shaped chain neural network | |
CN104331885A (en) | Circular target detection method based on voting line clustering | |
Liu et al. | An enhanced neural network based on deep metric learning for skin lesion segmentation | |
CN113379788B (en) | Target tracking stability method based on triplet network | |
Lorette et al. | Fully unsupervised fuzzy clustering with entropy criterion | |
CN113361431B (en) | Network model and method for face shielding detection based on graph reasoning | |
CN112329662B (en) | Multi-view saliency estimation method based on unsupervised learning | |
CN111553250B (en) | Accurate facial paralysis degree evaluation method and device based on face characteristic points | |
CN111080649B (en) | Image segmentation processing method and system based on Riemann manifold space | |
CN110008902B (en) | Finger vein recognition method and system fusing basic features and deformation features | |
CN109886320B (en) | Human femoral X-ray intelligent recognition method and system | |
CN111553249B (en) | H-B grading-based accurate facial paralysis degree evaluation method and device under CV | |
CN115331021A (en) | Dynamic feature extraction and description method based on multilayer feature self-difference fusion | |
CN112183596B (en) | Linear segment matching method and system combining local grid constraint and geometric constraint |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20201222 Termination date: 20211231 |