CN110163897B - Multi-modal image registration method based on synthetic ultrasound image - Google Patents

Multi-modal image registration method based on synthetic ultrasound image

Info

Publication number
CN110163897B
Authority
CN
China
Prior art keywords
image
ultrasonic image
registration
magnetic resonance
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910335812.4A
Other languages
Chinese (zh)
Other versions
CN110163897A (en)
Inventor
Yang Feng (杨峰)
Wu Chan (武潺)
Dong Jiahui (董嘉慧)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ARIEMEDI MEDICAL SCIENCE (BEIJING) Co.,Ltd.
Original Assignee
Airui Maidi Technology Shijiazhuang Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Airui Maidi Technology Shijiazhuang Co., Ltd.
Priority to CN201910335812.4A
Publication of CN110163897A
Application granted
Publication of CN110163897B
Legal status: Active


Classifications

    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration (G Physics; G06 Computing; G06T Image data processing or generation, in general; G06T7/00 Image analysis)
    • G06T2207/10088 Magnetic resonance imaging [MRI] (G06T2207/10 Image acquisition modality; G06T2207/10072 Tomographic images)
    • G06T2207/10136 3D ultrasound image (G06T2207/10132 Ultrasound image)
    • G06T2207/20081 Training; Learning (G06T2207/20 Special algorithmic details)
    • G06T2207/20221 Image fusion; Image merging (G06T2207/20212 Image combination)
    • G06T2207/30056 Liver; Hepatic (G06T2207/30 Subject of image; G06T2207/30004 Biomedical image processing)
    • G06T2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention provides a multi-modal image registration method based on a synthesized ultrasound image. Given a magnetic resonance image and a corresponding real ultrasound image, a generative adversarial network comprising a generator and a discriminator is constructed to produce a simulated synthetic ultrasound image; the synthetic ultrasound image is registered to the real ultrasound image to obtain registration parameters, and these parameters are applied to the magnetic resonance image to complete the final registration and fusion of the magnetic resonance and ultrasound images. The method can synthesize an ultrasound image from a magnetic resonance image in real time, meeting the demands of real-time image-guided surgery; the synthesized ultrasound image closely resembles the real ultrasound image, with higher image quality and better preservation of important detail; when the magnetic resonance image contains a tumor, the simulated ultrasound image can still be synthesized accurately; and the final registration requires no complex algorithm, achieving a good result with a conventional, simple registration method.

Description

Multi-modal image registration method based on synthetic ultrasound image
Technical Field
The invention relates to the field of multi-modal image registration, and in particular to a multi-modal image registration method based on a synthesized ultrasound image.
Background
Both ultrasound and magnetic resonance images are widely used in medical diagnosis, for example to detect infarcts and tumors in the head, to detect acute and chronic changes in the liver, and to navigate various clinical procedures. Real-time imaging of the liver is essential for lesion detection and clinical treatment. Scanning with an ultrasound probe meets the real-time requirement, and because ultrasound imaging is non-invasive it causes little harm to the human body. However, ultrasound images have lower quality than magnetic resonance images, which provide more anatomical detail and therefore better assist diagnosis and treatment. Magnetic resonance images can only be acquired preoperatively and cannot be adjusted in real time to intraoperative changes such as the patient's pose, so providing both images simultaneously for treatment guidance during a clinical operation is preferable. Registering, fusing, and displaying the two images during surgery is therefore highly desirable.
Directly registering a magnetic resonance image to an ultrasound image with conventional methods is difficult because the two images differ greatly. Synthesizing an ultrasound image from the magnetic resonance image and then fusing the synthetic ultrasound with the real ultrasound is one approach in registration-fusion technology: it converts the registration of two different modalities into registration within the same modality, reducing the difficulty of registration and improving its accuracy.
In recent years many researchers have studied the simulation of ultrasound images, mainly from computed tomography and magnetic resonance images, to address the registration of different-modality images before an intervention. One automated registration algorithm simulates the ultrasound image with a physics-based ultrasound model and measures similarity with a linear-combination correlation so that images of two different modalities can be spatially aligned. However, simulating the ultrasound image takes considerable time, so the method cannot be used in a real-time surgical navigation system; it also cannot accurately simulate tumor regions, and the missing tumor makes registration unstable before and after resection. Registration-fusion based on ultrasound simulation has become a research hotspot and achieved certain results, but two shortcomings remain: simulating a three-dimensional ultrasound image from a three-dimensional magnetic resonance image is too slow for a real-time image-guided surgical navigation system; and when the magnetic resonance image contains a tumor, the ultrasound appearance of the tumor at the corresponding position cannot be simulated correctly.
We therefore propose an ultrasound image synthesis technique based on deep learning and apply it to multi-modal image registration. A liver multi-modal image registration technique based on synthesizing an ultrasound image from a magnetic resonance image must satisfy two conditions: (1) the magnetic resonance image is synthesized into an ultrasound image in real time; and (2) an ultrasound image containing the tumor can be synthesized.
In view of the above, it is a technical problem to be solved in the art to provide a new multi-modal image registration method based on a synthesized ultrasound image, which overcomes the above drawbacks in the prior art.
Disclosure of Invention
The present invention is directed to overcoming the above-mentioned drawbacks of the prior art and providing a multi-modal image registration method based on a synthesized ultrasound image.
The object of the invention can be achieved by the following technical measures:
the invention provides a multi-modal image registration method based on a synthesized ultrasound image, comprising the following steps:
S1, acquiring a plurality of three-dimensional magnetic resonance images of the same part and the real ultrasound image corresponding to each three-dimensional magnetic resonance image as training samples;
S2, constructing a generative adversarial network comprising a generator and a discriminator;
S3, inputting the acquired three-dimensional magnetic resonance images into the generator to obtain an output result, inputting the generator's output together with the corresponding real ultrasound images into the discriminator to train the generative adversarial network, and using the generator of the trained network to generate the synthetic ultrasound image corresponding to a three-dimensional magnetic resonance image;
S4, registering and fusing the synthetic ultrasound image with the real ultrasound image corresponding to the three-dimensional magnetic resonance image to obtain registration parameters, and registering the three-dimensional magnetic resonance image to the real ultrasound image according to those parameters.
Further, in step S3, "training the generative adversarial network" comprises:
obtaining the generator's L1 loss function from the generator's output and the corresponding real ultrasound image;
based on the least-squares loss functions of the generator and the discriminator in the generative adversarial network, obtaining the generator's total loss function from the discriminator's output and the L1 loss function with respect to the real ultrasound image, and obtaining the discriminator's total loss function from the discriminator's output;
and updating the parameters in the discriminator's and the generator's network structures according to their respective total loss functions until the generative adversarial network converges.
Further, in step S3, "training the generative adversarial network" further comprises:
setting different learning rates for the generator and the discriminator.
Further, the discriminator comprises a local discriminator and a global discriminator, and the local discriminator comprises a first local discriminator and a second local discriminator.
Further, the L1 loss function is defined as

$$L_{L1}(G) = \mathbb{E}_{I_{MR}\sim p(I_{MR}),\, I_{US}\sim p(I_{US})}\big[\, \| I_{US} - G(I_{MR}) \|_{1} \,\big]$$

wherein $I_{MR}$ represents a magnetic resonance image, $G(I_{MR})$ represents the synthetic ultrasound image, $I_{US}$ is the input real ultrasound image, $p(I_{US})$ is the real ultrasound data distribution, and $p(I_{MR})$ is the magnetic resonance data distribution;

the least-squares loss function of the generator is

$$\min_{G} V_{LSGAN}(G) = \frac{1}{2}\,\mathbb{E}_{I_{MR}\sim p(I_{MR})}\big[ (D(G(I_{MR})) - c)^{2} \big]$$

the least-squares loss function of the discriminator is

$$\min_{D} V_{LSGAN}(D) = \frac{1}{2}\,\mathbb{E}_{I_{US}\sim p(I_{US})}\big[ (D(I_{US}) - b)^{2} \big] + \frac{1}{2}\,\mathbb{E}_{I_{MR}\sim p(I_{MR})}\big[ (D(G(I_{MR})) - a)^{2} \big]$$

wherein a and b are the labels of the generated data and the real data respectively, and c is the label the generator wants the discriminator to assign to the generated data;

setting a = 0 and b = c = 1 and substituting into the above formulas, the overall loss function of the generator is obtained as:

$$\min L(G) = \frac{1}{2}\,\mathbb{E}_{I_{MR}\sim p(I_{MR})}\big[ (D(G(I_{MR})) - 1)^{2} \big] + L_{L1}(G)$$

and the overall loss function of the discriminator is

$$\min L(D) = \frac{1}{2}\,\mathbb{E}_{I_{US}\sim p(I_{US})}\big[ (D(I_{US}) - 1)^{2} \big] + \frac{1}{2}\,\mathbb{E}_{I_{MR}\sim p(I_{MR})}\big[ D(G(I_{MR}))^{2} \big]$$
Further, the step S4 comprises:
setting a pyramid into multiple layers by a pyramid algorithm, each layer corresponding to one scale, and sampling the synthetic ultrasound image and the real ultrasound image into the layers respectively;
initializing deformation parameters at the lowest layer, superimposing them on the real ultrasound image, and calculating the similarity measure and the deformation field of the synthetic and real ultrasound images at the scale of that layer;
superimposing the calculated deformation field onto the next layer up the pyramid as that layer's deformation parameters, and continuing the similarity measurement and deformation optimization on that layer's synthetic and real ultrasound images until the last layer of the pyramid, obtaining the registration parameters;
and applying the registration parameters obtained by registering the synthetic and real ultrasound images directly to the three-dimensional magnetic resonance image, which, once transformed by the registration parameters, completes registration fusion with the real ultrasound image.
Further, the calculation of the "similarity measure" comprises: characterizing the structure of the same layer of the synthetic ultrasound image and the real ultrasound image with the modality independent neighbourhood descriptor (MIND), and then using the sum of squared differences (SSD) of the characterization results as the registration measure to obtain the similarity measure of the two images.
Further, in the "calculation of the deformation field", the optimized deformation function of each layer is obtained by a Gauss-Newton gradient descent method.
Further, the MIND calculation is defined as:

$$\mathrm{MIND}(I, x, r) = \frac{1}{n}\exp\!\left( -\frac{D_{p}(I, x, x+r)}{V(I, x)} \right), \quad r \in R$$

wherein n is a normalization constant, R is the search region, V(I, x) is a local variance estimate, and the distance $D_{p}(I, x_{1}, x_{2}) = \sum_{p\in P}(I(x_{1}+p) - I(x_{2}+p))^{2}$ computes the SSD between the patches centred at the two voxels $x_{1}$ and $x_{2}$; the similarity measure is

$$S(x) = \frac{1}{|R|}\sum_{r\in R}\big( \mathrm{MIND}(I_{F}, x, r) - \mathrm{MIND}(I_{M}, x, r) \big)^{2}$$

and the optimization function of the optimal deformation field is

$$\hat{u} = \arg\min_{u}\sum_{x}\big\| \mathrm{MIND}(I_{F}, x) - \mathrm{MIND}(I_{M}, x + u(x)) \big\|^{2}$$

wherein $u = (u, v, w)^{T}$ represents the deformation field, $I_{F}$ the fixed (synthetic ultrasound) image, and $I_{M}$ the floating (real ultrasound) image.
Further, the pyramid layer number is 3, and the sampling ratio is 2 × 2 × 2.
The beneficial effects of the invention are as follows. The invention provides a multi-modal image registration method based on a synthesized ultrasound image: given a magnetic resonance image and the real ultrasound image of the corresponding location, a generative adversarial network of a generator and a discriminator produces a simulated synthetic ultrasound image; registering the synthetic ultrasound image to the real ultrasound image yields registration parameters, which are applied to the magnetic resonance image to complete the final registration fusion of the magnetic resonance and ultrasound images. Compared with the prior art, the method has the following advantages:
1. an ultrasound image can be synthesized from the magnetic resonance image in real time, meeting the demands of real-time image-guided surgery;
2. the synthesized ultrasound image is closer to the real ultrasound image, with higher image quality and better preservation of important detail;
3. when the magnetic resonance image contains a tumor, the simulated ultrasound image can still be synthesized accurately;
4. the final registration requires no complex algorithm; a conventional, simple registration algorithm achieves a good result.
Drawings
Fig. 1 is a flow chart of a multi-modality image registration method based on a synthesized ultrasound image according to an embodiment of the present invention.
FIG. 2 is a flow chart of synthesizing an ultrasound image according to an embodiment of the present invention.
Figure 3 is a flow chart of the registration of a magnetic resonance image and a real ultrasound image according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In order to make the description of the present disclosure more complete and thorough, the following illustrative description is given with respect to the embodiments and examples of the invention; it is not the only form in which the embodiments of the invention may be practiced or utilized. The description is intended to cover the features of the various embodiments as well as the method steps and sequences for constructing and operating them; however, other embodiments may achieve the same or equivalent functions and step sequences.
Referring to fig. 1, fig. 1 is a flowchart of the multi-modal image registration method based on synthesizing an ultrasound image from a magnetic resonance image according to an embodiment of the invention; the invention is explained in detail with reference to fig. 1.
In step S1, a plurality of three-dimensional magnetic resonance images of the same part of the same patient and the real ultrasound image corresponding to each three-dimensional magnetic resonance image are acquired as training samples.
In step S2, a generative adversarial network comprising a generator and a discriminator is constructed.
In step S3, the acquired three-dimensional magnetic resonance images are input into the generator to obtain an output result, the generator's output and the corresponding real ultrasound images are input into the discriminator to train the generative adversarial network, and the generator of the trained network is used to generate the synthetic ultrasound image corresponding to a three-dimensional magnetic resonance image.
In step S4, the synthetic ultrasound image and the real ultrasound image corresponding to the three-dimensional magnetic resonance image are registered and fused to obtain registration parameters, and the three-dimensional magnetic resonance image is registered to the real ultrasound image according to those parameters.
In this method, given a magnetic resonance image and the real ultrasound image of the corresponding location, a generative adversarial network of a generator and a discriminator produces a simulated synthetic ultrasound image; registering the synthetic ultrasound image to the real ultrasound image yields registration parameters, which are applied to the magnetic resonance image to complete the final registration fusion of the magnetic resonance and ultrasound images.
Referring to fig. 2, fig. 2 is a flow chart illustrating the generation of a synthetic ultrasound image according to an embodiment of the present invention; the generation process of the synthetic ultrasound image is explained in detail with reference to fig. 2.
The generative adversarial network comprises a generator and a discriminator. The generator produces from the magnetic resonance image a synthetic image approximating the target ultrasound image; this synthetic image is fed to the discriminator, which should find it as hard as possible to tell apart from the target image. The discriminator, conversely, tries as hard as possible to distinguish the target image from the synthetic image and feeds its judgement back to the generator, which uses it to update its parameters. The discriminator comprises a global discriminator, which keeps the synthetic and target images as similar as possible in global structure, and a local discriminator, which keeps them as similar as possible in local detail; the local discriminator comprises a first local discriminator and a second local discriminator.
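The patent does not disclose the network layers, so the following PyTorch sketch is an illustrative assumption only: a small 3D generator, a discriminator class instantiated three times (global, first local, second local), and a helper that crops random patches so the local discriminators judge local detail.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 3D MR volume (N, 1, D, H, W) to a synthetic US volume."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, 1, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores a whole volume (global) or a cropped patch (local) as real/fake."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(ch, 2 * ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(2 * ch, 1, 3, padding=1))   # patch-wise real/fake score map
    def forward(self, x):
        return self.net(x)

G = Generator()
D_global, D_local1, D_local2 = Discriminator(), Discriminator(), Discriminator()

def random_patch(vol, size=32):
    """Crop a random cube so a local discriminator sees only local detail."""
    _, _, d, h, w = vol.shape
    i, j, k = (torch.randint(0, s - size + 1, (1,)).item() for s in (d, h, w))
    return vol[:, :, i:i + size, j:j + size, k:k + size]
```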
Deep network training is a process of iteratively optimizing a loss function. To keep the synthetic image as close as possible to the real one, the similarity between the synthetic ultrasound image and the real ultrasound image is computed as an L1 loss; the L1 loss enforces low-frequency similarity while stabilizing the training of the whole network. The loss of the synthetic image produced by the generator is

$$L_{L1}(G) = \mathbb{E}_{I_{MR}\sim p(I_{MR}),\, I_{US}\sim p(I_{US})}\big[\, \| I_{US} - G(I_{MR}) \|_{1} \,\big]$$

wherein $I_{MR}$ represents a magnetic resonance image, $G(I_{MR})$ the synthetic ultrasound image, $I_{US}$ the real ultrasound image, $p(I_{US})$ the real ultrasound data distribution, and $p(I_{MR})$ the magnetic resonance data distribution. The adversarial network is further trained through deep learning of the generator and the discriminator; to keep network training stable, a least-squares loss is used as the adversarial loss, which effectively avoids vanishing gradients during training and makes the network converge more easily and more stably. The least-squares adversarial losses are defined as:
$$\min_{G} V_{LSGAN}(G) = \frac{1}{2}\,\mathbb{E}_{I_{MR}\sim p(I_{MR})}\big[ (D(G(I_{MR})) - c)^{2} \big]$$

$$\min_{D} V_{LSGAN}(D) = \frac{1}{2}\,\mathbb{E}_{I_{US}\sim p(I_{US})}\big[ (D(I_{US}) - b)^{2} \big] + \frac{1}{2}\,\mathbb{E}_{I_{MR}\sim p(I_{MR})}\big[ (D(G(I_{MR})) - a)^{2} \big]$$

wherein $\min V_{LSGAN}(G)$ is the generator's least-squares loss, $\min V_{LSGAN}(D)$ is the discriminator's least-squares loss, a and b are the labels of the generated data and the real data respectively, and c is the label the generator wants the discriminator to assign to the generated data. During training we follow the 0-1 coding rule and set c = b = 1 and a = 0, so the losses become:

$$\min_{G} V_{LSGAN}(G) = \frac{1}{2}\,\mathbb{E}_{I_{MR}\sim p(I_{MR})}\big[ (D(G(I_{MR})) - 1)^{2} \big]$$

$$\min_{D} V_{LSGAN}(D) = \frac{1}{2}\,\mathbb{E}_{I_{US}\sim p(I_{US})}\big[ (D(I_{US}) - 1)^{2} \big] + \frac{1}{2}\,\mathbb{E}_{I_{MR}\sim p(I_{MR})}\big[ D(G(I_{MR}))^{2} \big]$$
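Written as PyTorch code, the substituted least-squares and L1 terms might look as follows; this is a sketch in which per-tensor mean reductions are an assumption, and the patent simply sums the two generator terms, i.e. lambda_l1 = 1.

```python
import torch

def d_loss(d_real, d_fake):
    """Per-discriminator LSGAN loss: (D(I_US) - 1)^2 / 2 + D(G(I_MR))^2 / 2."""
    return 0.5 * ((d_real - 1) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def g_loss(d_fake, synth_us, real_us, lambda_l1=1.0):
    """Generator loss: (D(G(I_MR)) - 1)^2 / 2 plus the L1 term |I_US - G(I_MR)|."""
    lsgan = 0.5 * ((d_fake - 1) ** 2).mean()
    l1 = torch.abs(real_us - synth_us).mean()
    return lsgan + lambda_l1 * l1
```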
The generator's total loss is the sum of its least-squares loss in the generative adversarial network and the L1 loss of the synthetic ultrasound image with respect to the real one; the discriminator's total loss is the sum of the least-squares losses of the global discriminator, the first local discriminator, and the second local discriminator in the generative adversarial network. The total loss functions of the network are therefore:

$$\min L(G) = \frac{1}{2}\,\mathbb{E}_{I_{MR}\sim p(I_{MR})}\big[ (D(G(I_{MR})) - 1)^{2} \big] + L_{L1}(G)$$

$$\min L(D) = \sum_{k\in\{glob,\,loc1,\,loc2\}} \left( \frac{1}{2}\,\mathbb{E}_{I_{US}\sim p(I_{US})}\big[ (D_{k}(I_{US}) - 1)^{2} \big] + \frac{1}{2}\,\mathbb{E}_{I_{MR}\sim p(I_{MR})}\big[ D_{k}(G(I_{MR}))^{2} \big] \right)$$

where $\min L(G)$ is the total loss function of the generator and $\min L(D)$ is the total loss function of the discriminators in the network.
The training process of the generative adversarial network is therefore: obtain the L1 loss function from the generator's output and the corresponding real ultrasound image; based on the least-squares losses of the generator and the discriminator, obtain the generator's total loss from the discriminator's output and the L1 loss with respect to the real ultrasound image, and the discriminator's total loss from the discriminator's output; and update the parameters in the discriminator's and generator's network structures according to their respective total losses until the generative adversarial network converges.
When training the network, different learning rates may be set for the generator and the discriminator in order to balance their training speeds.
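One training iteration, reusing the sketches above, might look as follows; the learning-rate values (1e-4 for G, 4e-4 for D) and the choice to drive the generator through the global discriminator only are assumptions for illustration, not taken from the patent.

```python
import torch

# Separate learning rates balance the training speeds of G and D
# (the concrete values are illustrative assumptions).
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(
    list(D_global.parameters()) + list(D_local1.parameters()) + list(D_local2.parameters()),
    lr=4e-4, betas=(0.5, 0.999))

def train_step(mr, us):
    synth = G(mr)
    # Update the global and the two local discriminators on real vs. synthetic data.
    loss_d = (d_loss(D_global(us), D_global(synth.detach()))
              + d_loss(D_local1(random_patch(us)), D_local1(random_patch(synth.detach())))
              + d_loss(D_local2(random_patch(us)), D_local2(random_patch(synth.detach()))))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Update the generator: least-squares term plus the L1 term.
    loss_g = g_loss(D_global(synth), synth, us)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```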
Once the generative adversarial network has been trained by the deep-learning procedure above, inputting a three-dimensional magnetic resonance image to its generator produces the corresponding synthetic ultrasound image, a high-fidelity simulation of the real ultrasound image corresponding to that magnetic resonance image.
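Inference with the trained generator sketched earlier is then a single forward pass (the 64-cubed volume size is illustrative only):

```python
import torch

with torch.no_grad():
    mr_volume = torch.randn(1, 1, 64, 64, 64)   # batch, channel, depth, height, width
    synthetic_us = G(mr_volume)                  # synthetic US corresponding to the MR
print(synthetic_us.shape)                        # torch.Size([1, 1, 64, 64, 64])
```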
Referring to fig. 3, fig. 3 is a flowchart illustrating registration of a magnetic resonance image and a real ultrasound image according to an embodiment of the present invention, and the following explains fig. 3 in detail.
Using the pyramid algorithm, the fixed image (the synthetic ultrasound image) and the floating image (the real ultrasound image) are each sampled at a ratio of 2 × 2 × 2 into a pyramid of 3 layers, i.e. 3 different scales, and the similarity measure and deformation field are computed for the fixed and floating images of each layer.
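As a small self-contained illustration (scipy is an assumed choice, not named in the patent), the 3-level, 2 × 2 × 2 pyramids can be built as follows, with random volumes standing in for the two images:

```python
import numpy as np
from scipy.ndimage import zoom

def build_pyramid(volume, levels=3):
    """Return [full, half, quarter] resolution copies, finest first."""
    return [zoom(volume, 1.0 / 2 ** k, order=1) for k in range(levels)]

fixed_pyramid = build_pyramid(np.random.rand(64, 64, 64))     # synthetic US stand-in
floating_pyramid = build_pyramid(np.random.rand(64, 64, 64))  # real US stand-in
print([p.shape for p in fixed_pyramid])   # [(64, 64, 64), (32, 32, 32), (16, 16, 16)]
```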
Starting from the lowest layer, the deformation field parameters are initialized and the deformation field is superimposed on the floating image.
The modality independent neighbourhood descriptor (MIND) is computed for the fixed image and the floating image of the current layer. The MIND calculation is defined as:

$$\mathrm{MIND}(I, x, r) = \frac{1}{n}\exp\!\left( -\frac{D_{p}(I, x, x+r)}{V(I, x)} \right), \quad r \in R$$

wherein n is a normalization constant, R is the search region, V(I, x) is a local variance estimate, and the distance $D_{p}(I, x_{1}, x_{2}) = \sum_{p\in P}(I(x_{1}+p) - I(x_{2}+p))^{2}$ is the sum of squared differences (SSD) of all voxels within the patches centred at the two voxels $x_{1}$ and $x_{2}$.
The difference between the MIND of the fixed image and the MIND of the floating image is computed with the SSD, giving the similarity measure of the two images:

$$S(x) = \frac{1}{|R|}\sum_{r\in R}\big( \mathrm{MIND}(I_{F}, x, r) - \mathrm{MIND}(I_{M}, x, r) \big)^{2}$$
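The MIND descriptor and the MIND-based SSD similarity can be sketched in NumPy as follows; the six-voxel search region R, the 3-voxel patch size, and the use of the mean patch distance as the variance estimate V(I, x) are illustrative assumptions where the patent leaves details open.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Six-neighbourhood search region R (an assumed choice).
OFFSETS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def patch_ssd(img, offset, patch=3):
    """D_p(I, x, x+r): SSD between the patches centred at x and x+r."""
    shifted = np.roll(img, shift=offset, axis=(0, 1, 2))
    return uniform_filter((img - shifted) ** 2, size=patch)

def mind(img):
    """MIND(I, x, r) for every voxel x and every offset r in R."""
    dists = np.stack([patch_ssd(img, o) for o in OFFSETS])
    variance = dists.mean(axis=0) + 1e-8               # V(I, x): mean patch distance
    descriptor = np.exp(-dists / variance)
    return descriptor / descriptor.max(axis=0, keepdims=True)  # the 1/n normalization

def mind_ssd(fixed, floating):
    """Similarity measure: SSD between the MIND descriptors of the two images."""
    return ((mind(fixed) - mind(floating)) ** 2).mean()
```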
The energy function is optimized by a Gauss-Newton gradient descent method to obtain the optimal deformation field for the layer; the optimization function is:

$$\hat{u} = \arg\min_{u}\sum_{x}\big\| \mathrm{MIND}(I_{F}, x) - \mathrm{MIND}(I_{M}, x + u(x)) \big\|^{2}$$

wherein $u = (u, v, w)^{T}$ represents the deformation field.
After the layer's optimal deformation field is obtained, it is superimposed onto the next layer up the pyramid as that layer's deformation parameters, and the computation continues layer by layer until the optimal deformation field of the topmost layer is obtained; this is the final registration parameter set. Applying the registration parameters obtained from registering the synthetic and real ultrasound images directly to the magnetic resonance image completes the final registration fusion of the magnetic resonance and ultrasound images.
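The coarse-to-fine loop can be sketched as below, reusing mind_ssd from above. As deliberate simplifications (assumptions, not the patent's method), a global translation stands in for the dense deformation field u = (u, v, w)^T and a finite-difference gradient step stands in for the Gauss-Newton update; the 3-level structure and the carrying of parameters up the pyramid follow the description above.

```python
import numpy as np
from scipy.ndimage import shift, zoom

def register_pyramid(fixed, floating, levels=3, iters=20, step=0.5, eps=0.5):
    t = np.zeros(3)                                  # translation parameters
    for level in reversed(range(levels)):            # coarsest level first
        f = zoom(fixed, 1.0 / 2 ** level, order=1)
        m = zoom(floating, 1.0 / 2 ** level, order=1)
        if level != levels - 1:
            t *= 2.0                                 # carry parameters up the pyramid
        for _ in range(iters):                       # descend on the MIND-SSD energy
            base = mind_ssd(f, shift(m, t, order=1))
            grad = np.zeros(3)
            for ax in range(3):                      # finite-difference gradient
                dt = np.zeros(3); dt[ax] = eps
                grad[ax] = (mind_ssd(f, shift(m, t + dt, order=1)) - base) / eps
            t = t - step * grad
    return t                                         # parameters at full resolution

# The recovered parameters are then applied directly to the MR volume:
# mr_registered = shift(mr_volume, recovered_t, order=1)
```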
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. A multi-modal image registration method based on a synthesized ultrasound image, the multi-modal image registration method comprising:
S1, acquiring a plurality of three-dimensional magnetic resonance images of the same part and the real ultrasound image corresponding to each three-dimensional magnetic resonance image as training samples;
S2, constructing a generative adversarial network, wherein the generative adversarial network comprises a generator and a discriminator;
S3, inputting the acquired three-dimensional magnetic resonance image into the generator to obtain an output result, inputting the output result of the generator and the corresponding real ultrasound image into the discriminator to train the generative adversarial network, and generating the synthetic ultrasound image corresponding to the three-dimensional magnetic resonance image with the generator of the trained generative adversarial network;
S4, registering and fusing the synthetic ultrasound image with the real ultrasound image corresponding to the three-dimensional magnetic resonance image to obtain registration parameters, and registering the three-dimensional magnetic resonance image and the real ultrasound image according to the registration parameters;
the step S4 comprising:
setting a pyramid into multiple layers by a pyramid algorithm, each layer corresponding to one scale, and sampling the synthetic ultrasound image and the real ultrasound image into the layers respectively;
initializing deformation parameters at the lowest layer, superimposing them on the real ultrasound image, and calculating the similarity measure and the deformation field of the synthetic and real ultrasound images at the scale of that layer;
superimposing the calculated deformation field onto the next layer up the pyramid as that layer's deformation parameters, and continuing the similarity measurement and deformation optimization on that layer's synthetic and real ultrasound images until the last layer of the pyramid, obtaining the registration parameters;
and applying the registration parameters obtained by registering the synthetic and real ultrasound images directly to the three-dimensional magnetic resonance image, the three-dimensional magnetic resonance image completing registration fusion with the real ultrasound image after being transformed by the registration parameters, wherein the registration parameters comprise the optimal deformation field of each pyramid layer.
2. The multi-modal image registration method based on a synthesized ultrasound image according to claim 1, wherein the step of "training the generative adversarial network" in step S3 comprises:
obtaining an L1 loss function of the generator according to the output result of the generator and the corresponding real ultrasound image;
based on the least-squares loss functions of the generator and the discriminator in the generative adversarial network, obtaining the total loss function of the generator according to the output result of the discriminator and the L1 loss function with respect to the real ultrasound image, and obtaining the total loss function of the discriminator according to the output result of the discriminator;
and updating parameters in the network structures of the discriminator and the generator according to their respective total loss functions until the generative adversarial network converges.
3. The multi-modal image registration method based on a synthesized ultrasound image according to claim 2, wherein the step of "training the generative adversarial network" in step S3 further comprises:
setting different learning rates for the generator and the discriminator.
4. The multi-modal image registration method based on a synthesized ultrasound image according to claim 3, wherein the discriminator comprises a local discriminator and a global discriminator, the local discriminator comprising a first local discriminator and a second local discriminator.
5. The multi-modal image registration method based on a synthesized ultrasound image according to claim 4, wherein the L1 loss function is defined as

$$L_{L1}(G) = \mathbb{E}_{I_{MR}\sim p(I_{MR}),\, I_{US}\sim p(I_{US})}\big[\, \| I_{US} - G(I_{MR}) \|_{1} \,\big]$$

wherein $I_{MR}$ represents a magnetic resonance image, $G(I_{MR})$ represents the synthetic ultrasound image, $I_{US}$ is the input real ultrasound image, $p(I_{US})$ is the real ultrasound data distribution, and $p(I_{MR})$ is the magnetic resonance data distribution;

the least-squares loss function of the generator is

$$\min_{G} V_{LSGAN}(G) = \frac{1}{2}\,\mathbb{E}_{I_{MR}\sim p(I_{MR})}\big[ (D(G(I_{MR})) - c)^{2} \big]$$

the least-squares loss function of the discriminator is

$$\min_{D} V_{LSGAN}(D) = \frac{1}{2}\,\mathbb{E}_{I_{US}\sim p(I_{US})}\big[ (D(I_{US}) - b)^{2} \big] + \frac{1}{2}\,\mathbb{E}_{I_{MR}\sim p(I_{MR})}\big[ (D(G(I_{MR})) - a)^{2} \big]$$

wherein a and b are the labels of the generated data and the real data respectively, and c is the label the generator wants the discriminator to assign to the generated data;

setting a = 0 and b = c = 1 and substituting into the above formulas, the overall loss function of the generator is obtained as:

$$\min L(G) = \frac{1}{2}\,\mathbb{E}_{I_{MR}\sim p(I_{MR})}\big[ (D(G(I_{MR})) - 1)^{2} \big] + L_{L1}(G)$$

and the overall loss function of the discriminator is

$$\min L(D) = \frac{1}{2}\,\mathbb{E}_{I_{US}\sim p(I_{US})}\big[ (D(I_{US}) - 1)^{2} \big] + \frac{1}{2}\,\mathbb{E}_{I_{MR}\sim p(I_{MR})}\big[ D(G(I_{MR}))^{2} \big]$$
6. The multi-modal image registration method based on a synthesized ultrasound image according to claim 1, wherein the step of calculating the "similarity measure" comprises: characterizing the structure of the same layer of the synthetic ultrasound image and the real ultrasound image with the modality independent neighbourhood descriptor (MIND), and then using the sum of squared differences (SSD) of the characterization results as the registration measure to obtain the similarity measure of the two images.
7. The multi-modal image registration method based on a synthesized ultrasound image according to claim 6, wherein in the step of "calculation of the deformation field", the optimized deformation function of each layer is obtained by a Gauss-Newton gradient descent method.
8. The multi-modal image registration method based on a synthesized ultrasound image according to claim 7, wherein the MIND calculation is defined as:

$$\mathrm{MIND}(I, x, r) = \frac{1}{n}\exp\!\left( -\frac{D_{p}(I, x, x+r)}{V(I, x)} \right), \quad r \in R$$

wherein n is a normalization constant, R is the search region, V(I, x) is a local variance estimate, and the distance $D_{p}(I, x_{1}, x_{2}) = \sum_{p\in P}(I(x_{1}+p) - I(x_{2}+p))^{2}$ is the SSD between the patches centred at the two voxels $x_{1}$ and $x_{2}$; the similarity measure is

$$S(x) = \frac{1}{|R|}\sum_{r\in R}\big( \mathrm{MIND}(I_{F}, x, r) - \mathrm{MIND}(I_{M}, x, r) \big)^{2}$$

and the optimization function of the optimal deformation field is

$$\hat{u} = \arg\min_{u}\sum_{x}\big\| \mathrm{MIND}(I_{F}, x) - \mathrm{MIND}(I_{M}, x + u(x)) \big\|^{2}$$

wherein $u = (u, v, w)^{T}$ represents the deformation field.
9. The multi-modal image registration method based on a synthesized ultrasound image according to claim 1, wherein the number of pyramid layers is 3 and the sampling ratio is 2 × 2 × 2.
CN201910335812.4A 2019-04-24 2019-04-24 Multi-modal image registration method based on synthetic ultrasound image Active CN110163897B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910335812.4A CN110163897B (en) 2019-04-24 2019-04-24 Multi-modal image registration method based on synthetic ultrasound image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910335812.4A CN110163897B (en) 2019-04-24 2019-04-24 Multi-modal image registration method based on synthetic ultrasound image

Publications (2)

Publication Number Publication Date
CN110163897A CN110163897A (en) 2019-08-23
CN110163897B true CN110163897B (en) 2021-06-29

Family

ID=67640060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910335812.4A Active CN110163897B (en) 2019-04-24 2019-04-24 Multi-modal image registration method based on synthetic ultrasound image

Country Status (1)

Country Link
CN (1) CN110163897B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111047629B (en) * 2019-11-04 2022-04-26 中国科学院深圳先进技术研究院 Multi-modal image registration method and device, electronic equipment and storage medium
WO2021087659A1 (en) * 2019-11-04 2021-05-14 中国科学院深圳先进技术研究院 Multi-modal image registration method and device, electronic apparatus, and storage medium
CN110866888B (en) * 2019-11-14 2022-04-26 四川大学 Multi-modal MRI (magnetic resonance imaging) synthesis method based on potential information representation GAN (generic antigen)
CN111091589B (en) * 2019-11-25 2023-11-17 北京理工大学 Ultrasonic and nuclear magnetic image registration method and device based on multi-scale supervised learning
CN110838140A (en) * 2019-11-27 2020-02-25 艾瑞迈迪科技石家庄有限公司 Ultrasound and nuclear magnetic image registration fusion method and device based on hybrid supervised learning
CN111260741B (en) * 2020-02-07 2022-05-10 北京理工大学 Three-dimensional ultrasonic simulation method and device by utilizing generated countermeasure network
CN111932443B (en) * 2020-07-16 2024-04-02 江苏师范大学 Method for improving registration accuracy of ultrasound and magnetic resonance by combining multiscale expression with contrast agent
CN114511665A (en) * 2020-10-28 2022-05-17 北京理工大学 Virtual-real fusion rendering method and device based on monocular camera reconstruction
CN112801863A (en) * 2021-02-25 2021-05-14 浙江工业大学 Unsupervised multi-modal medical image registration method based on image conversion and domain generalization
CN113096169B (en) * 2021-03-31 2022-05-20 华中科技大学 Non-rigid multimode medical image registration model establishing method and application thereof
CN113012204B (en) * 2021-04-09 2024-01-16 福建自贸试验区厦门片区Manteia数据科技有限公司 Registration method, registration device, storage medium and processor for multi-mode image
CN113160221B (en) * 2021-05-14 2022-06-28 深圳市奥昇医疗科技有限责任公司 Image processing method, image processing device, computer equipment and storage medium
CN113763442B (en) * 2021-09-07 2023-06-13 南昌航空大学 Deformable medical image registration method and system
CN116563189B (en) * 2023-07-06 2023-10-13 长沙微妙医疗科技有限公司 Medical image cross-contrast synthesis method and system based on deep learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103260522A * 2010-12-16 2013-08-21 Koninklijke Philips Electronics N.V. Apparatus for CT-MRI and nuclear hybrid imaging, cross calibration, and performance assessment
CN102512246B * 2011-12-22 2014-03-26 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Surgery guiding system and method
US20170337682A1 * 2016-05-18 2017-11-23 Siemens Healthcare Gmbh Method and System for Image Registration Using an Intelligent Artificial Agent
CN109377520B * 2018-08-27 2021-05-04 Xidian University Heart image registration system and method based on semi-supervised cyclic GAN
CN109523584B * 2018-10-26 2021-04-20 Shanghai United Imaging Healthcare Co., Ltd. Image processing method and device, multi-modality imaging system, storage medium and equipment

Also Published As

Publication number Publication date
CN110163897A (en) 2019-08-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Yang Feng; Wu Chan; Dong Jiahui
Inventor before: Yang Feng; Wu Chan; Dong Jiahui
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20211009
Address after: 100081 0810 Haidian Science and technology building, Zhongguancun South Street, Haidian District, Beijing
Patentee after: ARIEMEDI MEDICAL SCIENCE (BEIJING) Co.,Ltd.
Address before: 050000 3rd floor, unit 1, building 7, Runjiang international headquarters, 319 Changjiang Avenue, high tech Zone, Shijiazhuang, Hebei Province
Patentee before: AIRUI MAIDI TECHNOLOGY SHIJIAZHUANG Co.,Ltd.