WO2022257344A1 - Image registration and fusion method and apparatus, model training method, and electronic device - Google Patents

Image registration and fusion method and apparatus, model training method, and electronic device

Info

Publication number
WO2022257344A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional
mri
point cloud
medical
Prior art date
Application number
PCT/CN2021/128241
Other languages
English (en)
French (fr)
Inventor
刘星宇
张逸凌
Original Assignee
刘星宇
北京长木谷医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 刘星宇 and 北京长木谷医疗科技有限公司
Publication of WO2022257344A1


Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models
            • G06N 3/02 Neural networks
              • G06N 3/04 Architecture, e.g. interconnection topology
              • G06N 3/08 Learning methods
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 5/00 Image enhancement or restoration
            • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
          • G06T 7/00 Image analysis
            • G06T 7/10 Segmentation; Edge detection
              • G06T 7/11 Region-based segmentation
            • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
              • G06T 7/33 Image registration using feature-based methods
          • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10028 Range image; Depth image; 3D point clouds
              • G06T 2207/10072 Tomographic images
                • G06T 2207/10081 Computed x-ray tomography [CT]
                • G06T 2207/10088 Magnetic resonance imaging [MRI]
                • G06T 2207/10108 Single photon emission computed tomography [SPECT]
              • G06T 2207/10132 Ultrasound image
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20081 Training; Learning
              • G06T 2207/20212 Image combination
                • G06T 2207/20221 Image fusion; Image merging
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30004 Biomedical image processing
                • G06T 2207/30008 Bone

Definitions

  • The present disclosure relates to the technical field of medical image processing, and in particular to an image registration and fusion method and apparatus, a model training method, and an electronic device.
  • Existing multimodal image registration techniques fall mainly into two categories. The first is the iterative closest point method, which computes the transformation matrix between images. It places high demands on the initial alignment of the images, so the solution easily falls into a local optimum; it also requires a coarse registration step before solving, adding complexity; and it applies only to rigid registration, so when the patient's multimodal images were acquired at different times or with different patient poses, the registration and fusion results contain large errors.
  • The second is the class of methods that solve an optimization problem over a distance function between the images to be registered, so that the distance function is minimized after the images are deformed. Although this approach can handle non-rigid registration to some extent, it relies on a distance function between the images, which imposes a high similarity requirement on the images to be registered; when the patient's images of different modalities differ substantially, its registration accuracy is low. Moreover, for non-rigid registration the large number of parameters to solve makes the method computationally complex, so the time cost of the overall registration process is excessive.
  • The present disclosure provides an image registration and fusion method, including: acquiring two-dimensional medical images of at least two modalities of a patient; inputting the two-dimensional medical images of the at least two modalities into corresponding pre-trained image segmentation network models, so as to obtain, for each modality, an output two-dimensional medical image of the body position region; and performing three-dimensional reconstruction on the two-dimensional medical images of the body position regions of each modality, followed by point cloud registration and fusion, so as to obtain a multimodal fused three-dimensional medical image.
  • The present disclosure also provides a model training method, which includes a training process for a CT image segmentation network model and a training process for an MRI image segmentation network model. The training process of the CT image segmentation network model includes: acquiring two-dimensional CT medical image data sets of multiple patients, each containing a plurality of two-dimensional CT medical images; marking the femoral position region in each two-dimensional CT medical image by at least one of automatic labeling and manual labeling; dividing the labeled two-dimensional CT medical images into a CT training data set and a CT test data set according to a preset ratio; and training a CT image segmentation network model based on the CT training data set in combination with a neural network algorithm and deep learning. The training process of the MRI image segmentation network model may include: acquiring two-dimensional MRI medical image data sets of multiple patients, each containing a plurality of two-dimensional MRI medical images; marking the femoral position region in each two-dimensional MRI medical image by at least one of automatic labeling and manual labeling; dividing the labeled two-dimensional MRI medical images into an MRI training data set and an MRI test data set according to a preset ratio; and training an MRI image segmentation network model based on the MRI training data set in combination with a neural network algorithm and deep learning.
  • The present disclosure also provides an image registration and fusion apparatus, including: an acquisition module configured to acquire two-dimensional medical images of at least two modalities of a patient; a two-dimensional image processing module configured to input the two-dimensional medical images of the at least two modalities into the pre-trained image segmentation network models, so as to obtain, for each modality, an output two-dimensional medical image of the body position region; and a three-dimensional reconstruction and fusion module configured to perform three-dimensional reconstruction on the two-dimensional medical images of the body position regions of each modality and then perform point cloud registration and fusion, so as to obtain a multimodal fused three-dimensional medical image.
  • The present disclosure also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, all or part of the steps of any of the image registration and fusion methods or model training methods above are implemented.
  • The present disclosure also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements all or part of the steps of any of the image registration and fusion methods or model training methods above.
  • The present disclosure provides an image registration and fusion method and apparatus, a model training method, and an electronic device. The method performs image segmentation on two-dimensional medical images of different modalities of the same part of the same patient, then performs three-dimensional reconstruction, and finally carries out accurate point cloud registration and fusion of the reconstructed three-dimensional medical images of the different modalities, obtaining a multimodal fused three-dimensional medical image. The method offers high registration accuracy and low time cost, can handle relatively complex multimodal fusion cases, and also applies to non-rigid registration; the registration results are accurate and can provide medical staff with a reliable treatment reference.
  • Fig. 1 is the first schematic flowchart of the image registration and fusion method provided by the present disclosure;
  • Fig. 2 is the second schematic flowchart of the image registration and fusion method provided by the present disclosure;
  • Fig. 3A is a two-dimensional CT medical image of the femoral position region provided by an embodiment of the present disclosure;
  • Fig. 3B is a two-dimensional MRI medical image of the femoral position region provided by an embodiment of the present disclosure;
  • Fig. 3C is a two-dimensional MRI medical image of the femoral necrosis position region provided by an embodiment of the present disclosure;
  • Fig. 4 is a three-dimensional CT medical image of the femoral position region after image segmentation and three-dimensional reconstruction by the image registration and fusion method provided by the present disclosure;
  • Fig. 5 is a three-dimensional MRI medical image of the femoral position region after image segmentation and three-dimensional reconstruction by the image registration and fusion method provided by the present disclosure;
  • Fig. 6 is a three-dimensional medical image fusing the CT and MRI modalities after registration and fusion by the image registration and fusion method provided by the present disclosure;
  • Fig. 7 is a three-dimensional MRI medical image of the femoral necrosis position region after image segmentation and three-dimensional reconstruction by the image registration and fusion method provided by the present disclosure;
  • Fig. 8 is a three-dimensional medical image fusing the CT modality of the femoral position region with the MRI modalities of the femoral position region and the femoral necrosis position region after registration and fusion by the image registration and fusion method provided by the present disclosure;
  • Fig. 9 is a schematic flowchart of the training process of the CT image segmentation network model in the method provided by the present disclosure;
  • Fig. 10 is a schematic flowchart of the training process of the MRI image segmentation network model in the method provided by the present disclosure;
  • Fig. 11 is a deep learning training network structure diagram of the training processes shown in Fig. 9 and Fig. 10;
  • Fig. 12 is the first schematic structural diagram of the image registration and fusion apparatus provided by the present disclosure;
  • Fig. 13 is the second schematic structural diagram of the image registration and fusion apparatus provided by the present disclosure;
  • Fig. 14 is a schematic structural diagram of the electronic device provided by the present disclosure.
  • 1110 acquisition module
  • 1120 two-dimensional image processing module
  • 1130 three-dimensional reconstruction and fusion module
  • 1131 three-dimensional image reconstruction module
  • 1132 point set determination module
  • 1133 registration module
  • 1310 processor
  • 1320 communication interface
  • 1330 memory
  • 1340 communication bus.
  • CT medical images have high spatial resolution and can clearly localize rigid bone, but their imaging contrast for soft tissue is low and they cannot clearly display the lesion itself; MRI medical images provide high-contrast imaging of anatomical structures such as soft tissue, blood vessels, and organs, but their spatial resolution is lower than that of CT medical images and they lack rigid bone structure as a positional reference for lesions. Therefore, in clinical applications, medical images of a single modality often cannot provide comprehensive medical reference information for medical staff.
  • The present disclosure combines an artificial intelligence image segmentation algorithm with multimodal medical image fusion technology, integrates the advantages of multiple medical imaging techniques, and extracts the complementary information of medical images of different modalities, generating after fusion a composite image that contains more effective medical reference information than any single-modality image, to help medical staff diagnose, stage, and treat many types of conditions such as osteonecrosis of the femoral head.
  • Fig. 1 is the first schematic flowchart of the image registration and fusion method provided by the present disclosure. As shown in Fig. 1, the method includes:
  • S110. Acquire two-dimensional medical images of at least two modalities of a patient. Two or more modalities of two-dimensional medical images are acquired for the same body part of the same patient; for example, for a patient with hip joint disease, two-dimensional medical images of the patient's hip joint femur in multiple modalities, such as a two-dimensional CT medical image and a two-dimensional MRI medical image, are acquired.
  • S120. Input the two-dimensional medical images of the at least two modalities into the corresponding pre-trained image segmentation network models, so as to obtain, for each modality, an output two-dimensional medical image of the body position region.
  • Referring to Figs. 3A-3C: Fig. 3A is a two-dimensional CT medical image of the femoral position region; Fig. 3B is a two-dimensional MRI medical image of the femoral position region; Fig. 3C is a two-dimensional MRI medical image of the femoral necrosis position region.
  • The two-dimensional medical images of the multiple modalities acquired in step S110 are input one by one into the corresponding pre-trained image segmentation network models; for example, the patient's two-dimensional CT medical image is input into the corresponding CT image segmentation network model for CT images, and the patient's two-dimensional MRI medical image is input into the corresponding MRI image segmentation network model for MRI images, so as to output the two-dimensional medical images of the body position regions of each modality.
  • Other two-dimensional medical images of the same body part of the patient may also be input into their corresponding image segmentation network models for processing. If the body part has no disease, the two-dimensional medical images of each modality are all normal images, and no image of a lesion or necrosis appears.
  • If the body part does have a lesion or necrosis, then among the two-dimensional medical images of the multiple modalities, at least one modality's two-dimensional medical image of the body position region can show a two-dimensional medical image of the patient's body necrosis position region in that modality. For example, the two-dimensional medical image of the body position region in the MRI modality may include a two-dimensional medical image of the patient's body necrosis position region in the MRI modality; alternatively, the latter can be understood as another independent two-dimensional medical image existing side by side with the two-dimensional medical image of the body position region in the MRI modality, while still being regarded as a whole together with the two-dimensional medical image of the body position region in the same modality.
  • S130. Perform three-dimensional reconstruction on the two-dimensional medical images of the body position regions of each modality, and then perform point cloud registration and fusion, so as to obtain a multimodal fused three-dimensional medical image.
  • Optionally, the two-dimensional medical images of the body position regions of each modality can instead first be point cloud registered and fused and then three-dimensionally reconstructed to obtain the multimodal fused three-dimensional medical image.
  • Fig. 2 is the second schematic flowchart of the image registration and fusion method provided by the present disclosure. When step S130 performs three-dimensional reconstruction on the two-dimensional medical images of the body position regions of each modality followed by point cloud registration and fusion to obtain a multimodal fused three-dimensional medical image, it may include:
  • S131. Based on a three-dimensional image reconstruction method, reconstruct the two-dimensional medical images of the body position regions of each modality into three-dimensional medical images of the body position regions of each modality. Using a three-dimensional image processing library, the two-dimensional medical images output in step S120 are respectively three-dimensionally reconstructed, correspondingly yielding the three-dimensional medical images of the body position regions of each modality.
  • The three-dimensional image reconstruction can be performed with existing open-source three-dimensional image processing libraries and related techniques, and is not described in detail here (a minimal sketch follows below).
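The disclosure leaves the reconstruction library unspecified. As a minimal sketch only, assuming the segmented slices are available as binary masks and that scikit-image stands in for the unnamed three-dimensional image processing library, the reconstruction could look like this (the voxel spacing and the marching-cubes surface extraction are illustrative assumptions, not part of the disclosure):

```python
# Hypothetical sketch: reconstruct stacked 2D segmentation masks into a 3D
# surface mesh. scikit-image's marching cubes stands in for the unnamed
# "3D image processing library"; spacing values are assumptions.
import numpy as np
from skimage import measure

def reconstruct_surface(mask_slices, spacing=(1.0, 0.5, 0.5)):
    """mask_slices: list of 2D 0/1 arrays, one per CT or MRI slice.
    spacing: assumed voxel spacing in mm (slice, row, column)."""
    volume = np.stack(mask_slices, axis=0).astype(np.uint8)
    # Extract the femur surface as vertices/faces of a triangle mesh.
    verts, faces, normals, _ = measure.marching_cubes(
        volume, level=0.5, spacing=spacing)
    return verts, faces, normals
```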
  • S132. Based on the three-dimensional medical image of the body position region of each modality, determine its body landmark point set and body head landmark point set, which together form the point cloud set corresponding to that modality.
  • Both the body landmark points and the body head landmark points can be set by selecting reference points according to actual needs; they can also be chosen as the body center points and body head center points, so as to determine the body center point set and body head center point set in each modality.
  • The center point of the body region and the center point of the body head both serve well as reference points, so the point cloud set corresponding to each modality is computed and determined from these points.
  • In the image registration and fusion method provided by the present disclosure, the two-dimensional medical images of the at least two modalities include at least two of: two-dimensional CT medical images, two-dimensional MRI medical images, two-dimensional ultrasound medical images, and two-dimensional PET-CT medical images; the body includes a femur, and the body head includes a femoral head.
  • The two-dimensional medical images of the at least two modalities may of course also include two-dimensional medical images of other modalities. When the patients concerned suffer from hip joint disease, two-dimensional medical images of the hip joint, especially of the femur, can be collected to facilitate diagnosis and reference by medical staff. In this embodiment the body is therefore understood to be a femur and, correspondingly, the body head is a femoral head; the two-dimensional medical images of the body position regions output by the models in step S120 are thus, for example, the two-dimensional medical images of the femoral position region in the CT modality and in the MRI modality.
  • Step S120 of the method, inputting the two-dimensional medical images of the at least two modalities into the corresponding pre-trained image segmentation network models to obtain, for each modality, an output two-dimensional medical image of the body position region, may include:
  • inputting the two-dimensional CT medical image into a pre-trained CT image segmentation network model to obtain a CT medical image of the femoral position region; and/or inputting the two-dimensional MRI medical image into a pre-trained MRI image segmentation network model to obtain an MRI medical image of the femoral position region; and/or inputting the two-dimensional ultrasound medical image into a pre-trained ultrasound image segmentation network model to obtain an ultrasound medical image of the femoral position region; and/or inputting the two-dimensional PET-CT medical image into a pre-trained PET-CT image segmentation network model to obtain a PET-CT medical image of the femoral position region.
  • Taking two-dimensional CT and MRI medical images as an example (other cases are analogous), step S120 may include: S121, inputting the two-dimensional CT medical image into a pre-trained CT image segmentation network model to obtain a CT medical image of the femoral position region; and S122, inputting the two-dimensional MRI medical image into a pre-trained MRI image segmentation network model to obtain an MRI medical image of the femoral position region.
  • That is, step S120 may include: inputting the two-dimensional CT medical image and the two-dimensional MRI medical image into their respective pre-trained image segmentation network models, thereby outputting a CT medical image of the femoral position region and an MRI medical image of the femoral position region.
  • The MRI medical image of the femoral position region includes an MRI medical image of the femoral necrosis position region; that is, the output MRI medical image of the femoral position region contains a representation of the MRI medical image of the femoral necrosis position region in the MRI modality. Alternatively, the MRI medical image of the femoral necrosis position region can be understood as another independent two-dimensional medical image coexisting with the MRI medical image of the femoral position region in the MRI modality, while still being logically regarded as a whole together with the MRI medical image of the femoral position region.
  • When step S130 carries out the method described in steps S131-S133, in combination with the setting that the MRI medical image of the femoral position region includes the MRI medical image of the femoral necrosis position region, the process is as follows:
  • Step S131. Based on the three-dimensional image reconstruction method, reconstruct the two-dimensional medical images of the body position regions of each modality into three-dimensional medical images of the body position regions of each modality. That is, using a three-dimensional image processing library, the CT medical image of the femoral position region is reconstructed into a three-dimensional CT medical image of the femoral position region, and the MRI medical image of the femoral position region (including the MRI medical image of the femoral necrosis position region) is reconstructed into a three-dimensional MRI medical image of the femoral position region (including a three-dimensional MRI medical image of the femoral necrosis position region).
  • The three-dimensional MRI medical image of the femoral necrosis position region can be understood either as another independent three-dimensional medical image coexisting with the three-dimensional MRI medical image of the femoral position region, or as contained within it, forming a single three-dimensional medical image together with it.
  • Step S132. Based on the three-dimensional medical image of the body position region of each modality, determine its body landmark point set and body head landmark point set as the point cloud set corresponding to that modality's three-dimensional medical image; these may be its body center point set and body head center point set. That is: based on the three-dimensional CT medical image of the femoral position region, determine its femoral center point set and femoral head center point set as the first point cloud set corresponding to the three-dimensional CT medical image in the CT modality; and based on the three-dimensional MRI medical image of the femoral position region, determine its femoral center point set and femoral head center point set as the second point cloud set corresponding to the three-dimensional MRI medical image in the MRI modality. The femoral center point and the femoral head center point both serve well as reference points, so the point cloud sets corresponding to the three-dimensional medical images of each modality are computed from these points.
  • The determination process includes: based on the three-dimensional CT medical image of the femoral position region, determining its femoral center point set and femoral head center point set as the first point cloud set M corresponding to the CT modality.
  • In the two-dimensional CT medical image of the femoral position region output by the model, the femoral region is displayed in two-dimensional cross-sections, and the femoral head levels are approximately circular, so the femoral head center point can be computed directly; then, at the medullary cavity levels, the medullary cavity center point of each slice is determined to form the femoral center points.
  • These points can also be obtained from the three-dimensional CT medical image of the femoral position region reconstructed from the two-dimensional images.
  • The three-dimensional CT medical images of multiple femoral position regions yield a femoral center point set and a femoral head center point set, whose combination forms the first point cloud set M.
  • Similarly, based on the three-dimensional MRI medical image of the femoral position region (including the three-dimensional MRI medical image of the femoral necrosis position region), its femoral center point set and femoral head center point set are determined as the second point cloud set N corresponding to the MRI modality (see the sketch below).
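As a rough illustration of how the per-slice center points described above could be computed, the following sketch derives slice centroids from binary femur masks to form point sets such as M and N; using the mask centroid as the circle or medullary-cavity center is a simplifying assumption made here for brevity, not the computation prescribed by the disclosure:

```python
# Hypothetical sketch: per-slice center points from binary femur masks,
# illustrating how point cloud sets like M and N could be assembled.
import numpy as np

def slice_center_points(mask_slices, spacing=(1.0, 0.5, 0.5)):
    """Return an (n, 3) array of (z, y, x) centroids in physical units,
    one per slice that contains femur pixels."""
    points = []
    for z, mask in enumerate(mask_slices):
        ys, xs = np.nonzero(mask)
        if xs.size == 0:               # no femur region on this slice
            continue
        # The mask centroid approximates the circle center on the
        # near-circular femoral-head slices and the medullary cavity
        # center on shaft slices (an illustrative simplification).
        points.append((z * spacing[0],
                       ys.mean() * spacing[1],
                       xs.mean() * spacing[2]))
    return np.asarray(points)
```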
  • Step S133. Based on the point cloud registration algorithm, perform point cloud registration and fusion on the point cloud sets corresponding to the three-dimensional medical images of each modality, so as to obtain a multimodal fused medical image. That is, based on the ICP point cloud registration algorithm, the first point cloud set M and the second point cloud set N are registered and fused, yielding a three-dimensional medical image fusing the CT and MRI modalities, with higher registration accuracy and low registration time cost.
  • The ICP point cloud registration algorithm can adopt an existing three-dimensional point cloud registration method: compute, based on principal component analysis, a first reference coordinate system corresponding to the point cloud set to be registered and a second reference coordinate system corresponding to the reference point cloud set; perform initial registration of the two point cloud sets based on the first and second reference coordinate systems; then, based on a multidimensional binary search tree (k-d tree) algorithm, find the points in the initially registered reference point cloud set nearest to those in the point cloud set to be registered, obtaining multiple corresponding point pairs; compute the direction vector angles between the corresponding point pairs; and, based on a preset angle threshold and the direction vector angles, perform fine registration of the point cloud set to be registered and the reference point cloud set, finally obtaining the three-dimensional medical image fusing the CT and MRI modalities (a compact sketch of the general ICP loop follows below).
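For orientation, a compact ICP loop under simplifying assumptions (rigid transform, SciPy k-d tree for the nearest-point search, SVD-based best-fit step) is sketched below; the PCA-based initial registration and the direction-vector angle threshold described above are omitted, so this illustrates the general ICP idea rather than the exact algorithm of the disclosure:

```python
# Hypothetical ICP sketch: k-d tree correspondences + SVD (Kabsch) rigid fit.
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(source, target, max_iter=50, tol=1e-6):
    """Register source (e.g. MRI point set N) onto target (CT point set M)."""
    tree, src, prev_err = cKDTree(target), source.copy(), np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)    # nearest corresponding points
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:  # converged
            break
        prev_err = err
    return src, err
```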
  • Fig. 4 is the three-dimensional CT medical image of the femoral position region after image segmentation and three-dimensional reconstruction by the image registration and fusion method provided by the present disclosure; Fig. 5 is the corresponding three-dimensional MRI medical image of the femoral position region; Fig. 6 is the three-dimensional medical image fusing the CT and MRI modalities after registration and fusion; Fig. 7 is the three-dimensional MRI medical image of the femoral necrosis position region after image segmentation and three-dimensional reconstruction; and Fig. 8 is the three-dimensional medical image fusing the CT modality of the femoral position region with the MRI modalities of the femoral position region and the femoral necrosis position region.
  • As shown in Figs. 4-8, Fig. 4 is the three-dimensional CT medical image of the patient's femoral position region obtained after the three-dimensional reconstruction of the above steps, and Fig. 5 is the three-dimensional MRI medical image of the patient's femoral position region obtained after the same steps. Fig. 4 and Fig. 5 can first be fused to obtain Fig. 6, which shows the fused three-dimensional image when there is no femoral necrosis. Fig. 7 is the three-dimensional MRI medical image of the femoral necrosis position region after image segmentation and three-dimensional reconstruction, which can also be understood as an independent three-dimensional MRI medical image of the femoral necrosis position region. Although the image in Fig. 7 and the three-dimensional MRI medical image of the femoral position region in Fig. 5 are treated together as one whole three-dimensional medical image, during point cloud registration and fusion the two are in essence first fused into a single whole, and this new whole three-dimensional MRI medical image of the femoral position region is then point cloud registered and fused with the three-dimensional CT medical image of the femoral position region.
  • Alternatively, as shown in Figs. 4-8, Fig. 4 and Fig. 5 can first be fused to obtain Fig. 6, and then Fig. 7 and Fig. 6 fused to obtain Fig. 8; that is, the three-dimensional CT medical image of the femoral position region, the three-dimensional MRI medical image of the femoral position region, and the three-dimensional MRI medical image of the femoral necrosis position region are finally registered together by the ICP point cloud registration algorithm, giving the comprehensive result: a three-dimensional medical image fusing the CT and MRI modalities.
  • This fused three-dimensional medical image accurately merges the different features of the CT and MRI modality images and also reveals the patient's true femoral necrosis position region (shown as the irregular small region in the upper interior of the femoral head in Fig. 8), thereby providing medical staff with an accurate reference before treating the patient's hip joint disease.
  • Fig. 9 shows the training process of the CT image segmentation network model in the method, which includes: S610. Acquire two-dimensional CT medical image data sets of multiple patients, where each data set contains a plurality of two-dimensional CT medical images; that is, acquire a large number of two-dimensional CT medical image data sets of patients with hip joint disease. S620. Use at least one of automatic labeling and manual labeling to mark the femoral position region in each two-dimensional CT medical image; automatic labeling can be done with labeling software, and a labeled two-dimensional CT medical image data set is thus obtained.
  • S630. Divide the labeled two-dimensional CT medical images into a CT training data set and a CT test data set according to a preset ratio. Before the division, each two-dimensional CT medical image in the labeled data set undergoes a format conversion so that it can enter the image segmentation network for processing.
  • Optionally, the two-dimensional cross-sectional DICOM format of each labeled two-dimensional CT medical image is converted into a JPG image (a conversion sketch is given below).
  • The labeled and format-converted two-dimensional CT medical images are divided into the CT training data set and the CT test data set at a preset ratio of 7:3.
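As a hedged illustration of the format conversion and the 7:3 split (the min-max windowing, output layout, and random seed are assumptions, not part of the disclosure), the conversion could be sketched with the pydicom and Pillow libraries:

```python
# Hypothetical sketch: DICOM -> JPG conversion plus a 7:3 train/test split.
import random
from pathlib import Path

import numpy as np
import pydicom
from PIL import Image

def dicom_to_jpg(dcm_path, out_dir):
    ds = pydicom.dcmread(str(dcm_path))
    arr = ds.pixel_array.astype(np.float32)
    # Min-max normalization to 8-bit; clinical code would apply a proper
    # CT window (a simplifying assumption here).
    arr = (arr - arr.min()) / max(float(arr.max() - arr.min()), 1e-8) * 255.0
    out = Path(out_dir) / (Path(dcm_path).stem + ".jpg")
    Image.fromarray(arr.astype(np.uint8)).save(out)
    return out

def split_7_3(paths, seed=0):
    paths = list(paths)
    random.Random(seed).shuffle(paths)
    k = int(len(paths) * 0.7)
    return paths[:k], paths[k:]        # training set, test set
```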
  • The CT training data set serves as the input of the CT image segmentation network to train the CT image segmentation network model, while the CT test data set is used subsequently to test and optimize the performance of the trained model.
  • S640. Train a CT image segmentation network model based on the CT training data set in combination with a neural network algorithm and deep learning. Multiple downsampling steps are used to identify the deep features of the image data in the CT training data set, and multiple upsampling steps are used to store the learned deep features back into the image data, so that a coarse image segmentation result is obtained through the first image segmentation network (the backbone image segmentation network); the second image segmentation network (the subordinate image segmentation network) then performs precise segmentation of the points whose classification is uncertain, yielding the precise segmentation result. The CT image segmentation network model is finally trained.
  • Fig. 10 is a schematic flowchart of the training process of the MRI image segmentation network model in the method provided by the present disclosure; the training process includes:
  • S710. Acquire two-dimensional MRI medical image data sets of multiple patients (the same patients as in step S610), where each two-dimensional MRI medical image data set contains a plurality of two-dimensional MRI medical images.
  • S720. Use at least one of automatic labeling and manual labeling to mark the femoral position region in each two-dimensional MRI medical image; for each image in the data set, the femoral position region is labeled automatically or manually, and if femoral necrosis is present, the femoral necrosis position region is labeled as well.
  • Automatic labeling can be done with labeling software. A two-dimensional MRI medical image data set formed by the labeled two-dimensional MRI medical images is thus obtained.
  • S730. Divide the labeled two-dimensional MRI medical images into an MRI training data set and an MRI test data set according to a preset ratio. Before the division, each two-dimensional MRI medical image in the labeled data set undergoes a format conversion so that it can enter the image segmentation network for processing.
  • Optionally, the original format of each labeled two-dimensional MRI medical image is converted into a PNG image.
  • The labeled and format-converted two-dimensional MRI medical images are divided into the MRI training data set and the MRI test data set at a preset ratio of 7:3.
  • The MRI training data set serves as the input of the MRI image segmentation network to train the MRI image segmentation network model, while the MRI test data set is used subsequently to test and optimize the performance of the trained model.
  • S740. Train an MRI image segmentation network model based on the MRI training data set in combination with a neural network algorithm and deep learning. Multiple downsampling steps are used to identify the deep features of the image data in the MRI training data set, and multiple upsampling steps are used to store the learned deep features back into the image data, so that a coarse image segmentation result is obtained through the first image segmentation network (backbone image segmentation network); the second image segmentation network (subordinate image segmentation network) then performs precise segmentation of the points whose classification is uncertain, yielding the precise segmentation result. The MRI image segmentation network model is finally trained.
  • Fig. 11 is the deep learning training network structure diagram of the training processes shown in Fig. 9 and Fig. 10. Referring to Fig. 11, the training of the model can include the following steps:
  • (1) The first image segmentation model (the unet backbone neural network, or unet main network) performs coarse segmentation (coarse prediction) on the CT training data set or the MRI training data set.
  • In the first stage, 4 downsampling steps are performed to learn the deep features of each image in the CT or MRI training data set. Each downsampling stage includes 2 convolutional layers and 1 pooling layer; the convolution kernel size in the convolutional layers is 3*3, the kernel size in the pooling layers is 2*2, and the numbers of convolution kernels in the convolutional layers are 128, 256, 512, and so on.
  • Four upsampling steps are then performed on the downsampled image data to store the deep features of each image learned during downsampling back into the image data. Each upsampling stage includes 1 upsampling layer and 2 convolutional layers, where the convolution kernel size of the convolutional layers is 3*2 and the kernel size in the upsampling layers is 2*2, and the numbers of convolution kernels in the upsampling stages are 512, 256, 128, and so on.
  • The sampling process of the convolutional neural network described above is a feature extraction process for each image: the characteristic parts of each original image are identified, their deep features are learned repeatedly through the convolutional neural network, and they are finally stored back onto the original image. The Adam classification optimizer is used for coarse image classification, yielding the coarse image segmentation result (a condensed sketch of such a backbone follows below).
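A condensed PyTorch sketch of this coarse-segmentation backbone is given below, matching the described settings (4 downsampling stages of two convolutions plus 2x2 max pooling, 4 upsampling stages, relu activations, dropout after the last upsampling). The channel widths beyond the quoted 128/256/512, the transposed-convolution upsampling, the single-channel input, and the use of symmetric 3x3 kernels in the decoder (where the text quotes 3*2) are all assumptions, so treat it as an approximation of the network in Fig. 11, not its definitive implementation:

```python
# Hypothetical condensed U-Net backbone: 4x (2 conv 3x3 + maxpool 2x2) down,
# 4x (upsample 2x2 + 2 conv 3x3) up, relu activations, dropout(0.7) at the end.
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class UNetBackbone(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, widths=(64, 128, 256, 512)):
        super().__init__()
        self.downs = nn.ModuleList()
        self.ups, self.up_convs = nn.ModuleList(), nn.ModuleList()
        prev = in_ch
        for w in widths:                       # encoder stages
            self.downs.append(double_conv(prev, w))
            prev = w
        self.pool = nn.MaxPool2d(2)
        self.bottom = double_conv(widths[-1], widths[-1] * 2)
        prev = widths[-1] * 2
        for w in reversed(widths):             # decoder stages
            self.ups.append(nn.ConvTranspose2d(prev, w, 2, stride=2))
            self.up_convs.append(double_conv(w * 2, w))
            prev = w
        self.dropout = nn.Dropout(p=0.7)       # after the last upsampling
        self.head = nn.Conv2d(prev, n_classes, 1)

    def forward(self, x):
        skips = []
        for down in self.downs:                # 4x downsampling
            x = down(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottom(x)
        for up, conv, skip in zip(self.ups, self.up_convs, reversed(skips)):
            x = conv(torch.cat([up(x), skip], dim=1))   # 4x upsampling
        return self.head(self.dropout(x))      # coarse prediction logits
```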
  • (2) The second image segmentation model (the pointrend subordinate neural network) further finely segments the coarse segmentation result produced by the unet main network.
  • Bilinear interpolation is used to upsample the coarse segmentation result and obtain a dense feature map for each image. From each dense feature map, multiple points of unknown classification are selected, i.e., the N points whose classification is most uncertain, such as points with a confidence/probability of 0.5; the deep feature representations of these N points are computed and extracted, and an MLP multi-layer perceptron predicts, point by point, the class of each of the N points after fine segmentation, for example judging whether a point belongs to the femoral region or the non-femoral region. These steps are repeated until the class of every one of the N points has been predicted; points whose confidence is close to 1 or 0 already have clear classes and need no per-point prediction, which reduces the number of points to predict and improves the overall accuracy of the final segmentation result (a sketch of this point selection follows below).
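The point-selection idea can be sketched as follows (an illustration of the mechanism only, not the full pointrend implementation): pick the N points whose foreground probability is closest to 0.5, sample the dense feature map at those points by bilinear interpolation, and re-classify each point with a small MLP. The uncertainty measure, layer sizes, and two-class setup are illustrative assumptions:

```python
# Hypothetical sketch of fine segmentation: select the N most uncertain
# points (probability near 0.5), sample features bilinearly, classify
# them point by point with a small MLP (akin to a 1x1 convolution).
import torch
import torch.nn as nn
import torch.nn.functional as F

def most_uncertain_points(prob_fg, n_points):
    """prob_fg: (B, 1, H, W) foreground probabilities. Returns (B, N, 2)
    normalized (x, y) coordinates in [-1, 1] of the least confident points."""
    b, _, h, w = prob_fg.shape
    uncertainty = -(prob_fg - 0.5).abs().view(b, -1)   # largest at p == 0.5
    idx = uncertainty.topk(n_points, dim=1).indices
    ys = torch.div(idx, w, rounding_mode="floor")
    xs = idx % w
    coords = torch.stack([xs / (w - 1), ys / (h - 1)], dim=-1) * 2 - 1
    return coords.float()

class PointHead(nn.Module):
    """Tiny per-point MLP classifier (femur vs. non-femur, illustratively)."""
    def __init__(self, feat_ch, n_classes=2, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_ch, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, n_classes))

    def forward(self, feats, coords):
        # Bilinearly sample the dense feature map at the uncertain points.
        sampled = F.grid_sample(feats, coords.unsqueeze(2), align_corners=True)
        sampled = sampled.squeeze(3).permute(0, 2, 1)   # (B, N, C)
        return self.mlp(sampled)                        # per-point logits
```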
  • (3) A loss function is computed based on the final image segmentation result and the CT or MRI training data set; (4) the parameters of the CT or MRI image segmentation network model are adjusted based on the loss function until the model is trained successfully. The purpose of setting the loss function is that, during model pre-training, the number of samples per training step can be adjusted according to changes in the loss function.
  • Optionally, during the coarse segmentation by the unet main network, the initial value of the per-step sample count Batch_Size is set to 6, the learning rate is set to 1e-4, the optimizer is the Adam optimizer, and the loss function is the DICE loss (a sketch follows below).
  • When the full CT or MRI training data set is fed into the unet main network for training, the Batch_Size can be adjusted effectively in real time according to changes in the loss function, improving accuracy in the coarse segmentation stage.
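The DICE loss named above could look like the following sketch, written in its common soft binary form together with the quoted hyperparameters (Batch_Size 6, learning rate 1e-4, Adam); the smoothing constant and the sigmoid/binary formulation are assumptions, and `UNetBackbone` refers to the hypothetical sketch above:

```python
# Hypothetical sketch: soft DICE loss plus the quoted training configuration
# (Batch_Size = 6, learning rate = 1e-4, Adam optimizer).
import torch

def dice_loss(logits, target, smooth=1.0):
    """logits: (B, 1, H, W) raw scores; target: (B, 1, H, W) 0/1 masks."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + smooth) / (denom + smooth)).mean()

# model = UNetBackbone(in_ch=1, n_classes=1)   # from the sketch above
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# loader = torch.utils.data.DataLoader(train_set, batch_size=6, shuffle=True)
```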
  • The method further includes setting an activation function after each convolutional layer; all convolutional layers are followed by activation functions, such as the relu, Sigmoid, tanh, or leaky relu activation function, to strengthen the nonlinear factors of the convolutional neural network so that it can better handle more complex computations.
  • A dropout layer is set after the last upsampling; that is, after the last upsampling layer there is a dropout layer used to temporarily drop some neural network units from the network with a certain probability during training of the deep learning network, further improving the accuracy of model training. The probability of the dropout layer is set to 0.7 (a configurable-activation variant of the conv block is sketched below).
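If the activation is to be swapped as suggested (relu, Sigmoid, tanh, leaky relu), the convolution block from the earlier sketch could be parameterized as below; this is purely illustrative and not part of the disclosure:

```python
# Hypothetical variant: configurable activation for the conv block.
import torch.nn as nn

def double_conv(c_in, c_out, act=nn.ReLU):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), act(),
        nn.Conv2d(c_out, c_out, 3, padding=1), act())

# e.g. double_conv(64, 128, act=nn.LeakyReLU) or double_conv(64, 128, act=nn.Tanh)
```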
  • Fig. 12 is the first schematic structural diagram of the image registration and fusion apparatus provided by the present disclosure. As shown in Fig. 12, the apparatus includes an acquisition module 1110, a two-dimensional image processing module 1120, and a three-dimensional reconstruction and fusion module 1130, where the acquisition module 1110 is configured to acquire two-dimensional medical images of at least two modalities of a patient; the two-dimensional image processing module 1120 is configured to input the two-dimensional medical images of the at least two modalities into the pre-trained image segmentation network models, so as to obtain, for each modality, an output two-dimensional medical image of the body position region; and the three-dimensional reconstruction and fusion module 1130 is configured to three-dimensionally reconstruct the two-dimensional medical images of the body position regions of each modality and then perform point cloud registration and fusion, so as to obtain a multimodal fused three-dimensional medical image.
  • Fig. 13 is the second schematic structural diagram of the image registration and fusion apparatus provided by the present disclosure. On the basis of the embodiment shown in Fig. 12, the three-dimensional reconstruction and fusion module 1130 further includes a three-dimensional image reconstruction module 1131, a point set determination module 1132, and a registration module 1133, where the three-dimensional image reconstruction module 1131 is configured to reconstruct, based on the three-dimensional image reconstruction method, the two-dimensional medical images of the body position regions of each modality into three-dimensional medical images of the body position regions of each modality; the point set determination module 1132 is configured to determine, based on the three-dimensional medical image of the body position region of each modality, its body landmark point set and body head landmark point set as the point cloud set corresponding to that modality's three-dimensional medical image; and the registration module 1133 is configured to perform, based on the point cloud registration algorithm, point cloud registration and fusion on the point cloud sets corresponding to the three-dimensional medical images of each modality, so as to obtain the multimodal fused three-dimensional medical image.
  • Fig. 14 is a schematic structural diagram of the electronic device provided by the present disclosure. The electronic device may include a processor 1310, a communications interface 1320, a memory 1330, and a communication bus 1340, where the processor 1310, the communications interface 1320, and the memory 1330 communicate with each other through the communication bus 1340. The processor 1310 may invoke logic instructions in the memory 1330 to execute all or part of the steps of the image registration and fusion method or the model training method.
  • The present disclosure also provides a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium; the computer program includes program instructions that, when executed by a computer, enable the computer to execute all or part of the steps of the image registration and fusion method or the model training method provided by the above embodiments.
  • The present disclosure also provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements all or part of the steps of the image registration and fusion method or the model training method described in the above embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

An image registration and fusion method and apparatus, a model training method, and an electronic device. The fusion method includes: acquiring two-dimensional medical images of at least two modalities of a patient (S110); inputting the two-dimensional medical images of the at least two modalities into corresponding pre-trained image segmentation network models, so as to obtain, for each modality, an output two-dimensional medical image of the body position region (S120); and performing three-dimensional reconstruction on the two-dimensional medical images of the body position regions of each modality, followed by point cloud registration and fusion, so as to obtain a multimodal fused three-dimensional medical image (S130). The method offers high image registration accuracy and low time cost, can handle relatively complex multimodal fusion cases, and also applies to non-rigid registration; the registration results are accurate and can provide medical staff with an accurate treatment reference.

Description

Image registration and fusion method and apparatus, model training method, and electronic device

Cross-reference to related applications

This application claims priority to Chinese patent application No. 202110633927.9, filed with the China Patent Office on June 7, 2021 and entitled "Multimodal medical image registration and fusion method and apparatus, and electronic device", the entire contents of which are incorporated herein by reference.

Technical field

The present disclosure relates to the technical field of medical image processing, and in particular to an image registration and fusion method and apparatus, a model training method, and an electronic device.
Background

Existing multimodal image registration techniques fall mainly into two categories. 1. The iterative closest point method, which computes the transformation matrix between images using the iterative closest point algorithm. It places high demands on the initial alignment of the images, so the solution easily falls into a local optimum; it also requires a coarse registration step before solving, making it relatively complex; and it applies only to rigid registration problems, so when the patient's acquired multimodal images were taken at different times or with different patient poses, the registration and fusion results contain large errors. 2. Methods that solve an optimization problem over a distance function between the images to be registered, so that the distance function is minimized after the images are deformed. Although this approach can handle non-rigid registration to some extent, it relies on a distance function between the images, imposing a high similarity requirement on the images to be registered; when the patient's acquired images of different modalities differ substantially, its registration accuracy is low. Moreover, when solving non-rigid registration problems, the large number of parameters makes the method computationally complex, so the time cost of the overall registration process is excessive.
Summary

The present disclosure provides an image registration and fusion method, including: acquiring two-dimensional medical images of at least two modalities of a patient; inputting the two-dimensional medical images of the at least two modalities into corresponding pre-trained image segmentation network models, so as to obtain, for each modality, an output two-dimensional medical image of the body position region; and performing three-dimensional reconstruction on the two-dimensional medical images of the body position regions of each modality, followed by point cloud registration and fusion, so as to obtain a multimodal fused three-dimensional medical image.

The present disclosure also provides a model training method, which includes a training process for a CT image segmentation network model and a training process for an MRI image segmentation network model. The training process of the CT image segmentation network model includes: acquiring two-dimensional CT medical image data sets of multiple patients, where each two-dimensional CT medical image data set contains a plurality of two-dimensional CT medical images; marking the femoral position region in each two-dimensional CT medical image using at least one of automatic labeling and manual labeling; dividing the labeled two-dimensional CT medical images into a CT training data set and a CT test data set according to a preset ratio; and training a CT image segmentation network model based on the CT training data set in combination with a neural network algorithm and deep learning. The training process of the MRI image segmentation network model may include: acquiring two-dimensional MRI medical image data sets of multiple patients, where each two-dimensional MRI medical image data set contains a plurality of two-dimensional MRI medical images; marking the femoral position region in each two-dimensional MRI medical image using at least one of automatic labeling and manual labeling; dividing the labeled two-dimensional MRI medical images into an MRI training data set and an MRI test data set according to a preset ratio; and training an MRI image segmentation network model based on the MRI training data set in combination with a neural network algorithm and deep learning.

The present disclosure also provides an image registration and fusion apparatus, including: an acquisition module configured to acquire two-dimensional medical images of at least two modalities of a patient; a two-dimensional image processing module configured to input the two-dimensional medical images of the at least two modalities into the pre-trained image segmentation network models, so as to obtain, for each modality, an output two-dimensional medical image of the body position region; and a three-dimensional reconstruction and fusion module configured to perform three-dimensional reconstruction on the two-dimensional medical images of the body position regions of each modality and then perform point cloud registration and fusion, so as to obtain a multimodal fused three-dimensional medical image.

The present disclosure also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, all or part of the steps of any of the image registration and fusion methods or model training methods above are implemented. The present disclosure also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements all or part of the steps of any of the image registration and fusion methods or model training methods above. The present disclosure provides an image registration and fusion method and apparatus, a model training method, and an electronic device: the method performs image segmentation on two-dimensional medical images of different modalities of the same part of the same patient, then performs three-dimensional reconstruction, and finally carries out accurate point cloud registration and fusion of the reconstructed three-dimensional medical images of the different modalities, obtaining a multimodal fused three-dimensional medical image. The method offers high multimodal image registration accuracy and low time cost, can handle relatively complex multimodal fusion cases, and also applies to non-rigid registration; the registration results are accurate and can provide medical staff with a reliable treatment reference.
Brief description of the drawings

Fig. 1 is the first schematic flowchart of the image registration and fusion method provided by the present disclosure;

Fig. 2 is the second schematic flowchart of the image registration and fusion method provided by the present disclosure;

Fig. 3A is a two-dimensional CT medical image of the femoral position region provided by an embodiment of the present disclosure; Fig. 3B is a two-dimensional MRI medical image of the femoral position region provided by an embodiment of the present disclosure; Fig. 3C is a two-dimensional MRI medical image of the femoral necrosis position region provided by an embodiment of the present disclosure;

Fig. 4 is a three-dimensional CT medical image of the femoral position region after image segmentation and three-dimensional reconstruction by the image registration and fusion method provided by the present disclosure;

Fig. 5 is a three-dimensional MRI medical image of the femoral position region after image segmentation and three-dimensional reconstruction by the image registration and fusion method provided by the present disclosure;

Fig. 6 is a three-dimensional medical image fusing the CT and MRI modalities after registration and fusion by the image registration and fusion method provided by the present disclosure;

Fig. 7 is a three-dimensional MRI medical image of the femoral necrosis position region after image segmentation and three-dimensional reconstruction by the image registration and fusion method provided by the present disclosure;

Fig. 8 is a three-dimensional medical image fusing the CT modality of the femoral position region with the MRI modalities of the femoral position region and the femoral necrosis position region after registration and fusion by the image registration and fusion method provided by the present disclosure;

Fig. 9 is a schematic flowchart of the training process of the CT image segmentation network model in the method provided by the present disclosure;

Fig. 10 is a schematic flowchart of the training process of the MRI image segmentation network model in the method provided by the present disclosure;

Fig. 11 is a deep learning training network structure diagram of the training processes shown in Fig. 9 and Fig. 10;

Fig. 12 is the first schematic structural diagram of the image registration and fusion apparatus provided by the present disclosure; Fig. 13 is the second schematic structural diagram of the image registration and fusion apparatus provided by the present disclosure; Fig. 14 is a schematic structural diagram of the electronic device provided by the present disclosure.

Reference numerals: 1110: acquisition module; 1120: two-dimensional image processing module; 1130: three-dimensional reconstruction and fusion module; 1131: three-dimensional image reconstruction module; 1132: point set determination module; 1133: registration module; 1310: processor; 1320: communications interface; 1330: memory; 1340: communication bus.
Detailed description

CT medical images have high spatial resolution and can clearly localize rigid bone, but their imaging contrast for soft tissue is low and they cannot clearly display the lesion itself; MRI medical images provide high-contrast imaging of anatomical structures such as soft tissue, blood vessels, and organs, but their spatial resolution is lower than that of CT medical images, and they lack rigid bone structure as a positional reference for lesions. Therefore, in clinical applications, medical images of a single modality often cannot provide comprehensive medical reference information for the medical staff concerned.

The present disclosure combines an artificial intelligence image segmentation algorithm with multimodal medical image fusion technology, integrates the advantages of multiple medical imaging techniques, and extracts the complementary information of medical images of different modalities, generating after fusion a composite image that contains more effective medical reference information than any single-modality image, to help medical staff diagnose, stage, and treat many types of conditions such as osteonecrosis of the femoral head.
The present disclosure provides an image registration and fusion method. Fig. 1 is the first schematic flowchart of the image registration and fusion method provided by the present disclosure; as shown in Fig. 1, the method includes:

S110. Acquire two-dimensional medical images of at least two modalities of a patient.

Two or more modalities of two-dimensional medical images are acquired for the same body part of the same patient; for example, for a patient with hip joint disease, two-dimensional medical images of the patient's hip joint femur in multiple modalities, such as a two-dimensional CT medical image and a two-dimensional MRI medical image, are acquired. S120. Input the two-dimensional medical images of the at least two modalities into the corresponding pre-trained image segmentation network models, so as to obtain, for each modality, an output two-dimensional medical image of the body position region.
Referring to Figs. 3A-3C: Fig. 3A is a two-dimensional CT medical image of the femoral position region provided by an embodiment of the present disclosure; Fig. 3B is a two-dimensional MRI medical image of the femoral position region provided by an embodiment of the present disclosure; Fig. 3C is a two-dimensional MRI medical image of the femoral necrosis position region provided by an embodiment of the present disclosure.

The two-dimensional medical images of the multiple modalities acquired in step S110 are input one by one into the corresponding pre-trained image segmentation network models; for example, the patient's two-dimensional CT medical image is input into the corresponding CT image segmentation network model for CT images, and the patient's two-dimensional MRI medical image is input into the corresponding MRI image segmentation network model for MRI images, so as to correspondingly output the two-dimensional medical images of the body position regions of each modality. Of course, other two-dimensional medical images of the same body part of the patient may also be input into their respective image segmentation network models for processing. If the body part has no disease, the two-dimensional medical images of each modality are all normal images, and no image of a lesion or necrosis appears. If the body part does have a lesion or necrosis, then among the two-dimensional medical images of the multiple modalities, at least one modality's two-dimensional medical image of the body position region can show a two-dimensional medical image of the patient's body necrosis position region in that modality. For example, among the output two-dimensional medical images of the body position region in the CT modality and in the MRI modality, at least one, such as the MRI one, includes a two-dimensional medical image of the patient's body necrosis position region in the MRI modality; alternatively, that image can be understood as another independent two-dimensional medical image existing side by side with the two-dimensional medical image of the body position region in the MRI modality, while still being regarded as a whole together with the two-dimensional medical image of the body position region in the same modality.

S130. Perform three-dimensional reconstruction on the two-dimensional medical images of the body position regions of each modality, and then perform point cloud registration and fusion, so as to obtain a multimodal fused three-dimensional medical image. Optionally, the two-dimensional medical images of the body position regions of each modality can instead first be point cloud registered and fused and then three-dimensionally reconstructed to obtain the multimodal fused three-dimensional medical image. The two-dimensional medical images of the body position regions of each modality obtained in step S120 are respectively three-dimensionally reconstructed to obtain the three-dimensional medical images of the body position regions of each modality, which are then point cloud registered and fused to obtain the multimodal fused three-dimensional medical image.
Fig. 2 is the second schematic flowchart of the image registration and fusion method provided by the present disclosure. As shown in Fig. 2, on the basis of the embodiment shown in Fig. 1, when step S130 performs three-dimensional reconstruction on the two-dimensional medical images of the body position regions of each modality followed by point cloud registration and fusion to obtain a multimodal fused three-dimensional medical image, it may include:

S131. Based on a three-dimensional image reconstruction method, reconstruct the two-dimensional medical images of the body position regions of each modality into three-dimensional medical images of the body position regions of each modality. Based on the three-dimensional image reconstruction method (using a three-dimensional image processing library), the two-dimensional medical images of the body position regions of each modality output in step S120 are respectively three-dimensionally reconstructed, correspondingly yielding the three-dimensional medical images of the body position regions of each modality. The three-dimensional image reconstruction can be performed with reference to existing open-source three-dimensional image processing libraries and related techniques, and is not described in detail here.

S132. Based on the three-dimensional medical image of the body position region of each modality, determine its body landmark point set and body head landmark point set as the point cloud set corresponding to that modality. Based on the three-dimensional medical images of the body position regions of each modality reconstructed in step S131, the point cloud set corresponding to each modality is determined from its body landmark point set and body head landmark point set. Both the body landmark points and the body head landmark points can be set by selecting reference points according to actual needs. Of course, they can also be chosen as the body center points and body head center points, so as to determine the body center point set and body head center point set in each modality. The center point of the body region and the center point of the body head both serve well as reference points, so the point cloud set corresponding to each modality is computed and determined from these points.

S133. Based on a point cloud registration algorithm, perform point cloud registration and fusion on the point cloud sets corresponding to the three-dimensional medical images of each modality, so as to obtain a multimodal fused three-dimensional medical image. Finally, the point cloud sets corresponding to the three-dimensional medical images of each modality determined in step S132 are comprehensively registered and fused based on the point cloud registration algorithm, ultimately yielding the multimodal fused three-dimensional medical image. According to the image registration and fusion method provided by the present disclosure, the two-dimensional medical images of the at least two modalities include at least two of two-dimensional CT medical images, two-dimensional MRI medical images, two-dimensional ultrasound medical images, and two-dimensional PET-CT medical images; the body includes a femur, and the body head includes a femoral head.
In the image registration and fusion method provided by the present disclosure, the two-dimensional medical images of the at least two modalities include at least two of two-dimensional CT medical images, two-dimensional MRI medical images, two-dimensional ultrasound medical images, and two-dimensional PET-CT medical images, and of course may also include two-dimensional medical images of other modalities. When the patients concerned suffer from hip joint disease, two-dimensional medical images of the hip joint, especially of the femur, can be collected to facilitate diagnosis and reference by medical staff. In this embodiment the body is therefore understood to be a femur and, correspondingly, the body head is a femoral head; the two-dimensional medical images of the body position regions of each modality output by the models in step S120 are thus, for example, the two-dimensional medical images of the femoral position region in the CT modality and in the MRI modality.

According to the image registration and fusion method provided by the present disclosure, on the basis of the above embodiments, when the two-dimensional medical images of the at least two modalities include at least two of two-dimensional CT, MRI, ultrasound, and PET-CT medical images, and preferably the body includes a femur and the body head includes a femoral head, step S120 of the method, inputting the two-dimensional medical images of the at least two modalities into the corresponding pre-trained image segmentation network models to obtain, for each modality, an output two-dimensional medical image of the body position region, may include:

inputting the two-dimensional CT medical image into a pre-trained CT image segmentation network model to obtain a CT medical image of the femoral position region; and/or inputting the two-dimensional MRI medical image into a pre-trained MRI image segmentation network model to obtain an MRI medical image of the femoral position region; and/or inputting the two-dimensional ultrasound medical image into a pre-trained ultrasound image segmentation network model to obtain an ultrasound medical image of the femoral position region; and/or inputting the two-dimensional PET-CT medical image into a pre-trained PET-CT image segmentation network model to obtain a PET-CT medical image of the femoral position region.

The embodiment of the present disclosure is described taking two-dimensional CT medical images and two-dimensional MRI medical images as an example; other cases are analogous. In this case, step S120 may include: S121, inputting the two-dimensional CT medical image into the pre-trained CT image segmentation network model to obtain a CT medical image of the femoral position region; and S122, inputting the two-dimensional MRI medical image into the pre-trained MRI image segmentation network model to obtain an MRI medical image of the femoral position region. Moreover, when necrosis or a lesion is present in the patient's femoral position region, the MRI medical image of the femoral position region is further set to include an MRI medical image of the femoral necrosis position region; alternatively, a two-dimensional MRI medical image containing the femoral necrosis can be separately input into the pre-trained MRI image segmentation network model to obtain a separate MRI medical image of the femoral necrosis position region. That is, step S120 may include: inputting the two-dimensional CT medical image and the two-dimensional MRI medical image into their respective pre-trained image segmentation network models, thereby outputting a CT medical image of the femoral position region and an MRI medical image of the femoral position region, where the MRI medical image of the femoral position region includes the MRI medical image of the femoral necrosis position region; that is, the output MRI medical image of the femoral position region contains a representation of the MRI medical image of the femoral necrosis position region in the MRI modality. Alternatively, the MRI medical image of the femoral necrosis position region can be understood as another independent two-dimensional medical image coexisting with the MRI medical image of the femoral position region in the MRI modality, while still being logically regarded as a whole together with the MRI medical image of the femoral position region.
When step S130 carries out the method of steps S131-S133, in combination with the setting that the MRI medical image of the femoral position region includes the MRI medical image of the femoral necrosis position region, the process is as follows:

Step S131. Based on the three-dimensional image reconstruction method, reconstruct the two-dimensional medical images of the body position regions of each modality into three-dimensional medical images of the body position regions of each modality. That is, based on the three-dimensional image reconstruction method, a three-dimensional image processing library can be used to reconstruct the CT medical image of the femoral position region into a three-dimensional CT medical image of the femoral position region, and to reconstruct the MRI medical image of the femoral position region (including the MRI medical image of the femoral necrosis position region) into a three-dimensional MRI medical image of the femoral position region (including a three-dimensional MRI medical image of the femoral necrosis position region). The three-dimensional MRI medical image of the femoral necrosis position region can be understood either as another independent three-dimensional medical image coexisting with the three-dimensional MRI medical image of the femoral position region, or as contained within it, forming a single three-dimensional medical image together with it.

Step S132. Based on the three-dimensional medical image of the body position region of each modality, determine its body landmark point set and body head landmark point set as the point cloud set corresponding to that modality's three-dimensional medical image; these may be its body center point set and body head center point set. That is: based on the three-dimensional CT medical image of the femoral position region, determine its femoral center point set and femoral head center point set as the first point cloud set corresponding to the three-dimensional CT medical image in the CT modality; and based on the three-dimensional MRI medical image of the femoral position region, determine its femoral center point set and femoral head center point set as the second point cloud set corresponding to the three-dimensional MRI medical image in the MRI modality. The femoral center point and the femoral head center point both serve well as reference points, so the point cloud sets corresponding to the three-dimensional medical images of each modality are computed from these points. The determination process includes: based on the three-dimensional CT medical image of the femoral position region, determining its femoral center point set and femoral head center point set as the first point cloud set M corresponding to the CT modality. In the two-dimensional CT medical image of the femoral position region output by the model, the femoral region is displayed in two-dimensional cross-sections and the femoral head levels are approximately circular, so the femoral head center point can be computed directly; then, at the medullary cavity levels, the medullary cavity center point of each slice is determined to form the femoral center points. These points can also be obtained from the three-dimensional CT medical image of the femoral position region reconstructed from the two-dimensional images. The three-dimensional CT medical images of multiple femoral position regions yield a femoral center point set and a femoral head center point set, whose combination forms the first point cloud set M. Similarly, based on the three-dimensional MRI medical image of the femoral position region (including the three-dimensional MRI medical image of the femoral necrosis position region), its femoral center point set and femoral head center point set are determined as the second point cloud set N corresponding to the MRI modality.

Step S133. Based on the point cloud registration algorithm, perform point cloud registration and fusion on the point cloud sets corresponding to the three-dimensional medical images of each modality, so as to obtain a multimodal fused medical image. That is, based on the ICP point cloud registration algorithm, the two point cloud sets, the first point cloud set M and the second point cloud set N, are registered and fused, yielding a three-dimensional medical image fusing the CT and MRI modalities with higher registration accuracy and low registration time cost. The ICP point cloud registration algorithm can adopt an existing three-dimensional point cloud registration method: compute, based on principal component analysis, a first reference coordinate system corresponding to the point cloud set to be registered and a second reference coordinate system corresponding to the reference point cloud set; perform initial registration of the point cloud set to be registered and the reference point cloud set based on the first and second reference coordinate systems; then, based on a multidimensional binary search tree algorithm, find the points in the initially registered reference point cloud set nearest to those in the point cloud set to be registered, obtaining multiple corresponding point pairs; compute the direction vector angles between the corresponding point pairs; and, based on a preset angle threshold and the direction vector angles, perform fine registration of the point cloud set to be registered and the reference point cloud set, finally obtaining the three-dimensional medical image fusing the CT and MRI modalities.
Fig. 4 is a three-dimensional CT medical image of the femoral position region after image segmentation and three-dimensional reconstruction by the image registration and fusion method provided by the present disclosure; Fig. 5 is a three-dimensional MRI medical image of the femoral position region after image segmentation and three-dimensional reconstruction; Fig. 6 is a three-dimensional medical image fusing the CT and MRI modalities after registration and fusion; Fig. 7 is a three-dimensional MRI medical image of the femoral necrosis position region after image segmentation and three-dimensional reconstruction; and Fig. 8 is a three-dimensional medical image fusing the CT modality of the femoral position region with the MRI modalities of the femoral position region and the femoral necrosis position region after registration and fusion.

As shown in Figs. 4-8, Fig. 4 is the three-dimensional CT medical image of the patient's femoral position region obtained after the three-dimensional reconstruction of the above steps, and Fig. 5 is the three-dimensional MRI medical image of the patient's femoral position region obtained after the same steps. Fig. 4 and Fig. 5 can first be fused to obtain Fig. 6, which shows the fused three-dimensional image when there is no femoral necrosis. Fig. 7 is the three-dimensional MRI medical image of the femoral necrosis position region after image segmentation and three-dimensional reconstruction, which can also be understood as an independent three-dimensional MRI medical image of the femoral necrosis position region. Although the three-dimensional MRI medical image of the femoral necrosis position region in Fig. 7 and the three-dimensional MRI medical image of the femoral position region in Fig. 5 are treated together as one whole three-dimensional medical image, during point cloud registration and fusion the two are in essence first fused into a single whole, and this new whole three-dimensional MRI medical image of the femoral position region is then point cloud registered and fused with the three-dimensional CT medical image of the femoral position region.

Of course, as shown in Figs. 4-8, Fig. 4 and Fig. 5 can also first be fused to obtain Fig. 6, and then Fig. 7 and Fig. 6 fused to obtain Fig. 8; that is, the three-dimensional CT medical image of the femoral position region, the three-dimensional MRI medical image of the femoral position region, and the three-dimensional MRI medical image of the femoral necrosis position region are finally registered together by the ICP point cloud registration algorithm, giving the comprehensive result: a three-dimensional medical image fusing the CT and MRI modalities. This fused image accurately merges the different features of the CT and MRI modality images and also reveals the patient's true femoral necrosis position region (shown as the irregular small region in the upper interior of the femoral head in Fig. 8), thereby providing medical staff with an accurate reference before treating the patient's hip joint disease.
Fig. 9 is a schematic flowchart of the training process of the CT image segmentation network model in the method provided by the present disclosure. As shown in Fig. 9, the training process of the CT image segmentation network model includes: S610. Acquire two-dimensional CT medical image data sets of multiple patients, where each two-dimensional CT medical image data set contains a plurality of two-dimensional CT medical images; that is, acquire a large number of two-dimensional CT medical image data sets of patients with hip joint disease. S620. Use at least one of automatic labeling and manual labeling to mark the femoral position region in each two-dimensional CT medical image; for each two-dimensional CT medical image in the data set, the femoral position region is labeled automatically or manually, forming the basis of our database. Automatic labeling can be done with labeling software. A two-dimensional CT medical image data set formed by the labeled two-dimensional CT medical images is thus obtained. S630. Divide the labeled two-dimensional CT medical images into a CT training data set and a CT test data set according to a preset ratio; before the division, each two-dimensional CT medical image in the labeled data set must also undergo a format conversion so that it can smoothly enter the image segmentation network for processing. Optionally, the two-dimensional cross-sectional DICOM format of each labeled two-dimensional CT medical image is converted into a JPG image.

The labeled and format-converted two-dimensional CT medical images are divided into the CT training data set and the CT test data set at a preset ratio of 7:3. The CT training data set serves as the input of the CT image segmentation network to train the CT image segmentation network model, while the CT test data set is used subsequently to test and optimize the performance of the CT image segmentation network model.

S640. Train a CT image segmentation network model based on the CT training data set in combination with a neural network algorithm and deep learning. Based on the CT training data set, multiple downsampling steps are used to identify the deep features of the image data in the CT training data set, and multiple upsampling steps are used to store the learned deep features back into the image data, so that a coarse image segmentation result is obtained through the first image segmentation network (the backbone image segmentation network), and the second image segmentation network (the subordinate image segmentation network) then performs precise segmentation of the points whose classification is uncertain, yielding the precise segmentation result. The CT image segmentation network model is finally trained.
Alternatively, Fig. 10 is a schematic flowchart of the training process of the MRI image segmentation network model in the method provided by the present disclosure. As shown in Fig. 10, the training process of the MRI image segmentation network model includes:

S710. Acquire two-dimensional MRI medical image data sets of multiple patients, where each two-dimensional MRI medical image data set contains a plurality of two-dimensional MRI medical images; that is, acquire a large number of two-dimensional MRI medical image data sets of patients with hip joint disease (the same patients as in step S610).

S720. Use at least one of automatic labeling and manual labeling to mark the femoral position region in each two-dimensional MRI medical image; for each two-dimensional MRI medical image in the data set, the femoral position region is labeled automatically or manually, and of course, if femoral necrosis is present, the femoral necrosis position region should also be labeled, likewise forming the basis of our database. Automatic labeling can be done with labeling software. A two-dimensional MRI medical image data set formed by the labeled two-dimensional MRI medical images is thus obtained.

S730. Divide the labeled two-dimensional MRI medical images into an MRI training data set and an MRI test data set according to a preset ratio; before the division, each two-dimensional MRI medical image in the labeled data set must also undergo a format conversion so that it can smoothly enter the image segmentation network for processing. Optionally, the original format of each labeled two-dimensional MRI medical image is converted into a PNG image.

The labeled and format-converted two-dimensional MRI medical images are divided into the MRI training data set and the MRI test data set at a preset ratio of 7:3. The MRI training data set serves as the input of the MRI image segmentation network to train the MRI image segmentation network model, while the MRI test data set is used subsequently to test and optimize the performance of the MRI image segmentation network model.

S740. Train an MRI image segmentation network model based on the MRI training data set in combination with a neural network algorithm and deep learning. Based on the MRI training data set, multiple downsampling steps are used to identify the deep features of the image data in the MRI training data set, and multiple upsampling steps are used to store the learned deep features back into the image data, so that a coarse image segmentation result is obtained through the first image segmentation network (the backbone image segmentation network), and the second image segmentation network (the subordinate image segmentation network) then performs precise segmentation of the points whose classification is uncertain, yielding the precise segmentation result. The MRI image segmentation network model is finally trained.
According to the image registration and fusion method provided by the present disclosure, the CT image segmentation network model or the MRI image segmentation network model is trained on the CT training data set or the MRI training data set in combination with a neural network algorithm and deep learning. FIG. 11 is the deep-learning training network structure diagram of the training processes shown in FIGS. 9 and 10. With reference to FIG. 11, the model training process may include the following steps:
(1) Coarse segmentation of the CT training data set or the MRI training data set by the first image segmentation model: performing multiple down-sampling operations on the image data in the CT or MRI training data set, to identify the deep features of each image through processing by convolutional layers and pooling layers; performing multiple up-sampling operations on the down-sampled image data, to store the deep features back into the image data through processing by up-sampling layers and convolutional layers; and performing coarse image classification with the Adam optimizer to obtain the coarse segmentation result;
First, the first image segmentation model (the unet backbone neural network, "unet backbone" for short) performs coarse segmentation (coarse prediction) on the CT training data set or the MRI training data set. The first stage performs four down-sampling operations to learn the deep features of each image in the training data set. Each down-sampling stage includes two convolutional layers and one pooling layer; the convolutional kernels are of size 3×3, the pooling kernels of size 2×2, and the number of kernels per convolutional layer is 128, 256, 512, and so on. Four up-sampling operations are then performed on the down-sampled image data, to store the deep features learned during down-sampling back into the image data. Each up-sampling stage includes one up-sampling layer and two convolutional layers; the convolutional kernels are of size 3×2, the up-sampling kernels of size 2×2, and the number of kernels per up-sampling stage is 512, 256, 128, and so on. This sampling process of the convolutional neural network is the feature-extraction process for each image: the feature parts of each original image are identified, their deep features are learned through repeated passes of the convolutional neural network, and finally stored back onto the original image. Coarse image classification is performed with the Adam optimizer, yielding the coarse segmentation result.
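A compact sketch of such a four-stage encoder-decoder follows (PyTorch; the single-channel input, 64-channel stem, and transposed-convolution up-sampling are illustrative assumptions, while the 3×3 convolutions, 2×2 pooling, and doubling channel progression follow the description above):

```python
import torch
import torch.nn as nn

def double_conv(cin, cout):
    """Two 3x3 convolutions, each followed by a ReLU activation."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNetBackbone(nn.Module):
    """4x down-sampling / 4x up-sampling backbone for coarse segmentation."""
    def __init__(self, in_ch=1, n_classes=2, chs=(64, 128, 256, 512, 1024)):
        super().__init__()
        self.downs = nn.ModuleList()
        c = in_ch
        for ch in chs:                         # encoder stages + bottleneck
            self.downs.append(double_conv(c, ch))
            c = ch
        self.pool = nn.MaxPool2d(2)            # 2x2 pooling per stage
        self.upconvs, self.ups = nn.ModuleList(), nn.ModuleList()
        for ch in reversed(chs[:-1]):          # decoder stages
            self.upconvs.append(nn.ConvTranspose2d(c, ch, 2, stride=2))
            self.ups.append(double_conv(2 * ch, ch))
            c = ch
        self.head = nn.Conv2d(c, n_classes, 1)  # coarse per-pixel logits

    def forward(self, x):
        skips = []
        for i, down in enumerate(self.downs):
            x = down(x)
            if i < len(self.downs) - 1:
                skips.append(x)                # deep features to restore later
                x = self.pool(x)
        for upconv, up, skip in zip(self.upconvs, self.ups, reversed(skips)):
            x = up(torch.cat([upconv(x), skip], dim=1))
        return self.head(x)
```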
(2) Fine segmentation of the coarse segmentation result by the second image segmentation model: selecting feature-point data of a preset confidence from the deep features, performing bilinear interpolation on the feature-point data, and identifying the category of the deep features based on the computed feature-point data, to obtain the final image segmentation result.
The second image segmentation model (the pointrend subordinate neural network, "pointrend" for short) then performs further fine segmentation on the coarse segmentation result produced by the unet backbone. The coarse segmentation result is up-sampled by bilinear interpolation to obtain a dense feature map for each image. For each dense feature map, multiple points of unknown classification are selected, i.e., the N points whose classification is most uncertain, for example points whose confidence/probability is 0.5; the deep feature representations of these N points are computed and extracted, and an MLP multi-layer perceptron predicts, point by point, the post-fine-segmentation classification of each of the N points, for example whether a point belongs to the femur region or the non-femur region. These steps are repeated until the post-fine-segmentation classification of every one of the N points has been predicted. When the MLP predicts the classifications point by point, a small classifier decides which class each point belongs to, which is in fact equivalent to prediction with a 1×1 convolution. Points whose confidence is close to 1 or 0 already have a clear classification and therefore need no point-wise prediction; this reduces the number of points to be predicted and improves the overall accuracy of the final image segmentation result. The optimized fine segmentation result (optimized prediction) is thereby obtained.
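The point-wise refinement can be sketched as follows (PyTorch; a shared per-point MLP, implemented with length-1 convolutions, re-classifies the N points whose top-class probability is closest to 0.5; the feature-gathering details are illustrative assumptions):

```python
import torch
import torch.nn as nn

class PointHead(nn.Module):
    """Per-point MLP that re-classifies the N most uncertain coarse points."""
    def __init__(self, feat_ch, n_classes, hidden=256):
        super().__init__()
        # Conv1d with kernel size 1 over (B, C, N) == a shared point-wise MLP
        self.mlp = nn.Sequential(
            nn.Conv1d(feat_ch + n_classes, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv1d(hidden, n_classes, 1),
        )

    def forward(self, coarse_logits, features, n_points=1024):
        # Assumes `features` has the same spatial size as `coarse_logits`.
        probs = coarse_logits.softmax(dim=1)
        # Most uncertain points: top-class probability closest to 0.5
        uncertainty = -(probs.max(dim=1).values - 0.5).abs()
        idx = uncertainty.flatten(1).topk(n_points, dim=1).indices  # (B, N)

        def gather(t):  # pick the selected points out of a (B, C, H, W) map
            flat = t.flatten(2)                                     # (B, C, HW)
            return flat.gather(2, idx.unsqueeze(1).expand(-1, t.shape[1], -1))

        point_in = torch.cat([gather(features), gather(coarse_logits)], dim=1)
        return self.mlp(point_in), idx   # refined logits for the N points
```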
(3) computing a loss function based on the final image segmentation result and the CT training data set or the MRI training data set;
(4) adjusting the parameters of the CT image segmentation network model or the MRI image segmentation network model based on the loss function, until the model is trained successfully. The purpose of setting a loss function is to allow the number of samples per training batch to be adjusted during model pre-training according to the variation of the loss function. Optionally, during the coarse segmentation of the CT or MRI training data set by the unet backbone, the initial batch size (Batch_Size) is set to 6, the learning rate to 1e-4, the optimizer to Adam, and the loss function to DICE loss. When the full CT or MRI training data set is fed into the unet backbone for training, the batch size can then be adjusted effectively and in real time according to the variation of the loss function during training, improving accuracy in the coarse segmentation stage.
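Under these stated settings, the loss and optimizer configuration might look as follows (a sketch assuming a soft DICE loss over a two-class femur/background output; UNetBackbone refers to the sketch above):

```python
import torch

def dice_loss(logits, target, eps=1.0):
    """Soft DICE loss on the foreground channel of (B, 2, H, W) logits."""
    probs = logits.softmax(dim=1)[:, 1]            # foreground probability
    inter = (probs * target).sum(dim=(1, 2))
    union = probs.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

model = UNetBackbone(in_ch=1, n_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, lr = 1e-4
batch_size = 6   # initial Batch_Size, adjustable as the loss evolves
```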
According to the image registration and fusion method provided by the present disclosure, the method further includes: providing an activation function after each convolutional layer; every convolutional layer is followed by an activation function, for example a relu, Sigmoid, tanh, or leaky relu activation function, to strengthen the non-linearity of the convolutional neural network so that it can better handle more complex computations. And/or, in the coarse segmentation of the CT training data set or the MRI training data set by the first image segmentation model, a dropout layer is provided after the last up-sampling operation; after the last up-sampling operation, that is, after the last up-sampling layer, a dropout layer temporarily drops some neural network units from the network with a certain probability during training, further improving training accuracy. The probability of the dropout layer is set to 0.7.
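An illustrative fragment of this convention (PyTorch; the channel sizes are illustrative assumptions) is the final decoder stage below, where every convolution is followed by an activation and a dropout layer with probability 0.7 sits after the last up-sampling stage only:

```python
import torch.nn as nn

last_up_stage = nn.Sequential(
    nn.ConvTranspose2d(128, 64, 2, stride=2),        # last up-sampling layer
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Dropout2d(p=0.7),                             # dropout probability 0.7
)
```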
The present disclosure further provides an image registration and fusion apparatus. FIG. 12 is the first schematic structural diagram of the image registration and fusion apparatus provided by the present disclosure. As shown in FIG. 12, the apparatus includes an acquisition module 1110, a two-dimensional image processing module 1120, and a three-dimensional reconstruction and fusion module 1130. The acquisition module 1110 is configured to acquire two-dimensional medical images of at least two modalities of a patient; the two-dimensional image processing module 1120 is configured to input the two-dimensional medical images of the at least two modalities into pre-trained image segmentation network models, to obtain, for each modality, the output two-dimensional medical image of the body position region; and the three-dimensional reconstruction and fusion module 1130 is configured to three-dimensionally reconstruct the two-dimensional medical images of the body position region of each modality respectively and then perform point cloud registration and fusion, to obtain a multi-modal fused three-dimensional medical image.
According to the image registration and fusion apparatus provided by the present disclosure, FIG. 13 is the second schematic structural diagram of the apparatus. As shown in FIG. 13, on the basis of the embodiment of FIG. 12 the three-dimensional reconstruction and fusion module 1130 further includes a three-dimensional image reconstruction module 1131, a point set determination module 1132, and a registration module 1133. The three-dimensional image reconstruction module 1131 is configured to reconstruct, based on a three-dimensional image reconstruction method, the two-dimensional medical images of the body position region of each modality into three-dimensional medical images of the body position region of each modality respectively; the point set determination module 1132 is configured to determine, based on the three-dimensional medical images of the body position region of each modality respectively, the body landmark point set and the body-head landmark point set thereof as the point cloud sets corresponding to the three-dimensional medical images of the respective modalities; and the registration module 1133 is configured to perform, based on a point cloud registration algorithm, point cloud registration and fusion of the point cloud sets corresponding to the three-dimensional medical images of the respective modalities, to obtain the multi-modal fused three-dimensional medical image.
The present disclosure further provides an electronic device. FIG. 14 is a schematic structural diagram of the electronic device provided by the present disclosure. As shown in FIG. 14, the electronic device may include a processor 1310, a communications interface 1320, a memory 1330, and a communication bus 1340, where the processor 1310, the communications interface 1320, and the memory 1330 communicate with one another via the communication bus 1340. The processor 1310 may invoke logic instructions in the memory 1330 to perform all or some steps of the image registration and fusion method or of the model training method. In another aspect, the present disclosure further provides a computer program product, including a computer program stored on a non-transitory computer-readable storage medium; the computer program includes program instructions which, when executed by a computer, enable the computer to perform all or some steps of the image registration and fusion method or of the model training method provided by the above embodiments. In yet another aspect, the present disclosure further provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements all or some steps of the image registration and fusion method or of the model training method of the above embodiments.

Claims (12)

  1. An image registration and fusion method, comprising:
    acquiring two-dimensional medical images of at least two modalities of a patient; inputting the two-dimensional medical images of the at least two modalities into corresponding pre-trained image segmentation network models respectively, to obtain, for each modality, the output two-dimensional medical image of the body position region; and three-dimensionally reconstructing the two-dimensional medical images of the body position region of each modality respectively and then performing point cloud registration and fusion, to obtain a multi-modal fused three-dimensional medical image.
  2. The image registration and fusion method according to claim 1, wherein three-dimensionally reconstructing the two-dimensional medical images of the body position region of each modality respectively and then performing point cloud registration and fusion, to obtain the multi-modal fused three-dimensional medical image, comprises: reconstructing, based on a three-dimensional image reconstruction method, the two-dimensional medical images of the body position region of each modality into three-dimensional medical images of the body position region of each modality respectively; determining, based on the three-dimensional medical images of the body position region of each modality respectively, the body landmark point set and the body-head landmark point set thereof as the point cloud sets corresponding to the three-dimensional medical images of the respective modalities; and performing, based on a point cloud registration algorithm, point cloud registration and fusion of the point cloud sets corresponding to the three-dimensional medical images of the respective modalities, to obtain the multi-modal fused three-dimensional medical image.
  3. The image registration and fusion method according to claim 2, wherein performing, based on a point cloud registration algorithm, point cloud registration and fusion of the point cloud sets corresponding to the three-dimensional medical images of the respective modalities, to obtain the multi-modal fused three-dimensional medical image, comprises: performing, based on an ICP point cloud registration algorithm, point cloud registration and fusion of a first point cloud set and a second point cloud set, to obtain a fused CT-MRI three-dimensional medical image; the method of performing, based on the ICP point cloud registration algorithm, point cloud registration and fusion of the first point cloud set and the second point cloud set to obtain the fused CT-MRI three-dimensional medical image comprises: computing, by principal component analysis, a first reference coordinate system corresponding to the point cloud set to be registered and a second reference coordinate system corresponding to the reference point cloud set; performing initial registration of the point cloud set to be registered and the reference point cloud set based on the first reference coordinate system and the second reference coordinate system; finding, based on a multi-dimensional binary search tree algorithm, in the initially registered reference point cloud set the points nearest to the point cloud set to be registered, to obtain multiple groups of corresponding point pairs; computing the direction-vector angles between the groups of corresponding point pairs respectively; and performing fine registration of the point cloud set to be registered and the reference point cloud set based on a preset angle threshold and the direction-vector angles, finally obtaining the fused CT-MRI three-dimensional medical image; wherein, based on the three-dimensional CT medical image of the femur position region, the femur center point set and the femoral-head center point set thereof are determined as the first point cloud set corresponding to the CT modality; based on the three-dimensional MRI medical image of the femur position region, the femur center point set and the femoral-head center point set thereof are determined as the second point cloud set corresponding to the MRI modality; the first point cloud set is either one of the point cloud set to be registered and the reference point cloud set, and the second point cloud set is whichever of the two differs from the first point cloud set.
  4. The image registration and fusion method according to claim 1 or 2, wherein the two-dimensional medical images of the at least two modalities comprise at least two of a two-dimensional CT medical image, a two-dimensional MRI medical image, a two-dimensional ultrasound medical image, and a two-dimensional PETCT medical image; and the body comprises a femur and the body head comprises a femoral head.
  5. The image registration and fusion method according to claim 3, wherein inputting the two-dimensional medical images of the at least two modalities into the corresponding pre-trained image segmentation network models respectively, to obtain, for each modality, the output two-dimensional medical image of the body position region, comprises: inputting the two-dimensional CT medical image into a pre-trained CT image segmentation network model, to obtain a CT medical image of the femur position region; and/or inputting the two-dimensional MRI medical image into a pre-trained MRI image segmentation network model, to obtain an MRI medical image of the femur position region; and/or inputting the two-dimensional ultrasound medical image into a pre-trained ultrasound image segmentation network model, to obtain an ultrasound medical image of the femur position region; and/or inputting the two-dimensional PETCT medical image into a pre-trained PETCT image segmentation network model, to obtain a PETCT medical image of the femur position region.
  6. A model training method, the method comprising a training process of a CT image segmentation network model and a training process of an MRI image segmentation network model, wherein the training process of the CT image segmentation network model comprises: acquiring a two-dimensional CT medical image data set of multiple patients, the data set containing multiple two-dimensional CT medical images; annotating the femur position region in each of the two-dimensional CT medical images by at least one of automatic annotation and manual annotation; dividing the annotated two-dimensional CT medical images into a CT training data set and a CT test data set at a preset ratio; and training the CT image segmentation network model based on the CT training data set in combination with a neural network algorithm and deep learning; and wherein the training process of the MRI image segmentation network model comprises: acquiring a two-dimensional MRI medical image data set of multiple patients, the data set containing multiple two-dimensional MRI medical images; annotating the femur position region in each of the two-dimensional MRI medical images by at least one of automatic annotation and manual annotation; dividing the annotated two-dimensional MRI medical images into an MRI training data set and an MRI test data set at a preset ratio; and training the MRI image segmentation network model based on the MRI training data set in combination with a neural network algorithm and deep learning.
  7. The model training method according to claim 6, wherein training the CT image segmentation network model based on the CT training data set in combination with a neural network algorithm and deep learning, or training the MRI image segmentation network model based on the MRI training data set in combination with a neural network algorithm and deep learning, comprises: performing coarse segmentation on the CT training data set or the MRI training data set by a first image segmentation model: performing multiple down-sampling operations on the image data in the CT training data set or the MRI training data set, to identify deep features of each image through processing by convolutional layers and pooling layers; performing multiple up-sampling operations on the down-sampled image data, to store the deep features back into the image data through processing by up-sampling layers and convolutional layers; and performing coarse image classification with an Adam optimizer, to obtain a coarse segmentation result; performing fine segmentation on the coarse segmentation result by a second image segmentation model: selecting feature-point data of a preset confidence from the deep features, performing bilinear interpolation on the feature-point data, and identifying the category of the deep features based on the computed feature-point data, to obtain a final image segmentation result; computing a loss function based on the final image segmentation result and the CT training data set or the MRI training data set; and adjusting parameters of the CT image segmentation network model or the MRI image segmentation network model based on the loss function, until the CT image segmentation network model or the MRI image segmentation network model is trained successfully.
  8. The model training method according to claim 7, wherein the method of obtaining the final image segmentation result comprises: performing up-sampling learning computation on the coarse segmentation result by bilinear interpolation, to obtain a dense feature map of each image; selecting, from the dense feature map of each image, multiple points of unknown classification, i.e., the N points whose classification is most uncertain; computing and extracting the deep feature representations of the N points, and predicting, point by point with an MLP multi-layer perceptron, the post-fine-segmentation classification of each of the N points; and repeating the above steps until the post-fine-segmentation classification of every one of the N points has been predicted.
  9. The model training method according to claim 7, further comprising: providing an activation function after each of the convolutional layers; and/or, in the coarse segmentation of the CT training data set or the MRI training data set by the first image segmentation model, providing a dropout layer after the last up-sampling operation.
  10. An image registration and fusion apparatus, comprising: an acquisition module configured to acquire two-dimensional medical images of at least two modalities of a patient; a two-dimensional image processing module configured to input the two-dimensional medical images of the at least two modalities into pre-trained image segmentation network models, to obtain, for each modality, the output two-dimensional medical image of the body position region; and a three-dimensional reconstruction and fusion module configured to three-dimensionally reconstruct the two-dimensional medical images of the body position region of each modality respectively and then perform point cloud registration and fusion, to obtain a multi-modal fused three-dimensional medical image.
  11. The image registration and fusion apparatus according to claim 10, wherein the three-dimensional reconstruction and fusion module comprises: a three-dimensional image reconstruction module configured to reconstruct, based on a three-dimensional image reconstruction method, the two-dimensional medical images of the body position region of each modality into three-dimensional medical images of the body position region of each modality respectively; a point set determination module configured to determine, based on the three-dimensional medical images of the body position region of each modality respectively, the body landmark point set and the body-head landmark point set thereof as the point cloud sets corresponding to the three-dimensional medical images of the respective modalities; and a registration module configured to perform, based on a point cloud registration algorithm, point cloud registration and fusion of the point cloud sets corresponding to the three-dimensional medical images of the respective modalities, to obtain the multi-modal fused three-dimensional medical image.
  12. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein, when executing the computer program, the processor implements all or some steps of the image registration and fusion method according to any one of claims 1 to 5, or of the model training method according to any one of claims 6 to 9.
PCT/CN2021/128241 2021-06-07 2021-11-02 Image registration and fusion method and apparatus, model training method, and electronic device WO2022257344A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110633927.9A CN113450294A (zh) 2021-06-07 2021-06-07 Multi-modal medical image registration and fusion method and apparatus, and electronic device
CN202110633927.9 2021-06-07

Publications (1)

Publication Number Publication Date
WO2022257344A1 (zh)

Family

ID=77811026

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/128241 WO2022257344A1 (zh) 2021-06-07 2021-11-02 Image registration and fusion method and apparatus, model training method, and electronic device

Country Status (2)

Country Link
CN (1) CN113450294A (zh)
WO (1) WO2022257344A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309751A * 2023-03-15 2023-06-23 北京医准智能科技有限公司 Image processing method and apparatus, electronic device, and medium
CN116416235A * 2023-04-12 2023-07-11 北京建筑大学 Feature region prediction method and apparatus based on multi-modal ultrasound data
CN116580033A * 2023-07-14 2023-08-11 卡本(深圳)医疗器械有限公司 Multi-modal medical image registration method based on image-patch similarity matching
CN116630206A * 2023-07-20 2023-08-22 杭州安劼医学科技有限公司 Rapid-registration positioning method and system
CN116758127A * 2023-08-16 2023-09-15 北京爱康宜诚医疗器材有限公司 Femur model registration method and apparatus, storage medium, and processor
CN117351215A * 2023-12-06 2024-01-05 上海交通大学宁波人工智能研究院 Artificial shoulder joint prosthesis design system and method

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506334B * 2021-06-07 2023-12-15 刘星宇 Multi-modal medical image fusion method and system based on deep learning
CN113450294A * 2021-06-07 2021-09-28 刘星宇 Multi-modal medical image registration and fusion method and apparatus, and electronic device
CN113974920B * 2021-10-08 2022-10-11 北京长木谷医疗科技有限公司 Method and apparatus for determining the femoral force line of the knee joint, electronic device, and storage medium
CN113888663B * 2021-10-15 2022-08-26 推想医疗科技股份有限公司 Reconstruction model training method, anomaly detection method, apparatus, device, and medium
CN114170162A * 2021-11-25 2022-03-11 深圳先进技术研究院 Image prediction method, image prediction apparatus, and computer storage medium
CN113870259B * 2021-12-02 2022-04-01 天津御锦人工智能医疗科技有限公司 Evaluation method, apparatus, device, and storage medium for multi-modal medical data fusion
CN115393527A * 2022-09-14 2022-11-25 北京富益辰医疗科技有限公司 Anatomical navigation construction method and apparatus based on multi-modal images and interactive devices
CN115410173B * 2022-11-01 2023-03-24 北京百度网讯科技有限公司 Multi-modal-fusion high-precision map element recognition method, apparatus, device, and medium
CN115690556B * 2022-11-08 2023-06-27 河北北方学院附属第一医院 Image recognition method and system based on multi-modal imaging features
CN116071386B * 2023-01-09 2023-10-03 安徽爱朋科技有限公司 Dynamic segmentation method for medical images of joint diseases
CN116862930B * 2023-09-04 2023-11-28 首都医科大学附属北京天坛医院 Cerebrovascular segmentation method, apparatus, device, and storage medium applicable to multiple modalities
CN116958132B * 2023-09-18 2023-12-26 中南大学 Surgical navigation system based on visual analysis
CN117522708A * 2023-10-27 2024-02-06 中核粒子医疗科技有限公司 Image fusion method and apparatus, electronic device, and storage medium
CN118097156A * 2024-04-26 2024-05-28 百洋智能科技集团股份有限公司 Pelvic floor dysfunction detection method and apparatus, computer device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060227A * 2019-04-11 2019-07-26 艾瑞迈迪科技石家庄有限公司 Multi-modal image fusion display method and apparatus
CN110321946A * 2019-06-27 2019-10-11 郑州大学第一附属医院 Multi-modal medical image recognition method and apparatus based on deep learning
CN110363802A * 2018-10-26 2019-10-22 西安电子科技大学 *** image registration system and method based on automatic segmentation and pelvis alignment
CN111179231A * 2019-12-20 2020-05-19 上海联影智能医疗科技有限公司 Image processing method, apparatus, device, and storage medium
US20200184660A1 * 2018-12-11 2020-06-11 Siemens Healthcare Gmbh Unsupervised deformable registration for multi-modal images
CN112826590A * 2021-02-02 2021-05-25 复旦大学 Spatial registration system for knee replacement surgery based on multi-modal fusion and point cloud registration
CN113450294A * 2021-06-07 2021-09-28 刘星宇 Multi-modal medical image registration and fusion method and apparatus, and electronic device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100498839C * 2006-03-08 2009-06-10 杭州电子科技大学 Three-dimensional visualization method for multi-modal medical volume data
CN106204550B * 2016-06-30 2018-10-30 华中科技大学 Non-rigid multi-modal medical image registration method and system
EP3547252A4 * 2016-12-28 2019-12-04 Shanghai United Imaging Healthcare Co., Ltd. SYSTEM AND METHOD FOR PROCESSING MULTI-MODAL IMAGES
US10769791B2 * 2017-10-13 2020-09-08 Beijing Keya Medical Technology Co., Ltd. Systems and methods for cross-modality image segmentation
CN109360208A * 2018-09-27 2019-02-19 华南理工大学 Medical image segmentation method based on a single-pass multi-task convolutional neural network
CN109949404A * 2019-01-16 2019-06-28 深圳市旭东数字医学影像技术有限公司 Three-dimensional reconstruction method and system based on fusion of a digital human with CT and/or MRI images
CN110660063A * 2019-09-19 2020-01-07 山东省肿瘤防治研究院(山东省肿瘤医院) Multi-image-fusion system for precise three-dimensional tumor localization
CN111062948B * 2019-11-18 2022-09-13 北京航空航天大学合肥创新研究院 Multi-tissue segmentation method based on fetal four-chamber cardiac view images
CN112381750A * 2020-12-15 2021-02-19 山东威高医疗科技有限公司 Multi-modal registration and fusion method for ultrasound images and CT/MRI images



Also Published As

Publication number Publication date
CN113450294A (zh) 2021-09-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21944843; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21944843; Country of ref document: EP; Kind code of ref document: A1)