CN115272356A - Multi-mode fusion method, device and equipment of CT image and readable storage medium - Google Patents


Info

Publication number
CN115272356A
CN115272356A
Authority
CN
China
Prior art keywords
image
fused
images
liver
imaging mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210827415.0A
Other languages
Chinese (zh)
Inventor
刘伟奇
陈磊
马学升
陈金钢
徐鹏
赵友源
赵晓彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongxin Zhiyi Technology Beijing Co ltd
Original Assignee
Tongxin Zhiyi Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongxin Zhiyi Technology Beijing Co ltd filed Critical Tongxin Zhiyi Technology Beijing Co ltd
Priority to CN202210827415.0A priority Critical patent/CN115272356A/en
Publication of CN115272356A publication Critical patent/CN115272356A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30056 Liver; Hepatic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a multi-modal fusion method, device and equipment for liver CT images, and a readable storage medium, belonging to the technical field of auxiliary medical analysis. The method comprises the following steps: acquiring a CT image containing the liver region during the operation; segmenting the liver region in the CT image through a MIMO-FAN model; taking the CT image as the reference imaging modality and the image to be fused as the moving imaging modality, and performing surface-based registration of the two using an independent iterative closest point algorithm; and fusing the registered CT image with the image to be fused. The multi-modal fusion method for liver CT images disclosed by the invention is highly practical, low in cost, fast, responsive and accurate, and reduces the impact on the clinical workflow as far as possible.

Description

Multi-mode fusion method, device and equipment of CT image and readable storage medium
Technical Field
The invention belongs to the technical field of auxiliary medical analysis, and particularly relates to a multi-mode fusion method, a multi-mode fusion device, multi-mode fusion equipment and a readable storage medium for a Computed Tomography (CT) image of a liver.
Background
"Interventional therapy" is the general term for a family of minimally invasive techniques in which, under the guidance and monitoring of advanced imaging equipment, specific instruments are introduced into the diseased region of the human body through natural orifices or tiny incisions. Despite the growing number of alternative imaging techniques for interventional guidance, CT remains the dominant imaging technique for guiding percutaneous procedures.
It is well known that tumors can have very different imaging characteristics, and no single imaging modality currently displays the characteristics of all tumors. In interventional procedures, when a tumor cannot be seen directly on CT while guiding the placement of an interventional needle, image fusion is commonly used for interventional guidance. To achieve image fusion, the interventional CT image must be registered to an imaging modality in which the tumor is visible. Robust and fast multi-modal image registration, however, remains very challenging.
Based on the image information used, multi-modal registration methods can be divided into intensity-based methods and feature-based methods. The main idea of intensity-based methods is to iteratively search for the geometric transformation that optimizes a similarity measure when applied to the moving imaging modality. However, such manually initialized, intensity-based registration methods are not stable and fail easily during an intervention; when they fail, there is typically no time to adjust the parameters and rerun the algorithm. These algorithms fail particularly often when there are large morphological and appearance differences between the interventional CT and the tumor-visible imaging modality. Therefore, more robust registration algorithms should be used during the intervention to minimize the impact on the clinical workflow. Feature-based methods, on the other hand, offer a better way of focusing on local structure: representative local features are first extracted from the images and matched in order to compute the corresponding transformation. Matching such feature points is inherently difficult, but segmenting the surface of the region of interest can help superimpose the two modalities robustly; the alignment computed between the surface points is then used to register the corresponding images. However, such methods require an accurate and fast image segmentation as their basis.
Disclosure of Invention
The embodiment of the invention aims to provide a multi-modal fusion method, device and equipment for liver CT images, and a readable storage medium, which solve the technical problems of existing liver segmentation methods: low practicability, high cost, long processing time, high latency and low precision.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a multi-modal fusion method for a liver CT image, including:
S101: acquiring a CT image containing the liver region during the operation;
S102: segmenting the liver region in the CT image through a MIMO-FAN model;
S103: taking the CT image as the reference imaging modality and the image to be fused as the moving imaging modality, and performing surface-based registration of the CT image and the image to be fused using an independent Iterative Closest Point (ICP) algorithm;
S104: fusing the registered CT image with the image to be fused.
Optionally, the image to be fused includes: MRI images, CE-CT images and PET/CT images.
Optionally, before S103, the method further includes:
acquiring a preoperative static CT image;
a set of points is acquired from the preoperative static CT image and the intraoperative interventional CT image to perform surface-based registration.
Optionally, the S103 specifically includes:
S1031: taking the CT image as the reference imaging modality and the image to be fused as the moving imaging modality, and aligning them using a principal component analysis algorithm to obtain an initial guess of the correspondence between the CT image and the image to be fused;
S1032: iterating with a singular value decomposition algorithm to refine the correspondence between the CT image and the image to be fused;
S1033: acquiring the root-mean-square distance between the CT image and the image to be fused, and stopping the optimization when the root-mean-square distance falls below a preset value.
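As a sketch of the SVD iteration in S1032: given a current set of corresponding point pairs, the least-squares rigid transform can be computed in closed form with a singular value decomposition (the Kabsch solution). The function below is an illustrative NumPy implementation under our own naming, not the patent's code:

```python
import numpy as np

def best_fit_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst for paired
    (N, 3) point sets, via the SVD-based Kabsch solution."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # reject reflections, keep a rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

One such solve refines the correspondence once; S1032 repeats it after re-matching the points.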
In a second aspect, an embodiment of the present invention provides a multi-modal fusion apparatus for CT images of a liver, including:
the first acquisition module is used for acquiring a CT image containing a liver part in the operation process;
the segmentation module is used for segmenting the liver part in the CT image through an MIMO-FAN model;
the registration module is used for taking the CT image as the reference imaging modality and the image to be fused as the moving imaging modality, and performing surface-based registration on the CT image and the image to be fused using an independent iterative closest point algorithm;
and the fusion module is used for fusing the registered CT image with the image to be fused.
Optionally, the image to be fused includes: MRI images, CE-CT images and PET/CT images.
Optionally, the multi-modal fusion apparatus for CT images of liver further comprises:
the second acquisition module is used for acquiring a preoperative static CT image;
a point set acquisition module for acquiring a point set from the preoperative static CT image and the intraoperative interventional CT image to perform surface-based registration.
Optionally, the registration module specifically comprises:
the alignment submodule is used for aligning the CT image, as the reference imaging modality, with the image to be fused, as the moving imaging modality, using a principal component analysis algorithm, so as to obtain an initial guess of the correspondence between the CT image and the image to be fused;
the iteration submodule is used for iterating with a singular value decomposition algorithm to refine the correspondence between the CT image and the image to be fused;
and the judging submodule is used for acquiring the root-mean-square distance between the CT image and the image to be fused, and stopping the optimization when the root-mean-square distance falls below a preset value.
In a third aspect, an embodiment of the present invention provides an apparatus, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, implement the method of the first aspect.
In the embodiment of the invention, the liver region in the CT image is accurately segmented by the MIMO-FAN model, and the multi-modal images are then accurately registered on a surface basis using an independent iterative closest point algorithm, so that they can be accurately fused. The method is highly practical, low in cost, fast, responsive and accurate, and reduces the impact on the clinical workflow as far as possible.
Drawings
FIG. 1 is a flow chart of a multi-modal fusion method for CT images of a liver according to an embodiment of the present invention;
FIG. 2 is a flow chart of another multi-modal fusion method of CT images of a liver according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a multi-modal fusion apparatus for CT images of a liver according to an embodiment of the present invention.
The implementation, functional features and advantages of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms first, second and the like in the description and in the claims of the present invention are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that embodiments of the invention may be practiced other than those illustrated or described herein, and that the objects identified as "first," "second," etc. are generally a class of objects and do not limit the number of objects, e.g., a first object may be one or more. It should be understood that, in various embodiments of the present disclosure, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the inherent logic of the processes, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure.
It should be understood that in the present disclosure, "including" and "having" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present disclosure, "a plurality" means two or more. "And/or" merely describes an association between objects and covers three cases: for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "Comprises A, B and C" and "comprises A, B, C" mean that all three of A, B and C are comprised; "comprises A, B or C" means that one of A, B and C is comprised; "comprises A, B and/or C" means that any one, any two, or all three of A, B and C are comprised.
It should be understood that in this disclosure, "B corresponding to A", "A corresponds to B" or "B corresponds to A" means that B is associated with A and that B can be determined from A. Determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information. The matching of A and B means that the similarity between A and B is greater than or equal to a preset threshold.
As used herein, "if" may be interpreted as "when", "upon", "in response to determining" or "in response to detecting", depending on the context.
The multi-modal fusion method, device, apparatus and readable storage medium for liver CT images provided by the embodiments of the present invention are described in detail below with reference to the accompanying drawings by specific embodiments and application scenarios thereof.
Example one
Referring to fig. 1, a flow chart of a multi-modal fusion method of liver CT images according to an embodiment of the present invention is shown.
The embodiment of the invention provides a multi-mode fusion method of liver CT images, which comprises the following steps:
S101: acquiring a CT image containing the liver region during the operation.
S102: segmenting the liver region in the CT image through the MIMO-FAN model.
The MIMO-FAN model performs 2.5D deep learning, and its pyramid input and output architecture fully extracts the multi-scale features of the image, so that the image information can be used effectively for segmentation. Referring to fig. 2, the deep learning network integrates a multi-scale mechanism into a U-shaped architecture, which enables the network to extract multi-scale features from the image end to end.
It should be noted that, in order to fuse features of different scales, a notable property of MIMO-FAN is that the features fused at a given level have all passed through the same number of convolutional layers, which helps keep the hierarchical semantics of the features similar. Unlike the classical U-Net, in which the scale is reduced only as the convolution depth increases, MIMO-FAN maintains multi-scale features at every depth, so that global and local context information can be fully integrated to enhance the extracted features.
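To illustrate only the pyramid-input idea (the actual MIMO-FAN is a trained convolutional network; this sketch merely shows how a multi-scale input stack can be built, with function names of our own choosing):

```python
import numpy as np

def image_pyramid(img, levels=3):
    """Build a multi-scale input pyramid for a 2D slice by repeated 2x
    average pooling. Sketch of the pyramid-input idea only, not the
    published MIMO-FAN architecture."""
    pyramid = [img]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape
        p = pyramid[-1][:h - h % 2, :w - w % 2]      # crop to even dims
        p = p.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(p)
    return pyramid
```

Each level of such a pyramid would be fed to the matching depth of the U-shaped network, so every depth sees the image at its own scale.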
S103: taking the CT image as the reference imaging modality and the image to be fused as the moving imaging modality, and performing surface-based registration of the CT image and the image to be fused using an independent Iterative Closest Point (ICP) algorithm.
It should be noted that surface-based registration between CT images and the patient anatomy in physical space is well established in image-guided surgery; it enables the surgeon to position and orient surgical tools relative to the vertebral anatomy. Different surface-based methods can be used for image fusion.
The standard ICP method and most of its variants implicitly assume an isotropic noise model. Selected contour points are rotated and translated in the x, y and z directions, and zero-mean, normally distributed, isotropic noise is added to the rotated points to simulate surfaces obtained with different imaging modalities.
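A minimal simulation of this noise model might look as follows; the rotation angles and the noise scale `sigma` are illustrative assumptions, not values from the patent:

```python
import numpy as np

def simulate_modality(points, angles_deg, rng, sigma=0.5):
    """Rotate an (N, 3) contour point set about the x, y and z axes and add
    zero-mean, isotropic Gaussian noise, simulating the same surface as
    observed in a different imaging modality (sketch; sigma is assumed)."""
    ax, ay, az = np.deg2rad(angles_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    rotated = points @ (Rz @ Ry @ Rx).T
    return rotated + rng.normal(scale=sigma, size=points.shape)
```

Because the added noise is isotropic and zero-mean, it matches the implicit assumption of standard ICP described above.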
In a possible implementation, S103 includes in particular sub-steps S1031 to S1033:
S1031: taking the CT image as the reference imaging modality and the image to be fused as the moving imaging modality, and aligning them with a Principal Component Analysis (PCA) algorithm to obtain an initial guess of the correspondence between the CT image and the image to be fused.
S1032: iterating with a Singular Value Decomposition (SVD) algorithm to refine the correspondence between the CT image and the image to be fused.
In an iteration, for each transformed source point, the nearest target point is designated as its corresponding point.
S1033: and acquiring the root-mean-square distance between the CT image and the image to be fused, and stopping optimization under the condition that the root-mean-square distance is smaller than a preset value.
The ICP algorithm is always made to converge to the nearest local minimum with respect to the objective function by the substeps S1031 to S1033.
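The substeps S1031 to S1033 can be sketched end to end as a small NumPy routine: PCA initialization, nearest-neighbour correspondence, an SVD (Kabsch) rigid update, and the root-mean-square stopping test. This is an illustrative brute-force version under our own naming, not the patent's implementation:

```python
import numpy as np

def pca_align(src, dst):
    """S1031: initial guess by aligning centroids and principal axes.
    The sign ambiguity of the PCA axes is only partially handled (sketch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    _, _, vs = np.linalg.svd(src - src_c, full_matrices=False)
    _, _, vd = np.linalg.svd(dst - dst_c, full_matrices=False)
    R = vd.T @ vs
    if np.linalg.det(R) < 0:          # keep a proper rotation
        vd[-1] *= -1
        R = vd.T @ vs
    return R, dst_c - R @ src_c

def icp(src, dst, tol=1e-6, max_iter=50):
    """S1031-S1033: PCA initialization, iterative SVD refinement of the
    correspondence, and stopping once the RMS distance between matched
    points falls below the preset value `tol`."""
    R, t = pca_align(src, dst)
    cur = src @ R.T + t
    rms = np.inf
    for _ in range(max_iter):
        # each transformed source point is matched to its nearest target point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        corr = dst[d2.argmin(axis=1)]
        # SVD (Kabsch) best-fit rigid update for the current correspondence
        cc, dc = cur.mean(axis=0), corr.mean(axis=0)
        U, _, Vt = np.linalg.svd((cur - cc).T @ (corr - dc))
        Ru = Vt.T @ U.T
        if np.linalg.det(Ru) < 0:
            Vt[-1] *= -1
            Ru = Vt.T @ U.T
        cur = (cur - cc) @ Ru.T + dc
        rms = np.sqrt(((cur - corr) ** 2).sum(axis=-1).mean())
        if rms < tol:                 # S1033 stopping criterion
            break
    return cur, rms
```

A production version would replace the brute-force nearest-neighbour search with a k-d tree, but the iteration structure is the same.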
S104: fusing the registered CT image with the image to be fused.
The image to be fused includes: MRI images, CE-CT images and PET/CT images. That is, the interventional CT image may be fused with an MRI image, a CE-CT image, or a PET/CT image.
(1) Fusing the interventional CT image with an MRI image. Here MRI provides clear, detailed information about soft tissue and tumors that CT imaging cannot provide. Through fusion, MRI as the moving imaging modality can be mapped onto the interventional CT to help guide the procedure. The MRI is segmented preoperatively through manual interaction, while the interventional CT is segmented during the procedure. The images are then brought into the same coordinate system by registering the segmented surfaces. It is worth noting that the patient position in this case differs greatly from the positions in the LiTS dataset: all images in the latter were acquired for diagnostic purposes with the patient in the normal supine position, whereas for interventional guidance the patient must typically assume a particular posture so that the surgical instruments can best access the target area. Even so, our segmentation algorithm segments the liver well. This is aided by the use of multi-scale features throughout the network, which makes a seamless combination of high-level global features and low-level image texture details possible.
(2) Fusing the interventional CT image with a CE-CT image. Here CE-CT shows the tumor and vascular structures well thanks to a contrast-enhancing agent. Through fusion, the CE-CT, which is the moving imaging modality in this example, can be mapped onto the interventional CT for interventional guidance. The CE-CT is acquired and segmented through manual interaction before surgery, and the interventional CT is segmented during surgery. Image registration is then performed by aligning the segmented surfaces.
(3) Fusing the interventional CT image with a PET/CT image. Here functional imaging by PET is combined with intraoperative guidance imaging by interventional CT. PET is a low-resolution modality, but it visualizes the functional activity of tumors very well. However, because PET lacks structural information, it is difficult to register directly to the interventional CT. The CT component of the PET/CT scan therefore serves as a bridge: it is registered to the interventional CT by aligning the segmented surfaces. By fusing the PET image with the interventional CT, the tumor can be observed easily during surgery.
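The bridging step can be sketched as follows: since PET and its companion CT are acquired in one coordinate frame, the rigid transform estimated for the CT component applies unchanged to PET points (illustrative naming, not from the patent):

```python
import numpy as np

def map_pet_to_interventional(pet_points, R, t):
    """Apply the rigid transform (R, t), estimated by registering the CT
    component of the PET/CT scan to the interventional CT, directly to
    points given in PET space. Valid because PET and its companion CT
    share one coordinate frame (illustrative sketch)."""
    return pet_points @ R.T + t
```

In practice (R, t) would come from the surface-based registration of the two CT segmentations described above.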
Referring to fig. 2, a flow chart of another multi-modal fusion method for CT images of liver according to an embodiment of the present invention is shown.
In a possible implementation, before S103, the method further includes:
acquiring a preoperative static CT image;
a set of points is acquired from the preoperative static CT image and the intraoperative interventional CT image to perform surface-based registration.
Acquiring the point set from both the preoperative static CT image and the intraoperative interventional CT image undoubtedly increases the accuracy and completeness of the point set, which in turn improves the registration accuracy.
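This pooling of preoperative and intraoperative surface points can be sketched as a simple concatenation (an assumption of ours; the patent does not specify how the two point sets are combined):

```python
import numpy as np

def combined_surface_points(preop_pts, intraop_pts):
    """Pool liver-surface points from the preoperative static CT and the
    intraoperative interventional CT into one (N, 3) point set for
    surface-based registration. Sketch under our own naming."""
    return np.vstack([preop_pts, intraop_pts])
```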
In the embodiment of the invention, the liver region in the CT image is accurately segmented by the MIMO-FAN model, and the multi-modal images are then accurately registered on a surface basis using an independent iterative closest point algorithm, so that they can be accurately fused. The method is highly practical, low in cost, fast, responsive and accurate, and reduces the impact on the clinical workflow as far as possible.
Example two
Referring to fig. 3, a schematic structural diagram of a multi-modal fusion apparatus 30 for CT images of a liver according to an embodiment of the present invention is shown.
The multi-modal fusion device 30 for liver CT images provided by the embodiment of the present invention includes:
a first acquisition module 301, configured to acquire a CT image including a liver part during an operation;
a segmentation module 302, configured to segment a liver region in the CT image through an MIMO-FAN model;
a registration module 303, configured to perform surface-based registration of the CT image and the image to be fused using an independent iterative closest point algorithm, with the CT image serving as the reference imaging modality and the image to be fused as the moving imaging modality;
and a fusion module 304, configured to fuse the CT image after registration with an image to be fused.
Optionally, the image to be fused includes: MRI images, CE-CT images and PET/CT images.
Optionally, the multi-modal fusion apparatus 30 for CT images of liver further comprises:
a second acquisition module 305, configured to acquire a preoperative static CT image;
a point set acquisition module 306 for acquiring a point set from the preoperative static CT image and the intraoperative interventional CT image to perform the surface-based registration.
Optionally, the registration module 303 specifically includes:
an alignment submodule 3031, configured to align the CT image, as the reference imaging modality, with the image to be fused, as the moving imaging modality, using a principal component analysis algorithm, so as to obtain an initial guess of the correspondence between the CT image and the image to be fused;
an iteration submodule 3032, configured to iterate using a singular value decomposition algorithm to refine the correspondence between the CT image and the image to be fused;
and a judging submodule 3033, configured to acquire the root-mean-square distance between the CT image and the image to be fused, and stop the optimization when the root-mean-square distance falls below a preset value.
In the embodiment of the invention, the liver region in the CT image is accurately segmented by the MIMO-FAN model, and the multi-modal images are then accurately registered on a surface basis using an independent iterative closest point algorithm, so that they can be accurately fused. The method is highly practical, low in cost, fast, responsive and accurate, and reduces the impact on the clinical workflow as far as possible.
The virtual system in the embodiment of the present invention may be a device, or may be a component, an integrated circuit, or a chip in a terminal.
In addition, it should be noted that the above-described embodiments of the apparatus are merely illustrative and do not limit the scope of the present invention, and in practical applications, a person skilled in the art may select some or all of the modules to implement the purpose of the embodiment according to actual needs, and the present invention is not limited herein.
EXAMPLE III
The embodiment of the invention provides a device, including: a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of embodiment one.
In the embodiment of the invention, the liver region in the CT image is accurately segmented by the MIMO-FAN model, and the multi-modal images are then accurately registered on a surface basis using an independent iterative closest point algorithm, so that they can be accurately fused. The method is highly practical, low in cost, fast, responsive and accurate, and reduces the impact on the clinical workflow as far as possible.
Example four
Embodiments of the present invention provide a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, implement a method as described in embodiment one.
In the embodiment of the invention, the liver region in the CT image is accurately segmented by the MIMO-FAN model, and the multi-modal images are then accurately registered on a surface basis using an independent iterative closest point algorithm, so that they can be accurately fused. The method is highly practical, low in cost, fast, responsive and accurate, and reduces the impact on the clinical workflow as far as possible.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as a punch card or an in-groove protruding structure with instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It is noted that, unless expressly stated otherwise, all features disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features. Where used, the terms "further", "preferably", "still further" and "more preferably" introduce a description of another embodiment based on the foregoing embodiment; the content following "further", "preferably", "still further" or "more preferably", combined with the foregoing embodiment, constitutes that other embodiment in full. Several arrangements following "further", "preferably", "still further" or "more preferably" within the same embodiment may be combined in any combination to form yet another embodiment.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are given by way of example only and are not limiting of the invention. The objects of the invention have been fully and effectively accomplished. The functional and structural principles of the present invention have been shown and described in the embodiments, and the embodiments may be varied or modified without departing from those principles.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present disclosure, not to limit them. While the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and that such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the embodiments of the present disclosure.

Claims (10)

1. A multi-modal fusion method of liver CT images, comprising:
s101: acquiring an intraoperative CT image containing a liver region;
s102: segmenting the liver region in the CT image using a MIMO-FAN model;
s103: taking the CT image as a reference imaging modality and an image to be fused as a moving imaging modality, and performing surface-based registration of the CT image and the image to be fused using an independent iterative closest point function;
s104: fusing the registered CT image with the image to be fused.
2. The multi-modal fusion method of liver CT images according to claim 1, wherein the images to be fused comprise: MRI images, CE-CT images and PET/CT images.
3. The multi-modal fusion method of liver CT images according to claim 1, further comprising, before the S103:
acquiring a preoperative static CT image;
acquiring a set of points from the preoperative static CT image and the intraoperative interventional CT image to perform the surface-based registration.
4. The multi-modal fusion method for liver CT images according to claim 1, wherein the S103 comprises:
s1031: taking the CT image as the reference imaging modality and the image to be fused as the moving imaging modality, performing an initial alignment using a principal component analysis algorithm to obtain an initial guess of the correspondence between the CT image and the image to be fused;
s1032: iterating using a singular value decomposition algorithm to refine the correspondence between the CT image and the image to be fused;
s1033: obtaining the root-mean-square distance between the CT image and the image to be fused, and stopping the optimization when the root-mean-square distance is smaller than a preset value.
5. A multi-modality fusion apparatus of CT images of the liver, comprising:
the first acquisition module is used for acquiring a CT image containing a liver part in the operation process;
the segmentation module is used for segmenting the liver part in the CT image through an MIMO-FAN model;
the registration module is used for taking the CT image as a reference imaging modality and the image to be fused as a moving imaging modality, and performing surface-based registration of the CT image and the image to be fused using an independent iterative closest point function;
and the fusion module is used for fusing the CT image after registration with the image to be fused.
6. The multi-modality fusion apparatus of liver CT images according to claim 5, wherein the images to be fused comprise: MRI images, CE-CT images and PET/CT images.
7. The multi-modality fusion apparatus of liver CT images according to claim 5, further comprising:
the second acquisition module is used for acquiring a preoperative static CT image;
a point set acquisition module to acquire a set of points from the preoperative static CT image and intraoperative interventional CT image to perform a surface-based registration.
8. The multi-modal fusion apparatus of liver CT images as recited in claim 5, wherein the registration module specifically comprises:
the alignment submodule is used for performing an initial alignment, with the CT image as the reference imaging modality and the image to be fused as the moving imaging modality, using a principal component analysis algorithm to obtain an initial guess of the correspondence between the CT image and the image to be fused;
the iteration submodule is used for iterating using a singular value decomposition algorithm to refine the correspondence between the CT image and the image to be fused;
and the judging submodule is used for obtaining the root-mean-square distance between the CT image and the image to be fused, and stopping the optimization when the root-mean-square distance is smaller than a preset value.
9. An apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 4.
10. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 4.
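Claims 4 and 8 together describe the registration loop: a principal-component-analysis (PCA) initial alignment, singular-value-decomposition (SVD) based iterations, and a root-mean-square (RMS) distance threshold as the stopping criterion. The sketch below is an illustrative NumPy implementation of such a loop on two 3-D surface point sets; it is not taken from the patent, and the function names, the brute-force nearest-neighbour search, and the default threshold value are assumptions made for demonstration only.

```python
import numpy as np

def pca_initial_alignment(fixed, moving):
    """Initial guess (cf. S1031): align the centroids and principal axes
    of the two surface point sets via PCA. Illustrative only."""
    fc, mc = fixed.mean(axis=0), moving.mean(axis=0)
    # Principal axes of each centered point cloud (rows of vt)
    _, _, vf = np.linalg.svd(fixed - fc, full_matrices=False)
    _, _, vm = np.linalg.svd(moving - mc, full_matrices=False)
    R = vf.T @ vm                      # rotate moving axes onto fixed axes
    if np.linalg.det(R) < 0:           # keep a proper rotation (no reflection)
        vm[-1] *= -1
        R = vf.T @ vm
    t = fc - R @ mc
    return R, t

def icp_svd(fixed, moving, rms_threshold=1.0, max_iter=50):
    """ICP refinement (cf. S1032-S1033): SVD-based rigid updates, stopping
    once the RMS distance falls below a preset value."""
    R, t = pca_initial_alignment(fixed, moving)
    pts = moving @ R.T + t
    for _ in range(max_iter):
        # Nearest-neighbour correspondences (brute force for clarity)
        d2 = ((pts[:, None, :] - fixed[None, :, :]) ** 2).sum(-1)
        matched = fixed[d2.argmin(axis=1)]
        rms = np.sqrt(((pts - matched) ** 2).sum(axis=1).mean())
        if rms < rms_threshold:        # stopping criterion of S1033
            break
        # Best rigid update via SVD of the cross-covariance (Kabsch)
        pc, mc = pts.mean(axis=0), matched.mean(axis=0)
        u, _, vt = np.linalg.svd((pts - pc).T @ (matched - mc))
        Ri = vt.T @ u.T
        if np.linalg.det(Ri) < 0:
            vt[-1] *= -1
            Ri = vt.T @ u.T
        pts = (pts - pc) @ Ri.T + mc
    return pts, rms
```

With real data, the point sets would come from the segmented liver surfaces (claim 1, S102), and the nearest-neighbour search would typically use a k-d tree rather than the brute-force distance matrix shown here.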
CN202210827415.0A 2022-07-13 2022-07-13 Multi-mode fusion method, device and equipment of CT image and readable storage medium Pending CN115272356A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210827415.0A CN115272356A (en) 2022-07-13 2022-07-13 Multi-mode fusion method, device and equipment of CT image and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210827415.0A CN115272356A (en) 2022-07-13 2022-07-13 Multi-mode fusion method, device and equipment of CT image and readable storage medium

Publications (1)

Publication Number Publication Date
CN115272356A true CN115272356A (en) 2022-11-01

Family

ID=83764660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210827415.0A Pending CN115272356A (en) 2022-07-13 2022-07-13 Multi-mode fusion method, device and equipment of CT image and readable storage medium

Country Status (1)

Country Link
CN (1) CN115272356A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116912301A (en) * 2023-02-23 2023-10-20 浙江大学 Liver tumor alignment method and device, electronic equipment and medium


Similar Documents

Publication Publication Date Title
US10111726B2 (en) Risk indication for surgical procedures
CN107809955B (en) Real-time collimation and ROI-filter localization in X-ray imaging via automatic detection of landmarks of interest
US20120014559A1 (en) Method and System for Semantics Driven Image Registration
US11682115B2 (en) Atlas-based location determination of an anatomical region of interest
KR102652749B1 (en) Method and apparatus for determining mid-sagittal plane in magnetic resonance images
US20080285831A1 (en) Automatically updating a geometric model
CN115272356A (en) Multi-mode fusion method, device and equipment of CT image and readable storage medium
US20130218003A1 (en) Rapid entry point localization for percutaneous interventions
US11501442B2 (en) Comparison of a region of interest along a time series of images
US11869216B2 (en) Registration of an anatomical body part by detecting a finger pose
US11928828B2 (en) Deformity-weighted registration of medical images
CN114037830A (en) Training method for enhanced image generation model, image processing method and device
EP4093275A1 (en) Intraoperative 2d/3d imaging platform
JP2019500114A (en) Determination of alignment accuracy
US11877809B2 (en) Using a current workflow step for control of medical data processing
US20240005503A1 (en) Method for processing medical images
Manning et al. Surgical navigation
Ashammagari A Framework for Automating Interventional Surgeries (Catheter detection and Automation of a crucial step in Percutaneous Coronary Interventional surgery-PCI)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination