CN115100172A - Fusion method of multi-modal medical images - Google Patents

Fusion method of multi-modal medical images

Info

Publication number
CN115100172A
CN115100172A
Authority
CN
China
Prior art keywords
frequency
low
images
fusion
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210811433.XA
Other languages
Chinese (zh)
Inventor
孔韦韦 (Kong Weiwei)
雷阳 (Lei Yang)
周元哲 (Zhou Yuanzhe)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Posts and Telecommunications
Original Assignee
Xian University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Posts and Telecommunications
Priority to CN202210811433.XA
Publication of CN115100172A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to a fusion method of multi-modal medical images, comprising the following steps: performing a non-subsampled shearlet transform (NSST) on each of a plurality of medical source images to be fused, obtaining from each medical source image a corresponding low-frequency subband image and a plurality of high-frequency subband images; fusing the low-frequency subband images using guided kernel weighted guided filtering to obtain a low-frequency fused sub-image; fusing the high-frequency subband images using side window filtering to obtain high-frequency fused sub-images; and reconstructing the low-frequency fused sub-image and the high-frequency fused sub-images via the inverse NSST to obtain the final fused image. The multi-modal medical image fusion method provided by the disclosure runs efficiently, markedly reduces computational complexity, and markedly improves the fusion quality of multi-modal medical images.

Description

Fusion method of multi-modal medical images
Technical Field
The disclosure relates to the technical field of image processing, in particular to a fusion method of multi-modal medical images.
Background
With the rapid development of image sensor technology, large numbers of multi-modal medical images can be obtained from a variety of image sensors. Conventional fusion methods applied to such multi-modal medical images involve complex computation, and both their efficiency and their fusion quality leave room for improvement. Accordingly, there is a need to ameliorate one or more of the problems in the related-art solutions described above.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a method for fusing multi-modal medical images that improves the fusion quality of medical images. The method comprises the following steps:
performing a non-subsampled shearlet transform (NSST) on each of a plurality of medical source images to be fused, obtaining from each medical source image a corresponding low-frequency subband image and a plurality of high-frequency subband images;
fusing the low-frequency subband images using guided kernel weighted guided filtering to obtain a low-frequency fused sub-image;
fusing the plurality of high-frequency subband images using side window filtering to obtain high-frequency fused sub-images;
and reconstructing the low-frequency fused sub-image and the high-frequency fused sub-images via the inverse of the non-subsampled shearlet transform to obtain a final fused image.
In an exemplary embodiment of the present disclosure, the step of performing the non-subsampled shearlet transform on each of the plurality of medical source images to be fused, obtaining from each medical source image a corresponding low-frequency subband image and a plurality of high-frequency subband images, includes: performing the non-subsampled shearlet transform on each medical source image to be fused; after S-level scale decomposition and $d_s$-direction decomposition, each medical source image yields 1 corresponding low-frequency subband image and a plurality of high-frequency subband images, where $S \ge 1$, $s = 1, 2, \ldots, S$, and $d_s$ denotes the number of directional decompositions at the s-th scale.
In an exemplary embodiment of the present disclosure, the step of performing fusion processing on the low-frequency subband image by using guided kernel weighted guided filtering to obtain a low-frequency fused subband image includes:
calculating the Laplacian energy sum of each pixel point in each low-frequency subband image, where the calculation formula of the Laplacian energy sum of each pixel point comprises:

$$\mathrm{SML}(x,y)=\sum_{i=x-N}^{x+N}\ \sum_{j=y-N}^{y+N}\big[\mathrm{ML}(i,j)\big]^{2} \tag{1}$$

where $f(x,y)$ denotes the pixel point located at spatial position $(x,y)$ in the low-frequency subband image corresponding to the medical source image; $N$ denotes the radius of the local region centered on pixel $f(x,y)$; and the modified Laplacian $\mathrm{ML}$ is

$$\mathrm{ML}(i,j)=\big|2f(i,j)-f(i-\mathrm{step},j)-f(i+\mathrm{step},j)\big|+\big|2f(i,j)-f(i,j-\mathrm{step})-f(i,j+\mathrm{step})\big|$$

where step denotes the step size;
selecting, in each low-frequency subband image, several different local region radii centered on each pixel point to obtain the corresponding Laplacian energy sum of each low-frequency subband image; obtaining a plurality of low-frequency initial fusion decision maps by comparing the Laplacian energy sums of the low-frequency subband images at the same local region radius;

applying guided kernel weighted guided filtering to each low-frequency initial fusion decision map to obtain a plurality of low-frequency fusion decision guide maps, and selecting, at each pixel position, the maximum value across the low-frequency fusion decision guide maps as the element value, thereby determining the low-frequency final fusion decision map;

and fusing the plurality of low-frequency subband images according to the low-frequency final fusion decision map to obtain the low-frequency fused sub-image.
In an exemplary embodiment of the present disclosure, the step of selecting several local region radii in each low-frequency subband image to obtain the corresponding Laplacian energy sums, and of obtaining a plurality of low-frequency initial fusion decision maps by comparing the Laplacian energy sums at the same local region radius across the low-frequency subband images, includes:

selecting, in each low-frequency subband image, local regions of radius $N$ centered on each pixel point, where $N \in (N_1, N_2, N_3, \ldots, N_n)$, so that the calculation formula of the Laplacian energy sum (SML) corresponding to each low-frequency subband image becomes:

$$\mathrm{SML}_{K,n}(x,y)=\sum_{i=x-N_n}^{x+N_n}\ \sum_{j=y-N_n}^{y+N_n}\big[\mathrm{ML}_K(i,j)\big]^{2},\qquad K\in\{A,B\} \tag{2}$$

where A and B denote source image A and source image B, respectively;

the formula of the low-frequency initial fusion decision map comprises:

$$\mathrm{map}_n(x,y)=\begin{cases}1, & \mathrm{SML}_{A,n}(x,y)\ \ge\ \mathrm{SML}_{B,n}(x,y)\\ 0, & \text{otherwise}\end{cases} \tag{3}$$

where $f_A(x,y)$ denotes the pixel point located at spatial position $(x,y)$ in the low-frequency subband image corresponding to medical source image A, and $f_B(x,y)$ denotes the pixel point located at spatial position $(x,y)$ in the low-frequency subband image corresponding to medical source image B.
In an exemplary embodiment of the present disclosure, the step of applying guided kernel weighted guided filtering to each low-frequency initial fusion decision map to obtain a plurality of low-frequency fusion decision guide maps, and of selecting the maximum value at each pixel position across the low-frequency fusion decision guide maps as the element value so as to determine the low-frequency final fusion decision map, includes:

the calculation formula of the low-frequency fusion decision guide map comprises:

$$\mathrm{guided\_map}_n(x,y)=\mathrm{SKWGF}\big(f_A(x,y),\ \mathrm{map}_n,\ r_n,\ \varepsilon_n\big),\quad n\in(1,2,3,\ldots) \tag{4}$$

where $r$ denotes the neighborhood size of the guide kernel and $\varepsilon$ denotes the regularization parameter; SKWGF denotes the guided kernel weighted guided filtering model;

the calculation formula of the low-frequency final fusion decision map comprises:

$$\mathrm{fusion\_map}(x,y)=\max\big(\mathrm{guided\_map}_n(x,y)\big),\quad n=1,2,3 \tag{5}$$
In an exemplary embodiment of the present disclosure, the step of fusing the plurality of low-frequency subband images according to the low-frequency final fusion decision map to obtain the low-frequency fused sub-image includes:

$$F_{\mathrm{low}}(x,y)=\mathrm{fusion\_map}(x,y)\odot A_{\mathrm{low}}(x,y)+\big(1-\mathrm{fusion\_map}(x,y)\big)\odot B_{\mathrm{low}}(x,y) \tag{6}$$

where $\odot$ denotes element-wise (dot) multiplication of matrices, and A_low and B_low denote the low-frequency subband images of A and B, respectively.
In an exemplary embodiment of the present disclosure, the step of fusing the plurality of high-frequency subband images by side window filtering to obtain the high-frequency fused sub-images includes:

for the high-frequency subband images of the same scale and direction in the different medical source images, calculating the side window filter values in the different directions within the neighborhood of each pixel point of each high-frequency subband image;

comparing each pixel point with its side window filter values and taking the direction whose value is closest as the optimal filtering direction; computing the side window filter value in the optimal filtering direction and determining the high-frequency final fusion decision map;

and fusing the plurality of high-frequency subband images according to the high-frequency final fusion decision map to obtain the high-frequency fused sub-images.
In an exemplary embodiment of the present disclosure, the step of calculating, for high-frequency subband images of the same scale and direction in different medical source images, the side window filter values in different directions within the neighborhood of each pixel point includes:

the calculation formula of the side window filter (SWF) values in the 8 directions of the local neighborhood of each pixel point comprises:

$$I_{d}=\frac{\sum_{j\in\omega_i^{d}}\omega_{ij}\,q_j}{\sum_{j\in\omega_i^{d}}\omega_{ij}},\quad d\in D \tag{7}$$

where $\omega_i$ denotes the local region centered on pixel point $i$; $\omega_{ij}$ denotes the weight of another pixel point $j$ within the local region centered on pixel point $i$; $D$ denotes the set of side window directions, $D=\{L, R, Up, Do, NW, NE, SW, SE\}$, with L, R, Up, and Do denoting the left, right, upper, and lower sides of pixel point $i$, and NW, NE, SW, and SE denoting its upper-left, upper-right, lower-left, and lower-right sides; and $q_j$ denotes the gray value of pixel point $j$.
In an exemplary embodiment of the present disclosure, the step of comparing each pixel point with its side window filter values, taking the direction with the closest value as the optimal filtering direction, computing the side window filter value in the optimal filtering direction, and determining the high-frequency final fusion decision map includes:

the calculation formula of the optimal filtering direction comprises:

$$I_m=\arg\min_{I_n,\ n\in D}\left\|q_i-I_n\right\|_2^{2} \tag{8}$$

where $q_i$ denotes the gray value of pixel point $i$; $I_n$ denotes the side window filter (SWF) values in the eight directions of the local region centered on pixel point $i$; $D$ denotes the set of eight directions; and $I_m$ denotes the SWF value of the direction whose value is closest to $q_i$;

the calculation formula of the high-frequency final fusion decision map comprises:

$$\mathrm{fusion\_map}_{s,d}(x,y)=\begin{cases}1, & \left|I_{m,A,s,d}(x,y)\right|\ \ge\ \left|I_{m,B,s,d}(x,y)\right|\\ 0, & \text{otherwise}\end{cases} \tag{9}$$

where $I_{m,A,s,d}$ denotes the SWF value of source image A after s-level scale decomposition and $d_s$-direction decomposition, and $I_{m,B,s,d}$ denotes the SWF value of source image B after s-level scale decomposition and $d_s$-direction decomposition.
In an exemplary embodiment of the present disclosure, the step of fusing the plurality of high-frequency subband images according to the high-frequency final fusion decision map to obtain the high-frequency fused sub-image includes:

the calculation formula of the high-frequency fused sub-image comprises:

$$F_{\mathrm{high}}(x,y)=\mathrm{fusion\_map}_{s,d}(x,y)\odot A_{\mathrm{high}}(x,y)+\big(1-\mathrm{fusion\_map}_{s,d}(x,y)\big)\odot B_{\mathrm{high}}(x,y) \tag{10}$$

where A_high and B_high denote the high-frequency subband images of A and B, respectively.
The technical solution provided by the present disclosure can have the following beneficial effects:

Compared with existing NSCT-based methods, the disclosed method runs more efficiently and markedly reduces computational complexity. It fuses the low-frequency subband images with guided kernel weighted guided filtering (SKWGF) and the high-frequency subband images with side window filtering (SWF), so that the principal information and the detail information of the medical source images are each captured and represented according to the information characteristics of the respective subbands. The fusion quality of multi-modal medical images is thereby markedly improved, and the method has good academic value and application prospects.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is apparent that the drawings in the following description are only some embodiments of the disclosure, and that other drawings may be derived from those drawings by a person of ordinary skill in the art without inventive effort.
FIG. 1 shows a schematic step diagram of a multi-modality medical image fusion method in an exemplary embodiment of the present disclosure;
FIG. 2 shows a flow chart of a method of multi-modal medical image fusion in an exemplary embodiment of the present disclosure;
FIG. 3 illustrates a CT image and MRI image contrast map of a simulation experiment in an exemplary embodiment of the present disclosure;
FIG. 4 shows a comparison of simulation experiments using different fusion methods in exemplary embodiments of the present disclosure;
FIG. 5 shows an enlarged view of the various partial regions of FIG. 4;
fig. 4a corresponds to fig. 5a, fig. 4b to fig. 5b, fig. 4c to fig. 5c, fig. 4d to fig. 5d, and fig. 4f to fig. 5f.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The present exemplary embodiment provides, in a first aspect, a method for fusing multi-modal medical images, which is illustrated with reference to fig. 1, and includes the following steps:
step S101: performing a non-subsampled shearlet transform (NSST) on each of a plurality of medical source images to be fused, obtaining from each medical source image a corresponding low-frequency subband image and a plurality of high-frequency subband images;
step S102: fusing the low-frequency subband images using guided kernel weighted guided filtering to obtain a low-frequency fused sub-image;
step S103: fusing the plurality of high-frequency subband images using side window filtering to obtain high-frequency fused sub-images;
step S104: reconstructing the low-frequency fused sub-image and the high-frequency fused sub-images via the inverse NSST to obtain the final fused image.
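For orientation, the four steps can be summarized in the following minimal Python sketch. It is not the embodiment's implementation: `nsst_decompose` and `nsst_reconstruct` are hypothetical stand-ins for an NSST library (none is standard in Python), and `fuse_low_frequency` / `fuse_high_frequency` are sketched after the corresponding steps below.

```python
def fuse_images(A, B, nsst_decompose, nsst_reconstruct):
    """Steps S101-S104 as a pipeline; the NSST callables are supplied by the caller."""
    # S101: NSST decomposition into one low-frequency subband and a list of
    # high-frequency subbands per source image
    A_low, A_highs = nsst_decompose(A)
    B_low, B_highs = nsst_decompose(B)
    # S102: fuse the low-frequency subbands (SKWGF-based rule, sketched below)
    F_low = fuse_low_frequency(A_low, B_low)
    # S103: fuse each pair of same-scale, same-direction high-frequency subbands
    # (SWF-based rule, sketched below)
    F_highs = [fuse_high_frequency(a, b) for a, b in zip(A_highs, B_highs)]
    # S104: inverse NSST reconstruction of the final fused image
    return nsst_reconstruct(F_low, F_highs)
```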
Hereinafter, each step of the above-described method in the present exemplary embodiment will be described in more detail.
Referring to fig. 1, step S101 includes, but is not limited to, the following: performing a non-subsampled shearlet transform (NSST) on every medical source image to be fused; after S-level scale decomposition and $d_s$-direction decomposition, each medical source image yields 1 corresponding low-frequency subband image and a plurality of high-frequency subband images, where $S \ge 1$, $s = 1, 2, \ldots, S$, and $d_s$ denotes the number of directional decompositions at the s-th scale.
In step S102, the process of fusing the low-frequency subband images by using guided kernel weighted guided filtering (SKWGF) includes the following substeps:
step S1021: calculating the Laplacian energy sum of each pixel point in the low-frequency subband image corresponding to each medical source image, where the calculation formula comprises:

$$\mathrm{SML}(x,y)=\sum_{i=x-N}^{x+N}\ \sum_{j=y-N}^{y+N}\big[\mathrm{ML}(i,j)\big]^{2} \tag{1}$$

where $f(x,y)$ denotes the pixel point located at spatial position $(x,y)$ in the low-frequency subband image corresponding to the medical source image; $N$ denotes the radius of the local region centered on pixel $f(x,y)$; and

$$\mathrm{ML}(i,j)=\big|2f(i,j)-f(i-\mathrm{step},j)-f(i+\mathrm{step},j)\big|+\big|2f(i,j)-f(i,j-\mathrm{step})-f(i,j+\mathrm{step})\big|$$

where step denotes the step size; in the usual case the step value is 1.
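As a concrete illustration of equation (1), the following NumPy sketch assumes the common fusion-literature form of the sum-modified-Laplacian (the squared ML summed over the (2N+1) x (2N+1) window); the function names are illustrative, not from the original.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def modified_laplacian(f, step=1):
    """ML(i,j) = |2f - f(up) - f(down)| + |2f - f(left) - f(right)| at distance `step`."""
    pad = np.pad(f.astype(np.float64), step, mode="edge")
    c = pad[step:-step, step:-step]                            # f(i, j)
    up, down = pad[:-2 * step, step:-step], pad[2 * step:, step:-step]
    left, right = pad[step:-step, :-2 * step], pad[step:-step, 2 * step:]
    return np.abs(2 * c - up - down) + np.abs(2 * c - left - right)

def sml(f, N, step=1):
    """Equation (1): sum of squared ML over the local window of radius N."""
    ml2 = modified_laplacian(f, step) ** 2
    # uniform_filter returns the window mean; multiply by the window area to get the sum
    return uniform_filter(ml2, size=2 * N + 1, mode="nearest") * (2 * N + 1) ** 2
```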
Step S1022: selecting, in each low-frequency subband image, several different local region radii centered on each pixel point, and obtaining the corresponding Laplacian energy sum of each low-frequency subband image; obtaining a plurality of low-frequency initial fusion decision maps by comparing the Laplacian energy sums of the low-frequency subband images at the same local region radius.

Step S1023: applying guided kernel weighted guided filtering to each low-frequency initial fusion decision map to obtain a plurality of low-frequency fusion decision guide maps, and selecting, at each pixel position, the maximum value across these guide maps as the element value, thereby determining the low-frequency final fusion decision map.

Step S1024: fusing the plurality of low-frequency subband images according to the low-frequency final fusion decision map to obtain the low-frequency fused sub-image F_low.
Further, step S1022 includes selecting, in each low-frequency subband image, local regions of radius $N$ centered on each pixel point, where $N \in (N_1, N_2, N_3, \ldots, N_n)$; the calculation formula of the Laplacian energy sum (SML) corresponding to each low-frequency subband image comprises:

$$\mathrm{SML}_{K,n}(x,y)=\sum_{i=x-N_n}^{x+N_n}\ \sum_{j=y-N_n}^{y+N_n}\big[\mathrm{ML}_K(i,j)\big]^{2},\qquad K\in\{A,B\} \tag{2}$$

where A and B denote source image A and source image B, respectively;

the formula of the low-frequency initial fusion decision map comprises:

$$\mathrm{map}_n(x,y)=\begin{cases}1, & \mathrm{SML}_{A,n}(x,y)\ \ge\ \mathrm{SML}_{B,n}(x,y)\\ 0, & \text{otherwise}\end{cases} \tag{3}$$

where $f_A(x,y)$ denotes the pixel point located at spatial position $(x,y)$ in the low-frequency subband image corresponding to medical source image A, and $f_B(x,y)$ denotes the pixel point located at spatial position $(x,y)$ in the low-frequency subband image corresponding to medical source image B.
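Building on the `sml` sketch above, equations (2)-(3) amount to computing SML maps at several radii and comparing them; the default radii here are placeholders, not the embodiment's values.

```python
import numpy as np

def initial_decision_maps(A_low, B_low, radii=(1, 2, 3)):
    """Equations (2)-(3): one binary decision map per local-region radius N_n."""
    maps = []
    for N in radii:
        # map_n(x, y) = 1 where SML_A >= SML_B, else 0
        maps.append((sml(A_low, N) >= sml(B_low, N)).astype(np.float64))
    return maps
```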
Further, in step S1023, the calculation formula of the low-frequency fusion decision guide map comprises:

$$\mathrm{guided\_map}_n(x,y)=\mathrm{SKWGF}\big(f_A(x,y),\ \mathrm{map}_n,\ r_n,\ \varepsilon_n\big),\quad n\in(1,2,3,\ldots) \tag{4}$$

where $r$ denotes the neighborhood size of the guide kernel and $\varepsilon$ denotes the regularization parameter; SKWGF denotes the guided kernel weighted guided filtering model;

the calculation formula of the low-frequency final fusion decision map comprises:

$$\mathrm{fusion\_map}(x,y)=\max\big(\mathrm{guided\_map}_n(x,y)\big),\quad n=1,2,3 \tag{5}$$
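For reference, the sketch below is the classic box-window guided filter (He et al.), used here only as a stand-in for the SKWGF model of equation (4); the guide-kernel weighting that distinguishes SKWGF from the plain guided filter is not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, r, eps):
    """Plain guided filter over a (2r+1) x (2r+1) box window; an approximation of SKWGF."""
    size = 2 * r + 1
    mean = lambda x: uniform_filter(x.astype(np.float64), size=size, mode="nearest")
    mean_I, mean_p = mean(guide), mean(src)
    var_I = mean(guide * guide) - mean_I * mean_I      # variance of the guide
    cov_Ip = mean(guide * src) - mean_I * mean_p       # guide/source covariance
    a = cov_Ip / (var_I + eps)                         # local linear coefficients
    b = mean_p - a * mean_I
    return mean(a) * guide + mean(b)                   # edge-aware smoothed output
```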
Further, in step S1024, the step of fusing the plurality of low-frequency subband images according to the low-frequency final fusion decision map to obtain the low-frequency fused sub-image includes:

$$F_{\mathrm{low}}(x,y)=\mathrm{fusion\_map}(x,y)\odot A_{\mathrm{low}}(x,y)+\big(1-\mathrm{fusion\_map}(x,y)\big)\odot B_{\mathrm{low}}(x,y) \tag{6}$$

where $\odot$ denotes element-wise (dot) multiplication of matrices.
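Chaining the pieces together gives a sketch of the full low-frequency rule of equations (3)-(6); the per-map parameters `r_params` and `eps_params` are illustrative assumptions, since their values are not given here.

```python
import numpy as np

def fuse_low_frequency(A_low, B_low, radii=(1, 2, 3),
                       r_params=(2, 4, 8), eps_params=(1e-3, 1e-3, 1e-3)):
    """Equations (3)-(6): guided filtering of the decision maps, pixel-wise
    max-selection (5), then element-wise blending (6)."""
    maps = initial_decision_maps(A_low, B_low, radii)
    guided = [guided_filter(A_low, m, r, eps)
              for m, r, eps in zip(maps, r_params, eps_params)]
    fusion_map = np.maximum.reduce(guided)                  # equation (5)
    return fusion_map * A_low + (1.0 - fusion_map) * B_low  # equation (6)
```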
specifically, referring to fig. 2, in this embodiment, first, values of Laplacian energy Sum (SML) of each pixel point in low-frequency subband images a _ low and B _ low corresponding to two medical source images are calculated, and three different region radii N are respectively selected according to formula (2) 1 ,N 2 ,N 3 (ii) a Then, taking two fused medical source images a and B as an example, a plurality of SMLs are obtained:
Figure BDA0003739357680000083
by comparing SML values of two different low-frequency subband images under the same local region radius, a low-frequency initial fusion decision mapping map is obtained by a formula (3) n
Secondly, for the three low-frequency initial fusion decision maps $\mathrm{map}_1 \sim \mathrm{map}_3$, guided kernel weighted guided filtering is applied to each of them, and the low-frequency fusion decision guide maps $\mathrm{guided\_map}_n$ are obtained from formula (4), where $r$ and $\varepsilon$ respectively denote the guide kernel neighborhood size and the regularization parameter, both of which are set per decision map in this embodiment.

For the three low-frequency fusion decision guide maps $\mathrm{guided\_map}_1 \sim \mathrm{guided\_map}_3$, the maximum value at each pixel position is taken as the element value, so that the low-frequency final fusion decision map $\mathrm{fusion\_map}$ is determined by equation (5).

Finally, the low-frequency subband images A_low and B_low are fused according to the final fusion decision map $\mathrm{fusion\_map}$, and the low-frequency fused sub-image F_low is obtained from formula (6).
In step S103, the process of fusing the high-frequency subband images by Side Window Filtering (SWF) includes the following sub-steps:
step S1031: for the high-frequency subband images of the same scale and direction in the different medical source images, calculating the side window filter values in the different directions within the neighborhood of each pixel point of each high-frequency subband image;
step S1032: comparing each pixel point with its side window filter values and taking the direction whose value is closest as the optimal filtering direction; computing the side window filter value in the optimal filtering direction and determining the high-frequency final fusion decision map;
step S1033: fusing the plurality of high-frequency subband images according to the high-frequency final fusion decision map to obtain the high-frequency fused sub-image F_high.
In this embodiment, step S1031 includes the calculation formula of the side window filter (SWF) values in 8 different directions within the local neighborhood of each pixel point:

$$I_{d}=\frac{\sum_{j\in\omega_i^{d}}\omega_{ij}\,q_j}{\sum_{j\in\omega_i^{d}}\omega_{ij}},\quad d\in D \tag{7}$$

where $\omega_i$ denotes the local region centered on pixel point $i$; $\omega_{ij}$ denotes the weight of another pixel point $j$ within the local region centered on pixel point $i$; $D$ denotes the set of side window directions, $D=\{L, R, Up, Do, NW, NE, SW, SE\}$, with L, R, Up, and Do denoting the left, right, upper, and lower sides of pixel point $i$, and NW, NE, SW, and SE denoting its upper-left, upper-right, lower-left, and lower-right sides; and $q_j$ denotes the gray value of pixel point $j$. Eight corresponding values can be obtained from equation (7).
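A minimal sketch of equation (7), assuming uniform weights $\omega_{ij}=1$ (an assumption; the embodiment's weights are not specified here); it returns the eight side-window means for every pixel via an integral image:

```python
import numpy as np

def side_window_means(img, r=1):
    """Equation (7) with uniform weights: the mean over each of the eight side
    windows (L, R, Up, Do, NW, NE, SW, SE) of half-size r around each pixel."""
    H, W = img.shape
    pad = np.pad(img.astype(np.float64), r, mode="edge")
    # integral image: ii[y, x] = sum of pad[:y, :x]
    ii = np.zeros((H + 2 * r + 1, W + 2 * r + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(pad, axis=0), axis=1)

    def window_mean(top, bottom, left, right):
        # mean over pad rows [y+top, y+bottom] and cols [x+left, x+right], all pixels at once
        h, w = bottom - top + 1, right - left + 1
        y0, x0 = top + r, left + r          # offsets into the padded frame
        s = (ii[y0 + h:y0 + h + H, x0 + w:x0 + w + W]
             - ii[y0:y0 + H, x0 + w:x0 + w + W]
             - ii[y0 + h:y0 + h + H, x0:x0 + W]
             + ii[y0:y0 + H, x0:x0 + W])
        return s / (h * w)

    # each side window keeps the centre row/column, as in side window filtering
    return {
        "L": window_mean(-r, r, -r, 0),  "R": window_mean(-r, r, 0, r),
        "Up": window_mean(-r, 0, -r, r), "Do": window_mean(0, r, -r, r),
        "NW": window_mean(-r, 0, -r, 0), "NE": window_mean(-r, 0, 0, r),
        "SW": window_mean(0, r, -r, 0),  "SE": window_mean(0, r, 0, r),
    }
```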
In step S1032, the calculation formula of the optimal filtering direction comprises:

$$I_m=\arg\min_{I_n,\ n\in D}\left\|q_i-I_n\right\|_2^{2} \tag{8}$$

where $q_i$ denotes the gray value of pixel point $i$; $I_n$ denotes the side window filter (SWF) values in the eight directions of the local region centered on pixel point $i$; $D$ denotes the set of eight directions; and $I_m$ denotes the SWF value of the direction whose value is closest to $q_i$.
For the high-frequency subband images of the same scale and direction in the different medical source images, the side window filter (SWF) value in the optimal direction is calculated, and the high-frequency final fusion decision map $\mathrm{fusion\_map}_{s,d}$ is determined by formula (9), where $I_{m,A,s,d}$ denotes the SWF value of source image A after s-level scale decomposition and $d_s$-direction decomposition, and $I_{m,B,s,d}$ denotes the SWF value of source image B after s-level scale decomposition and $d_s$-direction decomposition.
In step S1033, the high-frequency subband images A_high and B_high are fused according to the high-frequency final fusion decision map $\mathrm{fusion\_map}_{s,d}$, and the high-frequency fused sub-image F_high is obtained from formula (10).
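Combining equations (8)-(10) under the same assumptions gives a sketch of the high-frequency rule; the absolute-value comparison in the decision map follows the reconstruction of equation (9) above and is itself an assumption.

```python
import numpy as np

def swf_value(img, r=1):
    """Equation (8): for each pixel, the side-window mean closest to its own value."""
    means = np.stack(list(side_window_means(img, r).values()))   # shape (8, H, W)
    best = np.argmin((means - img[None, :, :]) ** 2, axis=0)     # optimal direction index
    return np.take_along_axis(means, best[None, :, :], axis=0)[0]

def fuse_high_frequency(A_high, B_high, r=1):
    """Equations (9)-(10): decide by the larger |SWF| response, then blend."""
    I_A, I_B = swf_value(A_high, r), swf_value(B_high, r)
    fusion_map = (np.abs(I_A) >= np.abs(I_B)).astype(np.float64)
    return fusion_map * A_high + (1.0 - fusion_map) * B_high
```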
In step S104, the low-frequency fused sub-image F_low and the high-frequency fused sub-images F_high obtained in the above steps are finally reconstructed by the inverse NSST, yielding the final fused image F.
Simulation and analysis
For the fusion method of multi-modal medical images proposed in this exemplary embodiment, the following simulation experiment was performed; its effectiveness is verified by comparison with the results of several existing fusion methods.
The simulation experiment was run on a personal PC configured with an 11th-generation Intel i7 CPU, 32 GB of memory, and the Windows 10 operating system, with Matlab R2019a as the simulation software. Five comparison methods were adopted: the non-subsampled contourlet transform with phase congruency and local Laplacian (NSCT-PCLL) method, the non-subsampled shearlet transform with parameter-adaptive pulse-coupled neural network (NSST-PAPCNN) method, the Laplacian re-decomposition (LRD) method, the convolutional sparse morphological component analysis (CSMCA) method, and the joint bilateral filtering (JBF) method.
Referring to fig. 3, fig. 3a is a CT image and fig. 3b is an MRI image, both belonging to the category of anatomical medical images; CT is ideal for imaging hard tissue such as bone, while MRI images soft tissues and organs better. As can be seen from fig. 3, a large amount of complementary information exists between the CT image and the MRI image of the same scene; fusing this information improves the visual presentation of the image content and the lesion information, and can provide doctors with a more accurate and richer diagnostic reference.
Referring to FIG. 4, fig. 4a shows the simulation result of the NSCT-PCLL method; fig. 4b, of the NSST-PAPCNN method; fig. 4c, of the LRD method; fig. 4d, of the CSMCA method; fig. 4e, of the JBF method; and fig. 4f, of the method proposed by the present disclosure. Visually, the CSMCA fusion result has low contrast, while the contrast levels of the other five methods are relatively close. To compare the six methods further, representative regions of the fusion results were selected and enlarged in the simulation experiment, as shown in fig. 5. It is not difficult to see that the fusion results of the NSCT-PCLL, NSST-PAPCNN, and CSMCA methods suffer serious information loss, while the fusion results of the LRD and JBF methods introduce jagged false information. The simulation results of the six fusion methods fully demonstrate that the fusion method of multi-modal medical images provided by the present disclosure has better information-capturing capability and characterization performance.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A method of fusing multi-modal medical images, comprising:
performing a non-subsampled shearlet transform (NSST) on each of a plurality of medical source images to be fused, obtaining from each medical source image a corresponding low-frequency subband image and a plurality of high-frequency subband images;
fusing the low-frequency subband images using guided kernel weighted guided filtering to obtain a low-frequency fused sub-image;
fusing the plurality of high-frequency subband images using side window filtering to obtain high-frequency fused sub-images;
and reconstructing the low-frequency fused sub-image and the high-frequency fused sub-images via the inverse of the non-subsampled shearlet transform to obtain a final fused image.
2. The method of fusing multi-modal medical images according to claim 1, wherein the step of performing the non-subsampled shearlet transform on each of the plurality of medical source images to be fused to obtain a corresponding low-frequency subband image and a plurality of high-frequency subband images from each of the medical source images comprises: performing the non-subsampled shearlet transform on each medical source image to be fused; after S-level scale decomposition and $d_s$-direction decomposition, each medical source image yields 1 corresponding low-frequency subband image and a plurality of high-frequency subband images, where $S \ge 1$, $s = 1, 2, \ldots, S$, and $d_s$ denotes the number of directional decompositions at the s-th scale.
3. The method of fusing multi-modal medical images according to claim 1, wherein the step of fusing the low-frequency subband images using guided kernel weighted guided filtering to obtain the low-frequency fused sub-image comprises:

calculating the Laplacian energy sum of each pixel point in each low-frequency subband image, where the calculation formula of the Laplacian energy sum of each pixel point comprises:

$$\mathrm{SML}(x,y)=\sum_{i=x-N}^{x+N}\ \sum_{j=y-N}^{y+N}\big[\mathrm{ML}(i,j)\big]^{2} \tag{1}$$

where $f(x,y)$ denotes the pixel point located at spatial position $(x,y)$ in the low-frequency subband image corresponding to the medical source image; $N$ denotes the radius of the local region centered on pixel $f(x,y)$; and

$$\mathrm{ML}(i,j)=\big|2f(i,j)-f(i-\mathrm{step},j)-f(i+\mathrm{step},j)\big|+\big|2f(i,j)-f(i,j-\mathrm{step})-f(i,j+\mathrm{step})\big|$$

where step denotes the step size;
selecting, in each low-frequency subband image, several different local region radii centered on each pixel point to obtain the corresponding Laplacian energy sum of each low-frequency subband image; obtaining a plurality of low-frequency initial fusion decision maps by comparing the Laplacian energy sums of the low-frequency subband images at the same local region radius;

applying guided kernel weighted guided filtering to each low-frequency initial fusion decision map to obtain a plurality of low-frequency fusion decision guide maps, and selecting, at each pixel position, the maximum value across the low-frequency fusion decision guide maps as the element value, thereby determining the low-frequency final fusion decision map;

and fusing the low-frequency subband images according to the low-frequency final fusion decision map to obtain the low-frequency fused sub-image.
4. The method of fusing multi-modal medical images according to claim 3, wherein the step of selecting several local region radii in each low-frequency subband image to obtain the corresponding Laplacian energy sums, and of obtaining a plurality of low-frequency initial fusion decision maps by comparing the Laplacian energy sums at the same local region radius across the low-frequency subband images, comprises:

selecting, in each low-frequency subband image, local regions of radius $N$ centered on each pixel point, where $N \in (N_1, N_2, N_3, \ldots, N_n)$, so that the calculation formula of the Laplacian energy sum (SML) corresponding to each low-frequency subband image becomes:

$$\mathrm{SML}_{K,n}(x,y)=\sum_{i=x-N_n}^{x+N_n}\ \sum_{j=y-N_n}^{y+N_n}\big[\mathrm{ML}_K(i,j)\big]^{2},\qquad K\in\{A,B\} \tag{2}$$

where A and B denote source image A and source image B, respectively;

the formula of the low-frequency initial fusion decision map comprises:

$$\mathrm{map}_n(x,y)=\begin{cases}1, & \mathrm{SML}_{A,n}(x,y)\ \ge\ \mathrm{SML}_{B,n}(x,y)\\ 0, & \text{otherwise}\end{cases} \tag{3}$$

where $f_A(x,y)$ denotes the pixel point located at spatial position $(x,y)$ in the low-frequency subband image corresponding to medical source image A, and $f_B(x,y)$ denotes the pixel point located at spatial position $(x,y)$ in the low-frequency subband image corresponding to medical source image B.
5. The method of fusing multi-modal medical images according to claim 3, wherein the step of applying guided kernel weighted guided filtering to each low-frequency initial fusion decision map to obtain a plurality of low-frequency fusion decision guide maps, and of selecting the maximum value at each pixel position across the low-frequency fusion decision guide maps as the element value so as to determine the low-frequency final fusion decision map, comprises:

the calculation formula of the low-frequency fusion decision guide map comprises:

$$\mathrm{guided\_map}_n(x,y)=\mathrm{SKWGF}\big(f_A(x,y),\ \mathrm{map}_n,\ r_n,\ \varepsilon_n\big),\quad n\in(1,2,3,\ldots) \tag{4}$$

where $r$ denotes the neighborhood size of the guide kernel and $\varepsilon$ denotes the regularization parameter; SKWGF denotes the guided kernel weighted guided filtering model;

the calculation formula of the low-frequency final fusion decision map comprises:

$$\mathrm{fusion\_map}(x,y)=\max\big(\mathrm{guided\_map}_n(x,y)\big),\quad n=1,2,3 \tag{5}$$
6. The method of fusing multi-modal medical images according to claim 3, wherein the step of fusing the plurality of low-frequency subband images according to the low-frequency final fusion decision map to obtain the low-frequency fused sub-image comprises:

$$F_{\mathrm{low}}(x,y)=\mathrm{fusion\_map}(x,y)\odot A_{\mathrm{low}}(x,y)+\big(1-\mathrm{fusion\_map}(x,y)\big)\odot B_{\mathrm{low}}(x,y) \tag{6}$$

where $\odot$ denotes element-wise (dot) multiplication of matrices, and A_low and B_low denote the low-frequency subband images of A and B, respectively.
7. The method of fusing multi-modal medical images according to claim 1, wherein the step of fusing the plurality of high-frequency subband images using side window filtering to obtain the high-frequency fused sub-images comprises:

for the high-frequency subband images of the same scale and direction in the different medical source images, calculating the side window filter values in the different directions within the neighborhood of each pixel point of each high-frequency subband image;

comparing each pixel point with its side window filter values and taking the direction whose value is closest as the optimal filtering direction; computing the side window filter value in the optimal filtering direction and determining the high-frequency final fusion decision map;

and fusing the plurality of high-frequency subband images according to the high-frequency final fusion decision map to obtain the high-frequency fused sub-images.
8. The method of fusing multi-modal medical images according to claim 7, wherein the step of calculating, for high-frequency subband images of the same scale and direction in different medical source images, the side window filter values in different directions within the neighborhood of each pixel point of each high-frequency subband image comprises:

the calculation formula of the side window filter (SWF) values in the 8 directions of the local neighborhood of each pixel point comprises:

$$I_{d}=\frac{\sum_{j\in\omega_i^{d}}\omega_{ij}\,q_j}{\sum_{j\in\omega_i^{d}}\omega_{ij}},\quad d\in D \tag{7}$$

where $\omega_i$ denotes the local region centered on pixel point $i$; $\omega_{ij}$ denotes the weight of another pixel point $j$ within the local region centered on pixel point $i$; $D$ denotes the set of side window directions, $D=\{L, R, Up, Do, NW, NE, SW, SE\}$, with L, R, Up, and Do denoting the left, right, upper, and lower sides of pixel point $i$, and NW, NE, SW, and SE denoting its upper-left, upper-right, lower-left, and lower-right sides; and $q_j$ denotes the gray value of pixel point $j$.
9. The method of fusing multi-modal medical images according to claim 7, wherein the step of comparing each pixel point with its side window filter values, taking the direction with the closest value as the optimal filtering direction, computing the side window filter value in the optimal filtering direction, and determining the high-frequency final fusion decision map comprises:

the calculation formula of the optimal filtering direction comprises:

$$I_m=\arg\min_{I_n,\ n\in D}\left\|q_i-I_n\right\|_2^{2} \tag{8}$$

where $q_i$ denotes the gray value of pixel point $i$; $I_n$ denotes the side window filter (SWF) values in the eight directions of the local region centered on pixel point $i$; $D$ denotes the set of eight directions; and $I_m$ denotes the SWF value of the direction whose value is closest to $q_i$;

the calculation formula of the high-frequency final fusion decision map comprises:

$$\mathrm{fusion\_map}_{s,d}(x,y)=\begin{cases}1, & \left|I_{m,A,s,d}(x,y)\right|\ \ge\ \left|I_{m,B,s,d}(x,y)\right|\\ 0, & \text{otherwise}\end{cases} \tag{9}$$

where $I_{m,A,s,d}$ denotes the SWF value of source image A after s-level scale decomposition and $d_s$-direction decomposition, and $I_{m,B,s,d}$ denotes the SWF value of source image B after s-level scale decomposition and $d_s$-direction decomposition.
10. The method of fusing multi-modal medical images according to claim 7, wherein the step of fusing the plurality of high-frequency subband images according to the high-frequency final fusion decision map to obtain the high-frequency fused sub-image comprises:

the calculation formula of the high-frequency fused sub-image comprises:

$$F_{\mathrm{high}}(x,y)=\mathrm{fusion\_map}_{s,d}(x,y)\odot A_{\mathrm{high}}(x,y)+\big(1-\mathrm{fusion\_map}_{s,d}(x,y)\big)\odot B_{\mathrm{high}}(x,y) \tag{10}$$

where A_high and B_high denote the high-frequency subband images of A and B, respectively.
CN202210811433.XA 2022-07-11 2022-07-11 Fusion method of multi-modal medical images Pending CN115100172A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210811433.XA CN115100172A (en) 2022-07-11 2022-07-11 Fusion method of multi-modal medical images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210811433.XA CN115100172A (en) 2022-07-11 2022-07-11 Fusion method of multi-modal medical images

Publications (1)

Publication Number Publication Date
CN115100172A true CN115100172A (en) 2022-09-23

Family

ID=83296792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210811433.XA Pending CN115100172A (en) 2022-07-11 2022-07-11 Fusion method of multi-modal medical images

Country Status (1)

Country Link
CN (1) CN115100172A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342444A (en) * 2023-02-14 2023-06-27 山东财经大学 Dual-channel multi-mode image fusion method and fusion imaging terminal
CN117408905A (en) * 2023-12-08 2024-01-16 四川省肿瘤医院 Medical image fusion method based on multi-modal feature extraction
CN117408905B (en) * 2023-12-08 2024-02-13 四川省肿瘤医院 Medical image fusion method based on multi-modal feature extraction

Similar Documents

Publication Publication Date Title
CN106682435B (en) System and method for automatically detecting lesion in medical image through multi-model fusion
Martín-Fernández et al. An approach for contour detection of human kidneys from ultrasound images using Markov random fields and active contours
US7724256B2 (en) Fast graph cuts: a weak shape assumption provides a fast exact method for graph cuts segmentation
CN109727270B (en) Motion mechanism and texture feature analysis method and system of cardiac nuclear magnetic resonance image
CN104637044B (en) The ultrasonoscopy extraction system of calcified plaque and its sound shadow
CN112837274B (en) Classification recognition method based on multi-mode multi-site data fusion
CN107977926A (en) A kind of different machine brain phantom information fusion methods of PET/MRI for improving neutral net
EP1789920A1 (en) Feature weighted medical object contouring using distance coordinates
CN110660063A (en) Multi-image fused tumor three-dimensional position accurate positioning system
CN109118487B (en) Bone age assessment method based on non-subsampled contourlet transform and convolutional neural network
CN112085736B (en) Kidney tumor segmentation method based on mixed-dimension convolution
CN110570394B (en) Medical image segmentation method, device, equipment and storage medium
JP2009517163A (en) Method, system and computer program for segmenting structure associated with reference structure in image
Mienye et al. Improved predictive sparse decomposition method with densenet for prediction of lung cancer
CN108280804A (en) A kind of multi-frame image super-resolution reconstruction method
CN114863225A (en) Image processing model training method, image processing model generation device, image processing equipment and image processing medium
CN115100172A (en) Fusion method of multi-modal medical images
CN106952268A (en) Medical image segmentation method based on incidence matrix self-learning and explicit rank constraint
CN117333751A (en) Medical image fusion method
CN117475268A (en) Multimode medical image fusion method based on SGDD GAN
Lakshmi et al. An adaptive MRI-PET image fusion model based on deep residual learning and self-adaptive total variation
CN112102327B (en) Image processing method, device and computer readable storage medium
CN112686932A (en) Image registration method and image processing method for medical image, and medium
CN115909016A (en) System, method, electronic device, and medium for analyzing fMRI image based on GCN
CN114202464B (en) X-ray CT local high-resolution imaging method and device based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination