CN107194912B - Brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning - Google Patents


Info

Publication number: CN107194912B (granted; published as application CN107194912A)
Application number: CN201710259812.1A
Authority: CN (China)
Legal status: Active
Other languages: Chinese (zh)
Prior art keywords: dictionary, image, fusion, matrix, brain
Inventors: 王丽芳, 董侠, 成茜, 史超宇, 王雁丽
Original and current assignee: North University of China
Application filed by North University of China


Classifications

    • G06T7/0012 — Biomedical image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06T7/10 — Segmentation; Edge detection
    • G06T2207/10081 — Tomographic images: computed x-ray tomography [CT]
    • G06T2207/10088 — Tomographic images: magnetic resonance imaging [MRI]
    • G06T2207/20081 — Special algorithmic details: Training; Learning
    • G06T2207/20221 — Image combination: Image fusion; Image merging


Abstract

The invention discloses a brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning, relates to the technical field of image processing, and can respectively fuse three groups of brain medical images of normal brain, brain atrophy and brain tumor.

Description

Brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning
Technical Field
The invention relates to the technical field of image processing, in particular to a brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning.
Background
In the medical field, doctors need a single image carrying both high spatial and high spectral information in order to accurately diagnose and treat diseases. Such information cannot be obtained from single-modality images alone: for example, CT imaging captures bone structures of the human body at higher resolution, while MR imaging captures detailed information of soft tissues such as muscle, cartilage and fat. Fusing the complementary information of the CT image and the MR image therefore yields more comprehensive and richer image information, which can effectively support clinical diagnosis and adjuvant therapy.
The classical methods applied to brain medical image fusion are based on multi-scale transforms: the discrete wavelet transform (DWT), stationary wavelet transform (SWT), dual-tree complex wavelet transform (DTCWT), Laplacian pyramid (LP), and non-subsampled contourlet transform (NSCT). Multi-scale transform methods extract the salient features of an image well, but they are sensitive to image misregistration, and the traditional fusion strategies fail to preserve detail such as the edges and texture of the source images. With the rise of compressed sensing, sparse-representation-based methods have become widely used in image fusion and achieve excellent results. Yang, B. et al sparsely represent the source images with a redundant DCT dictionary and fuse the sparse coefficients with a "select max" rule. The DCT dictionary is an implicit dictionary formed by the DCT transform; it admits fast implementations but has limited representation capability. Elad et al proposed the K-SVD algorithm for learning dictionaries from training images. Compared with a DCT dictionary, a learned dictionary is an explicit dictionary adapted to the source images and has stronger representation capability. Among learned dictionaries, a dictionary obtained by sampling and training only on natural images is called a single dictionary. A single dictionary can represent any natural image of the same category as the training samples, but for brain medical images with complex structure it is difficult to obtain accurate sparse representation coefficients when one single dictionary is used to represent both the CT image and the MR image. Ophir et al proposed a multi-scale dictionary learning method in the wavelet domain: each sub-band is trained separately with the K-SVD algorithm to obtain a sub-dictionary for that sub-band.
The multi-scale dictionary effectively combines the advantages of analytic and learned dictionaries, capturing the different features contained in images at different scales and in different directions. However, the sub-dictionaries of the sub-bands are still single dictionaries, so sparse representation of the sub-bands still struggles to yield accurate coefficients, and learning the separate dictionaries is time-inefficient. Yu, N. et al proposed an image fusion method based on joint sparse representation with a denoising capability: dictionary learning is performed on the source images to be fused, the common and individual features of the images are extracted according to the JSM-1 model, and the fused image is obtained by combining and reconstructing them. Because the dictionary is trained on the source images to be fused, accurate sparse representation coefficients can be obtained, which suits brain medical images; however, a dictionary must be trained for every pair of source images to be fused, so the time efficiency is low and the flexibility is poor.
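As background, the redundant DCT dictionary mentioned above can be built with the standard construction from the K-SVD literature (a 1-D overcomplete DCT expanded to 2-D patches by a Kronecker product). This sketch is illustrative only and is not part of the patented method:

```python
import numpy as np

def overcomplete_dct_dictionary(m=64, K=256):
    """Redundant (overcomplete) DCT dictionary: sqrt(K) 1-D cosine atoms
    over sqrt(m) samples, expanded to 2-D patch atoms via a Kronecker
    product. Sizes 64 x 256 match 8x8 patches with 256 atoms."""
    p, q = int(np.sqrt(m)), int(np.sqrt(K))
    D1 = np.zeros((p, q))
    for k in range(q):
        v = np.cos(np.arange(p) * k * np.pi / q)
        if k > 0:
            v = v - v.mean()          # remove DC from non-constant atoms
        D1[:, k] = v / np.linalg.norm(v)
    return np.kron(D1, D1)            # m x K dictionary with unit-norm columns

D = overcomplete_dct_dictionary()
print(D.shape)   # (64, 256)
```

Because the atoms are analytic, no training is required, which is exactly the speed/representation-power trade-off the text describes.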
Disclosure of Invention
The embodiment of the invention provides a brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning, which can solve the problems in the prior art.
A brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning comprises the following steps:
a preprocessing stage: for the registered brain CT/MR source images $I_C, I_R \in R^{M\times N}$, where $R^{M\times N}$ denotes the space of $M\times N$ matrices, a sliding window of step size 1 divides each source image into $\sqrt{m}\times\sqrt{m}$ image blocks, so that each of the CT source image $I_C$ and the MR source image $I_R$ yields $(M-\sqrt{m}+1)(N-\sqrt{m}+1)$ blocks; each block is rearranged into an $m$-dimensional column vector, and from the $j$-th block $v_C^j$ of the CT source image and the $j$-th block $v_R^j$ of the MR source image the respective means are subtracted:

$$\hat{v}_C^j = v_C^j - \bar{v}_C^j \cdot \mathbf{1}$$
$$\hat{v}_R^j = v_R^j - \bar{v}_R^j \cdot \mathbf{1}$$

where $\bar{v}_C^j$ and $\bar{v}_R^j$ denote the means of all elements of $v_C^j$ and $v_R^j$ respectively, and $\mathbf{1}$ denotes the $m$-dimensional all-ones column vector;
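The preprocessing stage above — sliding a window with step 1, vectorizing each block, and subtracting each block's mean — can be sketched in NumPy. This is a minimal illustration, not the patented implementation; the toy 4×4 image and patch size 2 are for demonstration only (the experiments use 8×8 blocks):

```python
import numpy as np

def extract_patches(img, p=8):
    """Slide a p x p window with step 1 over img; return each patch as an
    m-dimensional column (m = p*p) with its mean subtracted, plus the means."""
    M, N = img.shape
    patches = []
    for i in range(M - p + 1):
        for j in range(N - p + 1):
            patches.append(img[i:i+p, j:j+p].reshape(-1))
    V = np.stack(patches, axis=1).astype(float)   # m x J matrix of patch vectors
    means = V.mean(axis=0)                        # mean of each patch vector
    V_hat = V - means                             # v - mean * all-ones vector
    return V_hat, means

img = np.arange(16.0).reshape(4, 4)
V_hat, means = extract_patches(img, p=2)
print(V_hat.shape)   # (4, 9): nine 2x2 patches as 4-vectors
```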
a fusion stage: the sparse coefficients $\alpha_C^j$ and $\alpha_R^j$ are solved using the CoefROMP algorithm:

$$\alpha_C^j = \arg\min_{\alpha} \|\alpha\|_0 \ \ \text{s.t.}\ \ \|\hat{v}_C^j - D_F\alpha\|_2 \le \varepsilon$$
$$\alpha_R^j = \arg\min_{\alpha} \|\alpha\|_0 \ \ \text{s.t.}\ \ \|\hat{v}_R^j - D_F\alpha\|_2 \le \varepsilon$$

where $\|\alpha\|_0$ denotes the number of non-zero elements in the sparse coefficient $\alpha$, $\varepsilon$ denotes the allowed deviation, and $D_F$ is the fused dictionary obtained by fusing the dictionaries $D_C$ and $D_R$;
the $\ell_2$ norm of the sparse coefficients is used as the activity measure of the source images, and the sparse coefficients $\alpha_C^j$ and $\alpha_R^j$ are fused by the following "choose max" rule:

$$\alpha_F^j = \begin{cases}\alpha_C^j, & \|\alpha_C^j\|_2 > \|\alpha_R^j\|_2\\ \alpha_R^j, & \text{otherwise}\end{cases}$$

the means $\bar{v}_C^j$ and $\bar{v}_R^j$ are fused using a "weighted average" rule:

$$\bar{v}_F^j = w^j\,\bar{v}_C^j + (1-w^j)\,\bar{v}_R^j$$

where $w^j = \dfrac{\|\alpha_C^j\|_2}{\|\alpha_C^j\|_2 + \|\alpha_R^j\|_2}$; then the fusion result of $\hat{v}_C^j$ and $\hat{v}_R^j$ is:

$$\hat{v}_F^j = D_F\,\alpha_F^j + \bar{v}_F^j \cdot \mathbf{1}$$
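The two fusion rules of this stage (ℓ2-activity "choose max" on the sparse coefficients, weighted average on the means) can be sketched as follows. The exact weight in the original weighted-average formula is not legible in the source, so the activity-based weight below is an assumption:

```python
import numpy as np

def fuse_coefficients(a_c, a_r, m_c, m_r):
    """'Choose max' on sparse coefficients by l2-norm activity, plus a
    weighted average of the patch means; the weight w is an assumed
    activity-based choice, not quoted verbatim from the patent."""
    n_c, n_r = np.linalg.norm(a_c), np.linalg.norm(a_r)
    a_f = a_c if n_c > n_r else a_r          # keep the more 'active' coefficients
    w = n_c / (n_c + n_r + 1e-12)            # activity-based weight
    m_f = w * m_c + (1 - w) * m_r            # weighted average of the means
    return a_f, m_f
```

The fused patch vector is then recovered as `D_F @ a_f + m_f` (mean added back to every element).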
a reconstruction stage: the preprocessing and fusion stages are performed on all image blocks to obtain the fusion result of every block; each block vector $\hat{v}_F^j$ is reshaped into a $\sqrt{m}\times\sqrt{m}$ image block by the reverse sliding-window process, the blocks are put back at their corresponding pixel positions, and the repeated (overlapping) pixels are averaged to obtain the final fused image $I_F$.
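The reverse sliding-window process — accumulate each block at its pixel position and divide by the per-pixel overlap count — can be sketched as (an illustrative helper, not the patented code):

```python
import numpy as np

def reconstruct(patches, shape, p=8):
    """Reverse sliding window: reshape each column back to a p x p block,
    add it at its pixel position, and average overlapping contributions."""
    M, N = shape
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    k = 0
    for i in range(M - p + 1):
        for j in range(N - p + 1):
            acc[i:i+p, j:j+p] += patches[:, k].reshape(p, p)
            cnt[i:i+p, j:j+p] += 1   # how many blocks cover each pixel
            k += 1
    return acc / cnt
```

Extracting patches from an image and running `reconstruct` on them returns the original image exactly, which is a useful sanity check for the step ordering.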
Preferably, in the fusion stage, the fused dictionary is obtained by calculation through the following method:
using high-quality CT and MR images as the training set, vector pairs $\{X_C, X_R\}$ are sampled from the training set, where $X_C = [x_C^1, \ldots, x_C^n] \in R^{d\times n}$ is the matrix of $n$ sampled CT image vectors, $X_R = [x_R^1, \ldots, x_R^n] \in R^{d\times n}$ is the matrix of the corresponding $n$ sampled MR image vectors, and $R^{d\times n}$ denotes the space of $d\times n$ matrices;
complete-support prior information is added to the dictionary-learning cost function, and $D_C$, $D_R$ and $A$ are updated alternately; the corresponding training optimization problem is:

$$\min_{D_C, D_R, A}\ \|X_C - D_C A\|_F^2 + \|X_R - D_R A\|_F^2 \quad \text{s.t.}\ \ \forall i\ \|\alpha_i\|_0 \le \tau,\ \ A \odot M = 0 \tag{1}$$

where $A$ is the joint sparse coefficient matrix of $X_C$ and $X_R$, $\tau$ is the sparsity of $A$, $\odot$ denotes the element-wise (dot) product, and the mask matrix $M$, composed of elements 0 and 1, is defined as $M = \{|A| = 0\}$, i.e. $M(i,j) = 1$ if $A(i,j) = 0$ and $M(i,j) = 0$ otherwise; introducing the auxiliary variables

$$X = \begin{bmatrix} X_C \\ X_R \end{bmatrix}, \qquad D = \begin{bmatrix} D_C \\ D_R \end{bmatrix} \tag{2}$$

formula (1) can be equivalently converted into:

$$\min_{D, A}\ \|X - D A\|_F^2 \quad \text{s.t.}\ \ \forall i\ \|\alpha_i\|_0 \le \tau,\ \ A \odot M = 0 \tag{3}$$
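The mask-matrix construction and the stacking of the coupled problem into a single dictionary-learning problem can be shown concretely (a tiny numeric illustration; the matrices are arbitrary examples):

```python
import numpy as np

# Mask M marks the zero entries of A (M = 1 exactly where A == 0), so the
# constraint A (dot) M = 0 forces those entries to stay zero across updates.
A = np.array([[0.5, 0.0],
              [0.0, 2.0]])
M = (np.abs(A) == 0).astype(float)
print(M)              # ones exactly where A is zero
print((A * M).sum())  # 0.0 -- the complete-support constraint holds

# Stacking X = [X_C; X_R] and D = [D_C; D_R] turns the coupled problem (1)
# into the single problem (3) with a shared coefficient matrix A.
X_C = np.ones((3, 2))
X_R = 2 * np.ones((3, 2))
X = np.vstack([X_C, X_R])   # 6 x 2 stacked sample matrix
```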
the solving process of formula (3) comprises two steps, sparse coding and dictionary updating:
first, in the sparse coding stage, the dictionaries $D_C$ and $D_R$ are initialized with random matrices, and the update of the joint sparse coefficient matrix $A$ is achieved by solving formula (4):

$$\min_{A}\ \|X - D A\|_F^2 \quad \text{s.t.}\ \ \forall i\ \|\alpha_i\|_0 \le \tau,\ \ A \odot M = 0 \tag{4}$$

if the non-zero elements of each column in the joint sparse coefficient matrix $A$ are processed separately and the zero elements are kept intact, formula (4) can be converted into:

$$\min_{\alpha_i}\ \|x_i - D_i \alpha_i\|_2^2 \quad \text{s.t.}\ \ \|\alpha_i\|_0 \le \tau \tag{5}$$

where $D_i$ is the sub-matrix of $D$ whose columns correspond to the non-zero support of the $i$-th column of $A$, and $\alpha_i$ is the non-zero part of the $i$-th column of $A$; formula (5) is solved by the coefficient-reuse orthogonal matching pursuit algorithm CoefROMP to obtain the updated joint sparse coefficient matrix $A$;
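The per-column solve can be illustrated with plain orthogonal matching pursuit. Note this is a stand-in: CoefROMP additionally reuses the previous iteration's coefficients and residuals, which this sketch omits:

```python
import numpy as np

def omp(D, x, tau):
    """Plain OMP with sparsity tau: greedily pick the best-matching atom,
    then re-fit all selected atoms by least squares. Shown only to
    illustrate the per-column sparse coding step; not CoefROMP itself."""
    residual = x.copy()
    support = []
    alpha = np.zeros(D.shape[1])
    for _ in range(tau):
        k = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    alpha[support] = coef
    return alpha
```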
secondly, in the dictionary updating stage, the optimization problem of formula (3) is converted into:

$$\min_{d_k,\,\alpha_T^k}\ \|E_k - d_k\,\alpha_T^k\|_F^2 \tag{6}$$

where the compensation (error) term of formula (6) is written as:

$$E_k = \Big(X - \sum_{j\neq k} d_j\,\alpha_T^j\Big) \odot \widetilde{M}_k \tag{7}$$

in which $d_k$ denotes the $k$-th column of the dictionary $D$ to be updated, $\alpha_T^k$ denotes the $k$-th row of the joint sparse coefficient matrix $A$, and $m_k$, the $k$-th row of the mask matrix $M$, ensures that the zero elements of $\alpha_T^k$ stay in the correct positions; the mask matrix $\widetilde{M}_k$ is the matrix of size $d\times n$ and rank 1 obtained by copying the row vector $m_k$ $d$ times, and it effectively removes from $E_k$ the columns of the samples that do not use the $k$-th atom; singular value decomposition (SVD) of the error matrix $E_k$ gives $E_k = U\Delta V^T$; the atom $d_k$ of the dictionary $D$ is updated with the first column of the matrix $U$, while the $k$-th row $\alpha_T^k$ of the sparse coefficient matrix $A$ is updated to the product of the first column of the matrix $V$ and $\Delta(1,1)$;
finally, the two stages of sparse coding and dictionary updating are executed in a loop until a preset number of iterations is reached, and a pair of coupled dictionaries $D_C$ and $D_R$ is output.
Preferably, the dictionaries $D_C$ and $D_R$ are fused using the following method:
let $L_C(n)$ and $L_R(n)$, $n = 1, 2, \ldots, N$, denote the feature indices of the $n$-th atoms of the CT dictionary and the MR dictionary respectively; the fusion formula is:

$$d_F^n = \begin{cases} d_C^n, & L_C(n) - L_R(n) > \lambda \\ d_R^n, & L_R(n) - L_C(n) > \lambda \\ \dfrac{d_C^n + d_R^n}{2}, & |L_C(n) - L_R(n)| \le \lambda \end{cases}$$

where $\lambda = 0.25$.
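The atom-wise dictionary fusion rule can be sketched directly from the case analysis above (a minimal illustration; the feature index L would be the information entropy described later in the embodiment):

```python
import numpy as np

def fuse_dictionaries(D_C, D_R, L_C, L_R, lam=0.25):
    """Atom-wise fusion: if the feature indices differ by more than lam,
    keep the atom with the larger index ('select max' -- individual
    feature); otherwise average the two atoms (common feature)."""
    D_F = np.empty_like(D_C)
    for n in range(D_C.shape[1]):
        if L_C[n] - L_R[n] > lam:
            D_F[:, n] = D_C[:, n]
        elif L_R[n] - L_C[n] > lam:
            D_F[:, n] = D_R[:, n]
        else:
            D_F[:, n] = (D_C[:, n] + D_R[:, n]) / 2.0
    return D_F
```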
The brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning provided by the embodiment of the invention can fuse three groups of brain medical images: normal brain, brain atrophy, and brain tumor. Extensive experimental results show that, compared with the multi-scale transform-based methods, the traditional sparse representation method, the method based on K-SVD dictionary learning, and the multi-scale dictionary learning method, the proposed ICDL method not only improves the quality of brain medical image fusion but also effectively reduces the dictionary training time, and can provide effective help for clinical medical diagnosis.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flowchart of a brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning according to an embodiment of the present invention;
FIG. 2 is a high quality CT and MR image as a training set;
fig. 3 is the CT/MR fusion result for a normal brain, where a is the CT image, b is the MR image, c is DWT (discrete wavelet transform), d is SWT (stationary wavelet transform), e is NSCT (non-subsampled contourlet transform), f is SRM (traditional sparse representation method), g is SRK (method based on K-SVD dictionary learning), h is MDL (method based on multi-scale dictionary learning), and i is ICDL (improved coupled dictionary learning), the method of the present invention;
FIG. 4 shows CT/MR fusion results of brain atrophy;
FIG. 5 shows the CT/MR fusion results of brain tumors.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning provided in an embodiment of the present invention includes the following steps:
step 100, preprocessing stage: for the registered brain CT/MR source images $I_C, I_R \in R^{M\times N}$, where $R^{M\times N}$ denotes the space of $M\times N$ matrices, a sliding window of step size 1 divides each source image into $\sqrt{m}\times\sqrt{m}$ image blocks, so that each of the CT source image $I_C$ and the MR source image $I_R$ yields $(M-\sqrt{m}+1)(N-\sqrt{m}+1)$ blocks; each block is rearranged into an $m$-dimensional column vector, and from the $j$-th block $v_C^j$ of the CT source image and the $j$-th block $v_R^j$ of the MR source image the respective means are subtracted:

$$\hat{v}_C^j = v_C^j - \bar{v}_C^j \cdot \mathbf{1}, \qquad \hat{v}_R^j = v_R^j - \bar{v}_R^j \cdot \mathbf{1}$$

where $\bar{v}_C^j$ and $\bar{v}_R^j$ denote the means of all elements of $v_C^j$ and $v_R^j$ respectively, and $\mathbf{1}$ denotes the $m$-dimensional all-ones column vector;
step 200, fusion stage: the sparse coefficients $\alpha_C^j$ and $\alpha_R^j$ are solved using the CoefROMP algorithm:

$$\alpha_C^j = \arg\min_{\alpha} \|\alpha\|_0 \ \ \text{s.t.}\ \ \|\hat{v}_C^j - D_F\alpha\|_2 \le \varepsilon, \qquad \alpha_R^j = \arg\min_{\alpha} \|\alpha\|_0 \ \ \text{s.t.}\ \ \|\hat{v}_R^j - D_F\alpha\|_2 \le \varepsilon$$

where $\|\alpha\|_0$ denotes the number of non-zero elements in the sparse coefficient $\alpha$, $\varepsilon$ denotes the allowed deviation, and $D_F$ is the fused dictionary obtained by fusing the dictionaries $D_C$ and $D_R$; it is calculated as follows:
using the high-quality CT and MR images shown in FIG. 2 as the training set, vector pairs $\{X_C, X_R\}$ are sampled from the training set, where $X_C = [x_C^1, \ldots, x_C^n] \in R^{d\times n}$ is the matrix of $n$ sampled CT image vectors, $X_R = [x_R^1, \ldots, x_R^n] \in R^{d\times n}$ is the matrix of the corresponding $n$ sampled MR image vectors, and $R^{d\times n}$ denotes the space of $d\times n$ matrices;
the coupled dictionary training of the invention uses an improved K-SVD algorithm, which adds complete-support prior information to the traditional dictionary-learning cost function and alternately updates $D_C$, $D_R$ and $A$; the corresponding training optimization problem is:

$$\min_{D_C, D_R, A}\ \|X_C - D_C A\|_F^2 + \|X_R - D_R A\|_F^2 \quad \text{s.t.}\ \ \forall i\ \|\alpha_i\|_0 \le \tau,\ \ A \odot M = 0 \tag{3}$$

where $A$ is the joint sparse coefficient matrix of $X_C$ and $X_R$, $\tau$ is the sparsity of $A$, $\odot$ denotes the element-wise (dot) product, and the mask matrix $M$, composed of elements 0 and 1, is defined as $M = \{|A| = 0\}$, i.e. $M(i,j) = 1$ if $A(i,j) = 0$ and $M(i,j) = 0$ otherwise. Therefore the constraint $A \odot M = 0$ keeps all zero elements of $A$ intact. Introducing the auxiliary variables

$$X = \begin{bmatrix} X_C \\ X_R \end{bmatrix}, \qquad D = \begin{bmatrix} D_C \\ D_R \end{bmatrix} \tag{4}$$

formula (3) can be equivalently converted into:

$$\min_{D, A}\ \|X - D A\|_F^2 \quad \text{s.t.}\ \ \forall i\ \|\alpha_i\|_0 \le \tau,\ \ A \odot M = 0 \tag{5}$$
The solving process of formula (5) comprises two steps, sparse coding and dictionary updating.
First, in the sparse coding stage, the dictionaries $D_C$ and $D_R$ are initialized with random matrices, and the update of the joint sparse coefficient matrix $A$ is achieved by solving formula (6):

$$\min_{A}\ \|X - D A\|_F^2 \quad \text{s.t.}\ \ \forall i\ \|\alpha_i\|_0 \le \tau,\ \ A \odot M = 0 \tag{6}$$

if the non-zero elements of each column in the joint sparse coefficient matrix $A$ are processed separately and the zero elements are kept intact, formula (6) can be converted into:

$$\min_{\alpha_i}\ \|x_i - D_i \alpha_i\|_2^2 \quad \text{s.t.}\ \ \|\alpha_i\|_0 \le \tau \tag{7}$$

where $D_i$ is the sub-matrix of $D$ whose columns correspond to the non-zero support of the $i$-th column of $A$, and $\alpha_i$ is the non-zero part of the $i$-th column. Formula (7) is solved by the coefficient-reuse orthogonal matching pursuit algorithm CoefROMP, yielding the updated joint sparse coefficient matrix $A$.
Second, in the dictionary updating stage, the optimization problem of formula (5) can be converted into:

$$\min_{d_k,\,\alpha_T^k}\ \|E_k - d_k\,\alpha_T^k\|_F^2 \tag{8}$$

where the compensation (error) term of formula (8) can be written as:

$$E_k = \Big(X - \sum_{j\neq k} d_j\,\alpha_T^j\Big) \odot \widetilde{M}_k \tag{9}$$

in which $d_k$ denotes the $k$-th column of the dictionary $D$ to be updated, $\alpha_T^k$ denotes the $k$-th row of the joint sparse coefficient matrix $A$, and $m_k$, the $k$-th row of the mask matrix $M$, ensures that the zero elements of $\alpha_T^k$ are in the correct positions. The mask matrix $\widetilde{M}_k$ is the matrix of size $d\times n$ and rank 1 obtained by copying the row vector $m_k$ $d$ times; it effectively removes from $E_k$ the columns of the samples that do not use the $k$-th atom. Singular value decomposition (SVD) of the error matrix $E_k$ gives $E_k = U\Delta V^T$; the atom $d_k$ of the dictionary $D$ is updated with the first column of the matrix $U$, while the $k$-th row $\alpha_T^k$ of the sparse coefficient matrix $A$ is updated to the product of the first column of the matrix $V$ and $\Delta(1,1)$.
Finally, the two stages of sparse coding and dictionary updating are executed in a loop until a preset number of iterations is reached, and a pair of coupled dictionaries $D_C$ and $D_R$ is output. The dictionaries $D_C$ and $D_R$ are then fused as follows:
let $L_C(n)$ and $L_R(n)$, $n = 1, 2, \ldots, N$, denote the feature indices of the $n$-th atoms of the CT dictionary and the MR dictionary respectively. Since the brain CT and MR images are obtained by different imaging devices from the same part of the human body, there must be common features as well as individual features between the two. The invention regards atoms whose feature indices differ greatly as individual features and fuses them with the "select max" rule, while atoms whose feature indices differ little are regarded as common features and fused with the "average" rule; the formula is:

$$d_F^n = \begin{cases} d_C^n, & L_C(n) - L_R(n) > \lambda \\ d_R^n, & L_R(n) - L_C(n) > \lambda \\ \dfrac{d_C^n + d_R^n}{2}, & |L_C(n) - L_R(n)| \le \lambda \end{cases}$$

Here $\lambda = 0.25$, and in view of the physical characteristics of medical images the information entropy is used as the feature index. The method thus combines the sparse-domain and spatial-domain approaches: computing the feature indices of the dictionary atoms from the physical characteristics of the medical images gives a clearer physical meaning than a pure sparse-domain method.
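The information-entropy feature index of an atom can be computed from the distribution of its values; a minimal sketch (the bin count is an assumption, not specified in the text):

```python
import numpy as np

def atom_entropy(atom, bins=16):
    """Shannon information entropy of an atom's value distribution,
    used as the feature index L(n). The bin count is an assumed choice."""
    hist, _ = np.histogram(atom, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```

A constant atom has zero entropy, while an atom spread evenly over two values has entropy 1 bit, matching the intuition that "richer" atoms get larger feature indices.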
In the dictionary updating stage, the dictionary and the non-zero elements of the sparse representation coefficients are updated simultaneously, so the representation error of the dictionary is smaller and its convergence is faster. In the sparse coding stage, since the representation from the previous iteration would otherwise be discarded at each iteration, the CoefROMP algorithm updates the coefficients using the sparse representation residual information of the previous iteration, reaching the solution of the problem faster.
After the fused dictionary $D_F$ is computed, the $\ell_2$ norm of the sparse coefficients is used as the activity measure of the source images, and the sparse coefficients $\alpha_C^j$ and $\alpha_R^j$ are fused by the following rule:

$$\alpha_F^j = \begin{cases}\alpha_C^j, & \|\alpha_C^j\|_2 > \|\alpha_R^j\|_2\\ \alpha_R^j, & \text{otherwise}\end{cases}$$

the means $\bar{v}_C^j$ and $\bar{v}_R^j$ are fused using a "weighted average" rule:

$$\bar{v}_F^j = w^j\,\bar{v}_C^j + (1-w^j)\,\bar{v}_R^j$$

where $w^j = \dfrac{\|\alpha_C^j\|_2}{\|\alpha_C^j\|_2 + \|\alpha_R^j\|_2}$; then the fusion result of $\hat{v}_C^j$ and $\hat{v}_R^j$ is:

$$\hat{v}_F^j = D_F\,\alpha_F^j + \bar{v}_F^j \cdot \mathbf{1}$$
step 300, reconstruction stage: the above two steps are performed on all image blocks to obtain the fusion result of every block. Each block vector $\hat{v}_F^j$ is reshaped into a $\sqrt{m}\times\sqrt{m}$ image block by the reverse sliding-window process, the blocks are put back at their corresponding pixel positions, and the repeated (overlapping) pixels are averaged to obtain the final fused image $I_F$.
To verify the effectiveness of the method of the present invention, three groups of registered brain CT/MR images are selected for fusion: a normal-brain CT/MR pair (a and b of FIG. 3), a brain-atrophy CT/MR pair (a and b of FIG. 4), and a brain-tumor CT/MR pair (a and b of FIG. 5); all images are of size 256 × 256. The selected comparison algorithms are: discrete wavelet transform (DWT), stationary wavelet transform (SWT), non-subsampled contourlet transform (NSCT), the traditional sparse representation method (SRM), the method based on K-SVD dictionary learning (SRK), and the method based on multi-scale dictionary learning (MDL); the fusion results are shown respectively in c, d, e, f, g and h of FIG. 3, FIG. 4 and FIG. 5.
In the multi-scale transform-based methods, the decomposition level is set to 3 for both the DWT and SWT methods, with the wavelet bases set to "db6" and "bior1.1" respectively. The NSCT method uses the "9-7" pyramid filter and the "c-d" directional filter, with the decomposition levels set to $\{2^2, 2^2, 2^3, 2^4\}$. In the sparse-representation-based methods, the sliding step is 1, the image blocks are all 8 × 8, the dictionaries are all 64 × 256, the error tolerance is 0.01, and the sparsity τ is 6; the ICDL method uses the improved K-SVD algorithm with 6 dictionary update cycles (DUCs) and 30 iterations.
As can be seen from FIGS. 3-5, the fused image of the DWT method has blurred edge texture, distorted image information, and blocking artifacts. Compared with the DWT method, the fusion quality of the SWT and NSCT methods is relatively good, with the brightness, contrast and sharpness of the image greatly improved, but edge-brightness distortion and artifacts remain in the soft-tissue and lesion regions. Compared with the multi-scale transform-based methods, the SRM and SRK methods render the bone and soft tissues of the image more clearly, reduce artifacts, and allow the lesion region to be identified well. Compared with the SRM and SRK methods, the MDL method retains more detail information and further improves image quality, though some artifacts persist. The ICDL method of the present invention outperforms the other methods in brightness, contrast, sharpness and detail preservation; the fused images are free of artifacts, with bone tissue, soft tissue and lesion regions displayed clearly, facilitating doctors' diagnosis.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (2)

1. A brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning, characterized by comprising the following steps:
a preprocessing stage: for the already registered brain CT/MR source images I_C, I_R ∈ R^{M×N}, where R^{M×N} denotes the space of matrices with M rows and N columns, a sliding window with step size 1 divides each of the source images I_C and I_R into (M−√m+1)·(N−√m+1) image blocks of size √m×√m, so that the CT source image I_C and the MR source image I_R each yield (M−√m+1)·(N−√m+1) blocks; each image block is arranged into an m-dimensional column vector, and from the j-th image block v_C^j of the CT source image I_C and the j-th image block v_R^j of the MR source image I_R the respective means are subtracted:
v̂_C^j = v_C^j − v̄_C^j · 1
v̂_R^j = v_R^j − v̄_R^j · 1
where v̄_C^j and v̄_R^j denote the means of all elements of v_C^j and v_R^j respectively, and 1 denotes the m-dimensional column vector of all ones;
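The preprocessing stage (step-1 sliding window, vectorization of each √m×√m block, mean removal) can be sketched as follows; `extract_patches` and the 6×6 demo image are illustrative, not part of the claim:

```python
import numpy as np

def extract_patches(img, p):
    """Slide a p x p window with step 1 over img and return the vectorized
    patches as columns of a (p*p, num_patches) matrix."""
    M, N = img.shape
    cols = []
    for i in range(M - p + 1):
        for j in range(N - p + 1):
            cols.append(img[i:i+p, j:j+p].reshape(-1))  # m = p*p column vector
    return np.stack(cols, axis=1)

# Demo on a small synthetic "source image"
img = np.arange(36, dtype=float).reshape(6, 6)
V = extract_patches(img, 4)               # m = 16, (6-4+1)^2 = 9 patches
means = V.mean(axis=0, keepdims=True)     # per-patch mean
V_hat = V - means                         # subtract the mean from every patch vector
```

Each column of `V_hat` corresponds to one mean-removed block vector v̂^j.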
a fusion stage: the sparse coefficients α_C^j and α_R^j are solved with the CoefROMP algorithm:
α_C^j = argmin_α ‖α‖_0  s.t.  ‖v̂_C^j − D_F α‖_2 ≤ ε
α_R^j = argmin_α ‖α‖_0  s.t.  ‖v̂_R^j − D_F α‖_2 ≤ ε
where ‖α‖_0 denotes the number of non-zero elements in the sparse coefficient α, ε denotes the allowable deviation accuracy, and D_F denotes the fused dictionary obtained by fusing the dictionaries D_C and D_R;
the l2 norm of the sparse coefficients is taken as the activity measure of the source images, and the sparse coefficients α_C^j and α_R^j are then fused by the following rule:
α_F^j = α_C^j if ‖α_C^j‖_2 > ‖α_R^j‖_2, otherwise α_F^j = α_R^j;
the means v̄_C^j and v̄_R^j are fused by a "weighted average" rule:
v̄_F^j = w_C v̄_C^j + w_R v̄_R^j
where the weights w_C and w_R are given by [formula image omitted]; the fusion result of v̂_C^j and v̂_R^j is then:
v_F^j = D_F α_F^j + v̄_F^j · 1;
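A minimal sketch of the l2-activity fusion rule for one block pair, assuming the sparse coefficients have already been computed; the weight definition used for the means here is an assumed normalized-activity choice, not the patent's exact (image-only) formula:

```python
import numpy as np

def fuse_block(alpha_c, alpha_r, mean_c, mean_r):
    """Fuse sparse coefficients by the max-l2 activity rule and the block
    means by a weighted average (the weights are an assumed choice)."""
    ac, ar = np.linalg.norm(alpha_c), np.linalg.norm(alpha_r)
    alpha_f = alpha_c if ac > ar else alpha_r        # activity measure: l2 norm
    w_c = ac / (ac + ar) if (ac + ar) > 0 else 0.5   # assumed weight definition
    mean_f = w_c * mean_c + (1.0 - w_c) * mean_r
    return alpha_f, mean_f

alpha_f, mean_f = fuse_block(np.array([0.0, 3.0, 4.0]),  # l2 norm = 5
                             np.array([1.0, 0.0, 2.0]),  # l2 norm = sqrt(5)
                             mean_c=10.0, mean_r=20.0)
```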
a reconstruction stage: the preprocessing and fusion stages are applied to all image blocks to obtain the fusion result of every block; each block vector v_F^j is reshaped into a √m×√m image block by reversing the sliding-window process, the blocks are put back at their corresponding pixel positions, and the repeated (overlapping) pixels are averaged to obtain the final fused image I_F;
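The reverse sliding-window reconstruction with averaging of overlapping pixels can be sketched as follows (a sketch with an illustrative round-trip check, not the claimed implementation):

```python
import numpy as np

def reconstruct(blocks, M, N, p):
    """Place p x p blocks back at their sliding-window positions and average
    the pixels covered by more than one block."""
    acc = np.zeros((M, N))
    cnt = np.zeros((M, N))
    k = 0
    for i in range(M - p + 1):
        for j in range(N - p + 1):
            acc[i:i+p, j:j+p] += blocks[:, k].reshape(p, p)
            cnt[i:i+p, j:j+p] += 1.0
            k += 1
    return acc / cnt  # average the repeated pixels

# Round trip: reconstructing the unmodified patches of an image returns the image
img = np.random.default_rng(0).random((6, 6))
p = 3
patches = np.stack([img[i:i+p, j:j+p].reshape(-1)
                    for i in range(6 - p + 1) for j in range(6 - p + 1)], axis=1)
rec = reconstruct(patches, 6, 6, p)
```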
in the fusion stage, the dictionaries D_C and D_R are fused as follows:
let L_C(n) and L_R(n), n = 1, 2, …, N, denote the characteristic indices of the n-th atom of the CT dictionary and of the MR dictionary respectively; the fusion formula is [formula image omitted], where λ = 0.25.
2. The method of claim 1, wherein the coupled dictionaries D_C and D_R used in the fusion stage are computed as follows:
using high-quality CT and MR images as a training set, vector pairs {X_C, X_R} are sampled from the training set; X_C ∈ R^{d×n} is defined as the matrix formed by the n sampled CT image vectors and X_R ∈ R^{d×n} as the matrix formed by the corresponding n sampled MR image vectors, where R^{d×n} denotes the space of matrices with d rows and n columns;
complete-support prior information is added to the dictionary-learning cost function, and D_C, D_R and A are updated alternately; the corresponding training optimization problem is:
min_{D_C, D_R, A} ‖X_C − D_C A‖_F^2 + ‖X_R − D_R A‖_F^2  s.t.  ∀i: ‖α_i‖_0 ≤ τ,  M ⊙ A = 0     (1)
where A is the joint sparse coefficient matrix of X_C and X_R, τ is the sparsity of the joint sparse coefficient matrix A, ⊙ denotes the element-wise (Hadamard) product, and the mask matrix M, composed of the elements 0 and 1, is defined as M = {|A| = 0}, equivalent to: M(i, j) = 1 if A(i, j) = 0, otherwise M(i, j) = 0; introducing the auxiliary variables
X = [X_C; X_R],  D = [D_C; D_R]     (2)
formula (1) can be equivalently converted into:
min_{D, A} ‖X − D A‖_F^2  s.t.  ∀i: ‖α_i‖_0 ≤ τ,  M ⊙ A = 0     (3)
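The mask M of the complete-support prior can be formed directly from the coefficient matrix A; a small numpy sketch of the definition M = {|A| = 0}:

```python
import numpy as np

# M(i, j) = 1 where A(i, j) == 0, else 0, so that any constraint or penalty
# on M ⊙ A (element-wise product) forces entries that were zero to stay zero,
# i.e. it preserves the support of A.
A = np.array([[0.0, 1.5, 0.0],
              [2.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])
M = (np.abs(A) == 0).astype(float)   # mask: M = {|A| = 0}
constraint = M * A                   # Hadamard product; zero iff support preserved
```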
the solving process of formula (3) comprises two steps, sparse coding and dictionary updating:
first, in the sparse-coding stage, the dictionaries D_C and D_R are initialized with random matrices, and the update of the joint sparse coefficient matrix A is achieved by solving formula (4):
min_A ‖X − D A‖_F^2  s.t.  ∀i: ‖α_i‖_0 ≤ τ,  M ⊙ A = 0     (4)
if the non-zero elements of each column of the joint sparse coefficient matrix A are processed separately while the zero elements are kept intact, formula (4) can be converted into the following formula:
min_{α_i} ‖x_i − D̃_i α_i‖_2^2     (5)
where D̃_i is the submatrix of D whose columns correspond to the non-zero support of the i-th column of A, and α_i is the non-zero part of the i-th column of A; formula (5) is solved by the coefficient-reuse orthogonal matching pursuit (CoefROMP) method, yielding the updated joint sparse coefficient matrix A;
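CoefROMP is a coefficient-reuse variant of orthogonal matching pursuit; to illustrate the underlying pursuit step, here is a plain OMP sketch (not the CoefROMP algorithm itself, which additionally warm-starts from the previous iteration's coefficients):

```python
import numpy as np

def omp(D, x, tau):
    """Plain orthogonal matching pursuit: greedily select up to tau atoms of D
    for x, re-solving a least-squares problem on the support at each step."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(tau):
        k = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if k not in support:
            support.append(k)
        sub, _, _, _ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sub
    coef[support] = sub
    return coef

rng = np.random.default_rng(1)
D = rng.standard_normal((8, 12))
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms
x = 2.0 * D[:, 3] - 1.0 * D[:, 7]    # 2-sparse ground truth
alpha = omp(D, x, tau=2)
```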
second, in the dictionary-updating stage, the optimization problem of formula (3) is converted into:
min_{d̃_k, α̃^k} ‖E_k − d̃_k α̃^k‖_F^2     (6)
with the compensation (error) term of formula (6) written as:
E_k = (X − Σ_{j≠k} d̃_j α̃^j) ⊙ M̃_k     (7)
where d̃_k denotes the k-th column of the dictionary D to be updated, α̃^k denotes the k-th row of the joint sparse coefficient matrix A, m̃_k denotes the k-th row of the mask matrix M, used to guarantee that the zero elements of α̃^k stay in the correct positions, and the mask matrix M̃_k is the matrix of size d×n and rank 1 obtained by copying the row vector m̃_k d times; the mask matrix M̃_k effectively removes from E_k the columns of the samples in which the k-th atom is not used; singular value decomposition (SVD) of the error matrix E_k gives E_k = U Δ V^T; the atom d̃_k of the dictionary D is updated with the first column of the matrix U, while the k-th row α̃^k of the sparse coefficient matrix A is updated to the product of the first column of the matrix V and Δ(1, 1);
finally, the two stages of sparse coding and dictionary updating are executed in a loop until a preset number of iterations is reached, and the pair of coupled dictionaries D_C and D_R is output.
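The rank-1 SVD update of a single atom and its coefficient row can be sketched in K-SVD style as follows; this restricts the error matrix to the samples that use atom k by column selection rather than by the patent's rank-1 mask M̃_k, so it is a simplified sketch, not the claimed procedure:

```python
import numpy as np

def update_atom(X, D, A, k):
    """Update atom k of D and row k of A from the SVD of the error matrix
    E_k = X - sum_{j != k} d_j a^j, restricted to samples that use atom k."""
    omega = np.nonzero(A[k, :])[0]                  # samples using atom k
    if omega.size == 0:
        return D, A
    E_k = X - D @ A + np.outer(D[:, k], A[k, :])    # add back atom k's contribution
    U, S, Vt = np.linalg.svd(E_k[:, omega], full_matrices=False)
    D[:, k] = U[:, 0]                               # first column of U -> new atom
    A[k, omega] = S[0] * Vt[0, :]                   # Delta(1,1) times first column of V
    return D, A

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 10))
D = rng.standard_normal((6, 4))
D /= np.linalg.norm(D, axis=0)
A = rng.standard_normal((4, 10))
err_before = np.linalg.norm(X - D @ A)
D, A = update_atom(X, D, A, k=0)
err_after = np.linalg.norm(X - D @ A)
```

Because the new rank-1 term is the best rank-1 approximation of E_k, the residual error cannot increase after the update.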
CN201710259812.1A 2017-04-20 2017-04-20 Brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning Active CN107194912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710259812.1A CN107194912B (en) 2017-04-20 2017-04-20 Brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning


Publications (2)

Publication Number Publication Date
CN107194912A CN107194912A (en) 2017-09-22
CN107194912B true CN107194912B (en) 2020-12-29

Family

ID=59871779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710259812.1A Active CN107194912B (en) 2017-04-20 2017-04-20 Brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning

Country Status (1)

Country Link
CN (1) CN107194912B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680072A (en) * 2017-11-01 2018-02-09 Huaihai Institute of Technology (淮海工学院) Fusion method for positron emission tomography images and MRI based on deep sparse representation
CN108428225A (en) * 2018-01-30 2018-08-21 李家菊 Image department brain image fusion identification method based on multiple dimensioned multiple features
CN108846430B (en) * 2018-05-31 2022-02-22 兰州理工大学 Image signal sparse representation method based on multi-atom dictionary
CN109461140A (en) * 2018-09-29 2019-03-12 沈阳东软医疗***有限公司 Image processing method and device, equipment and storage medium
CN109946076B (en) * 2019-01-25 2020-04-28 西安交通大学 Planetary wheel bearing fault identification method of weighted multi-scale dictionary learning framework
CN109998599A (en) * 2019-03-07 2019-07-12 Huazhong University of Science and Technology (华中科技大学) Optical/acoustic dual-mode imaging fundus disease diagnosis system based on AI technology
WO2020223865A1 (en) * 2019-05-06 2020-11-12 深圳先进技术研究院 Ct image reconstruction method, device, storage medium, and computer apparatus
CN110443248B (en) * 2019-06-26 2021-12-03 武汉大学 Method and system for eliminating semantic segmentation blocking effect of large-amplitude remote sensing image
CN114428873B (en) * 2022-04-07 2022-06-28 源利腾达(西安)科技有限公司 Thoracic surgery examination data sorting method
CN117877686B (en) * 2024-03-13 2024-05-07 自贡市第一人民医院 Intelligent management method and system for traditional Chinese medicine nursing data

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104182954A (en) * 2014-08-27 2014-12-03 中国科学技术大学 Real-time multi-modal medical image fusion method
CN104376565A (en) * 2014-11-26 2015-02-25 西安电子科技大学 Non-reference image quality evaluation method based on discrete cosine transform and sparse representation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Medical image fusion based on non-subsampled Contourlet transform and regional features; Li Chao et al.; 《计算机应用》 (Journal of Computer Applications); 2013-06-30; Vol. 33, No. 6; pp. 1727-1731 *
Medical image fusion and simultaneous denoising via joint sparse representation; Zong Jingjing et al.; 《中国生物》; 2016-04-30; Vol. 35, No. 2; pp. 133-140 *

Also Published As

Publication number Publication date
CN107194912A (en) 2017-09-22

Similar Documents

Publication Publication Date Title
CN107194912B (en) Brain CT/MR image fusion method based on sparse representation and improved coupled dictionary learning
Armanious et al. MedGAN: Medical image translation using GANs
AU2020100199A4 (en) A medical image fusion method based on two-layer decomposition and improved spatial frequency
CN104156994B (en) Compressed sensing magnetic resonance imaging reconstruction method
Hu et al. Multi-modality medical image fusion based on separable dictionary learning and Gabor filtering
CN109754403A (en) Tumour automatic division method and system in a kind of CT image
CN103218791A (en) Image de-noising method based on sparse self-adapted dictionary
CN107292858B (en) Multi-modal medical image fusion method based on low-rank decomposition and sparse representation
CN107301630B (en) CS-MRI image reconstruction method based on ordering structure group non-convex constraint
CN111325695B (en) Low-dose image enhancement method and system based on multi-dose grade and storage medium
CN111487573B (en) Enhanced residual error cascade network model for magnetic resonance undersampling imaging
CN111340903B (en) Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
CN111696042B (en) Image super-resolution reconstruction method based on sample learning
Lin et al. BATFormer: Towards boundary-aware lightweight transformer for efficient medical image segmentation
Aghabiglou et al. Projection-Based cascaded U-Net model for MR image reconstruction
CN111598964A (en) Quantitative magnetic susceptibility image reconstruction method based on space adaptive network
CN114331849B (en) Cross-mode nuclear magnetic resonance hyper-resolution network and image super-resolution method
CN115457359A (en) PET-MRI image fusion method based on adaptive countermeasure generation network
Chan et al. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction
Gao et al. Hierarchical perception adversarial learning framework for compressed sensing MRI
CN115018728A (en) Image fusion method and system based on multi-scale transformation and convolution sparse representation
Jiang et al. One shot PACS: Patient specific Anatomic Context and Shape prior aware recurrent registration-segmentation of longitudinal thoracic cone beam CTs
CN117333750A (en) Spatial registration and local global multi-scale multi-modal medical image fusion method
Barbano et al. Steerable conditional diffusion for out-of-distribution adaptation in imaging inverse problems
Sander et al. Autoencoding low-resolution MRI for semantically smooth interpolation of anisotropic MRI

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant