CN110428392A - A medical image fusion method based on dictionary learning and low-rank representation - Google Patents
A medical image fusion method based on dictionary learning and low-rank representation
- Publication number: CN110428392A
- Application number: CN201910850346.3A
- Authority: CN (China)
- Prior art keywords: image, fusion, dictionary, matrix, low-rank
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/251 — Pattern recognition; fusion techniques of input or preprocessed data
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06V10/507 — Extraction of image or video features; summing image-intensity values; histogram projection analysis
- G06T2207/10081 — Image acquisition modality: computed x-ray tomography [CT]
- G06T2207/10088 — Image acquisition modality: magnetic resonance imaging [MRI]
- G06T2207/20221 — Image fusion; image merging
- G06T2207/30168 — Image quality inspection
Abstract
The present invention proposes a medical image fusion method based on dictionary learning and low-rank representation. The method comprises the following steps: first, the source images are segmented into blocks using a sliding-window technique, and the multi-source image samples are classified according to histogram of oriented gradients (HOG) features; second, the blocks are converted into vectors and dictionary learning is carried out, with the dictionaries trained by the iterative singular value decomposition (K-SVD) method; then a low-rank representation (LRR) method is used to obtain the low-rank representation coefficients; next, a preliminary fused image is obtained through the l1-norm maximum principle and the fusion rule; finally, image compensation is applied to obtain the final fused image. The present invention achieves good results both in subjective visual quality and in objective indicators, producing fused images of high quality.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a medical image fusion method based on dictionary learning and low-rank representation.
Background art
Medical images play an important role in clinical diagnosis and surgical navigation. However, owing to differences in imaging mechanisms, different medical images differ in how they represent tissue and organ information. For example, computed tomography (CT) can accurately detect compact structures such as bone and implants, while magnetic resonance imaging (MRI) provides high-resolution anatomical information for soft tissue but is less sensitive than CT for diagnosing fractures. A single-modality medical image can therefore reflect only limited structural and morphological information about the same organ or tissue. To obtain sufficient diagnostic information, a doctor must extract information from images of different modalities, which is inconvenient in practice. To solve this problem, a method is needed to integrate the complementary information of images of different modalities, i.e. image fusion.
Over the past few decades, image fusion techniques based on various principles have been developed. Early image fusion relied mainly on non-representation-learning methods, among which multi-scale transforms are the most common. Classical transform domains used in image fusion include the Laplacian pyramid (LP), the discrete wavelet transform (DWT), the discrete cosine transform (DCT), and the non-subsampled contourlet transform (NSCT). These classical methods have certain shortcomings in detail preservation. Another class of methods is sparse representation based on dictionary learning. Although sparse-representation fusion methods have many advantages, their ability to capture global structure is limited. Conversely, low-rank representation (LRR) captures the global structure of the data but does not preserve local structure. The present invention therefore proposes a medical image fusion method based on dictionary learning and LRR to solve the above problems.
Summary of the invention
In view of the above defects and deficiencies of the prior art, the present invention proposes a medical image fusion method based on dictionary learning and low-rank representation, which solves the problems of global-structure capture, local-structure capture, and detail preservation in existing medical image fusion. The method comprises the following steps:
Step 1: segment the two registered source images [I1, I2] into image blocks using a sliding-window technique, compute the histogram of oriented gradients (HOG) of each segmented image block, and classify the image blocks according to their HOG features; the size of I1 and I2 is assumed to be M × N.
Step 2: sparsely represent the classified image blocks and train multiple dictionaries with the K-SVD method.
Step 3: combine the trained dictionaries into a single global dictionary, represent the source images with the low-rank representation (LRR) method, and obtain the low-rank representation coefficients.
Step 4: select the low-rank coefficients by the l1-norm maximum principle, compute the fusion weights according to WSEML, the weighted sum of the neighborhood-based modified Laplacian, and fuse to obtain a preliminary fused image.
Step 5: apply image compensation to the preliminary fused image to obtain the final fused image.
The dictionary-learning part of the fusion method sparsely represents the image, the low-rank representation (LRR) captures the global structure of the image, and the introduced activity measure WSEML computes the fusion weights, which solves the detail-extraction problem in medical image fusion.
Preferably, step 1 specifically includes:
(1) Segment the two registered source images [I1, I2] with a sliding-window technique. Assuming that I1 and I2 are of size M × N, that the window size is n × n, and that the step length is s, the images are divided into Q blocks Pi (i = 1, 2, …, Q).
(2) In the HOG feature-extraction process, assume there are L bins {θ1, θ2, …, θL}, and let Gi(θj) (j = 1, 2, …, L) denote the gradient value of the i-th block in the j-th bin. Define Ji as the class of block Pi; the classification rule is as follows: Gimax = max{Gi(θj)}, and J = arg maxj{Gi(θj)} gives the dominant gradient of Pi. A threshold T is used to determine whether block Pi has a dominant gradient; Ji = 0 means that Pi has no dominant gradient, that is, Pi is random.
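Step 1's sliding-window segmentation and gradient-histogram classification can be sketched as follows. The bin count L, the dominance threshold T, and the use of finite-difference gradients are illustrative assumptions, not the patent's exact parameters.

```python
import numpy as np

def classify_blocks(img, n=8, s=4, L=6, T=0.3):
    """Slide an n-by-n window with step s over img, bin each block's
    gradient orientations into L bins, and assign a class J_i: the
    dominant-orientation bin if one bin clearly dominates the gradient
    energy, or 0 (no dominant gradient, a 'random' block) otherwise."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # orientations in [0, pi)
    bins = np.minimum((ang / np.pi * L).astype(int), L - 1)
    blocks, labels = [], []
    M, N = img.shape
    for r in range(0, M - n + 1, s):
        for c in range(0, N - n + 1, s):
            blocks.append(img[r:r+n, c:c+n])
            # G_i(theta_j): gradient magnitude accumulated per orientation bin
            G = np.bincount(bins[r:r+n, c:c+n].ravel(),
                            weights=mag[r:r+n, c:c+n].ravel(), minlength=L)
            total = G.sum()
            if total > 0 and G.max() / total > T:
                labels.append(int(G.argmax()) + 1)       # dominant bin, 1-based
            else:
                labels.append(0)                          # J_i = 0: random block
    return blocks, np.array(labels)
```

A block is labelled with its dominant-orientation bin only when that bin holds a clear majority of the gradient energy; otherwise it falls into the "random" class 0, matching the Ji = 0 case above.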
Step 2 specifically includes:
(1) After classification, all classified image blocks are reshaped into column vectors, which form the corresponding matrices Vj (j = 0, 1, …, L).
(2) The dictionary Dj of matrix Vj is obtained by K-SVD. After each sub-dictionary Dj (j = 0, 1, …, L) is obtained, the sub-dictionaries are combined into a global dictionary D. The global dictionary D is used in the image fusion process and serves as the dictionary input of the LRR.
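A minimal K-SVD loop for step 2 might look like the sketch below: greedy orthogonal matching pursuit (OMP) for sparse coding, alternated with SVD-based atom updates. The sparsity level k, atom count, and iteration count are assumptions; production code would use an optimized K-SVD implementation.

```python
import numpy as np

def omp(D, x, k):
    """Greedy OMP: approximate x with at most k atoms of D."""
    idx, r = [], x.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))
        if j in idx:
            break
        idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        r = x - D[:, idx] @ coef
    z = np.zeros(D.shape[1])
    z[idx] = coef
    return z

def ksvd(V, n_atoms, k=3, n_iter=5, seed=0):
    """Minimal K-SVD: alternate OMP sparse coding with SVD-based atom
    updates. V holds vectorized image blocks as columns."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((V.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        Z = np.column_stack([omp(D, V[:, i], k) for i in range(V.shape[1])])
        for j in range(n_atoms):
            used = np.nonzero(Z[j])[0]
            if used.size == 0:
                continue
            # residual with atom j removed, restricted to samples that use it
            E = V[:, used] - D @ Z[:, used] + np.outer(D[:, j], Z[j, used])
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]              # rank-1 update of atom j
            Z[j, used] = S[0] * Vt[0]      # and of its coefficient row
    return D, Z
```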
Step 3 specifically includes:
(1) In the image fusion process, each source image is first divided into Q image blocks; all image blocks are then converted into vectors in dictionary order, and these vectors form the image matrix VIA. The same operation is applied to source image I2 to obtain the image matrix VIB.
(2) The LRR coefficient matrices ZA and ZB are computed from VIA and VIB by solving
min ||ZC||* + λ ||EC||2,1,  s.t.  VIC = D ZC + EC,
where VIC (C ∈ {A, B}) is the image matrix obtained from I1 or I2, D is the global dictionary, ZC (C ∈ {A, B}) is the LRR coefficient matrix of VIC, ||·||* denotes the nuclear norm, i.e. the sum of the singular values of a matrix, ||E||2,1 = Σx √(Σy ([E]y,x)²) is the l2,1-norm, with y and x indexing the rows and columns of EC, λ > 0 is a balance coefficient, and EC (C ∈ {A, B}) is the error matrix of VIC.
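The constrained problem above is typically solved with an inexact augmented Lagrange multiplier (ALM/ADMM) scheme; the sketch below follows that standard recipe, using singular-value thresholding for the nuclear norm and column-wise shrinkage for the l2,1 term. The penalty schedule (mu, rho) and the lambda value are assumptions, not the patent's settings.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(S - tau, 0)) @ Vt

def l21_prox(M, tau):
    """Column-wise shrinkage: proximal operator of the l2,1 norm."""
    out = np.zeros_like(M)
    norms = np.linalg.norm(M, axis=0)
    keep = norms > tau
    out[:, keep] = M[:, keep] * (1 - tau / norms[keep])
    return out

def lrr(V, D, lam=0.1, mu=1e-2, rho=1.1, mu_max=1e6, n_iter=200):
    """Inexact-ALM sketch of LRR:  min ||Z||_* + lam*||E||_{2,1}
    s.t. V = D Z + E, with auxiliary variable J for the nuclear norm."""
    n, m = D.shape[1], V.shape[1]
    Z = np.zeros((n, m)); J = np.zeros((n, m)); E = np.zeros_like(V)
    Y1 = np.zeros_like(V); Y2 = np.zeros((n, m))
    inv = np.linalg.inv(D.T @ D + np.eye(n))
    for _ in range(n_iter):
        J = svt(Z + Y2 / mu, 1 / mu)
        Z = inv @ (D.T @ (V - E + Y1 / mu) + J - Y2 / mu)
        E = l21_prox(V - D @ Z + Y1 / mu, lam / mu)
        Y1 = Y1 + mu * (V - D @ Z - E)
        Y2 = Y2 + mu * (Z - J)
        mu = min(mu * rho, mu_max)
    return Z, E
```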
Step 4 specifically includes:
(1) Let ZCi (C ∈ {A, B}) denote the i-th column vector of ZC, i = 1, 2, …, Q. The relevant LRR coefficient vectors are selected by the l1-norm maximum strategy, and the fused LRR coefficient vectors are obtained as
Zfi = ZAi if ||ZAi||1 ≥ ||ZBi||1, and Zfi = ZBi otherwise,
where Zf is the fused LRR coefficient matrix, Zfi is its i-th column vector, i = 1, 2, …, Q, and ||·||1 denotes the l1-norm.
(2) The fused image-block matrix Vf is obtained from
Vf = D Zf,
where D is the global dictionary and Zf is the fused LRR coefficient matrix. The vector Vfi, the i-th column of Vf (i = 1, 2, …, Q), is reshaped into an n × n image block, defined as blocki, i = 1, 2, …, Q.
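Steps 4(1)-(2) amount to a column-wise l1-max choice followed by a dictionary reconstruction, which can be sketched as:

```python
import numpy as np

def fuse_lrr_coeffs(ZA, ZB):
    """l1-norm maximum selection: for each block i, keep whichever source's
    LRR coefficient column has the larger l1 norm (ties go to A)."""
    pick_a = np.abs(ZA).sum(axis=0) >= np.abs(ZB).sum(axis=0)
    Zf = ZB.copy()
    Zf[:, pick_a] = ZA[:, pick_a]
    return Zf

def reconstruct_blocks(D, Zf, n):
    """Vf = D Zf; each column of Vf is reshaped into an n-by-n fused block."""
    Vf = D @ Zf
    return [Vf[:, i].reshape(n, n) for i in range(Vf.shape[1])]
```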
(3) Assume that the two image blocks with an overlapping part are ga and gb. The activity measure WSEML, the weighted sum of the neighborhood-based modified Laplacian, is computed for S ∈ {A, B}, where q and p index the q-th row and p-th column of ga or gb, q = 1, 2, …, n, p = 1, 2, …, n. W is a (2r+1) × (2r+1) weight matrix with radius r; each element of W takes the value 2^(2r−d), where d is the four-neighborhood distance of that element to the window centre, and S denotes the image being evaluated, i.e. ga or gb.
(4) When WA < WB,
Fusion(a, b) = gA(q, p) × W1 + gB(q, p) × W2, a = 1, 2, …, M, b = 1, 2, …, N,
where WA = WSEMLA(q, p), WB = WSEMLB(q, p), and W1, W2 are the computed weights. If WA > WB, the positions of WA and WB in the equation are exchanged:
Fusion(a, b) = gB(q, p) × W1 + gA(q, p) × W2, a = 1, 2, …, M, b = 1, 2, …, N.
Fusion is defined as the preliminary fused image, and Fusion(a, b) is the pixel at row a, column b of the preliminary fused image; ga(q, p) and gb(q, p) are the two overlapping image blocks.
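A sketch of the WSEML activity measure and the overlap fusion rule follows. The patent's formula images did not survive extraction, so the modified-Laplacian definition, the city-block reading of the "four-neighborhood distance" d, and the fixed weights w1, w2 are assumptions; here the block with the larger WSEML simply receives the larger weight.

```python
import numpy as np

def wseml(block, r=1):
    """Weighted sum of the modified Laplacian. EML per pixel is
    |2f - f_left - f_right| + |2f - f_up - f_down|, then averaged over a
    (2r+1)x(2r+1) window with weights 2**(2r - d), d being the city-block
    distance to the window centre (an assumption)."""
    f = np.pad(block.astype(float), 1, mode='edge')
    eml = (np.abs(2*f[1:-1, 1:-1] - f[1:-1, :-2] - f[1:-1, 2:]) +
           np.abs(2*f[1:-1, 1:-1] - f[:-2, 1:-1] - f[2:, 1:-1]))
    ys, xs = np.mgrid[-r:r+1, -r:r+1]
    W = 2.0 ** (2*r - (np.abs(ys) + np.abs(xs)))
    W /= W.sum()
    g = np.pad(eml, r, mode='edge')
    out = np.zeros_like(eml)
    for dy in range(2*r + 1):            # weighted window sum
        for dx in range(2*r + 1):
            out += W[dy, dx] * g[dy:dy+eml.shape[0], dx:dx+eml.shape[1]]
    return out

def fuse_blocks(ga, gb, w1=0.6, w2=0.4):
    """Pixel-wise fusion of two overlapping blocks: the block with the
    larger WSEML gets the larger weight w1 (w1, w2 are example values)."""
    wa, wb = wseml(ga), wseml(gb)
    return np.where(wa >= wb, ga*w1 + gb*w2, gb*w1 + ga*w2)
```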
Step 5 specifically includes:
(1) Obtain the difference map DiffF between the image Fusion and the mean of I1 and I2.
(2) Apply morphological dilation to the difference map DiffF; the result serves as the input of a parameter-adaptive pulse-coupled neural network (PA-PCNN), yielding the image DiffFPCNN, which is then normalized.
(3) Process the difference map DiffF and combine it with Fusion to obtain the final fused image Finish:
Finish(a, b) = Fusion(a, b) + DiffF_dilate(a, b) × DiffFPCNN(a, b),
where DiffF_dilate is the image obtained by applying morphological dilation to the difference map DiffF.
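Step 5 can be sketched end to end as below. Since a PA-PCNN is beyond a short example, its normalized firing map is stood in for by a simple min-max normalization of the dilated difference map; the 3 × 3 dilation structuring element is also an assumption.

```python
import numpy as np

def dilate3(img):
    """Grey-scale morphological dilation with a 3x3 structuring element."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return np.max([p[dy:dy+h, dx:dx+w] for dy in range(3) for dx in range(3)],
                  axis=0)

def compensate(fusion, i1, i2):
    """Finish = Fusion + DiffF_dilate * DiffFPCNN, with the PA-PCNN output
    replaced by a min-max normalisation of the dilated map (a stand-in)."""
    diff = fusion - (i1 + i2) / 2.0            # DiffF
    dilated = dilate3(diff)                     # DiffF_dilate
    rng = dilated.max() - dilated.min()
    pcnn_like = ((dilated - dilated.min()) / rng if rng > 0
                 else np.zeros_like(dilated))   # stand-in for DiffFPCNN
    return fusion + dilated * pcnn_like
```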
Brief description of the drawings
In order to explain the embodiments of the present invention and the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a structural block diagram of the medical image fusion method based on dictionary learning and low-rank representation according to a specific embodiment of the present invention;
Fig. 2 shows two source images according to a specific embodiment of the present invention;
Fig. 3 shows the image fusion result according to a specific embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention are described in detail below; examples of the embodiments are shown in the drawings, where the same or similar reference numbers throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the present invention, and are not to be construed as limiting the present invention.
Referring to Fig. 1 of the description, the medical image fusion method based on dictionary learning and low-rank representation of the present invention comprises the following steps:
Step 1: segment the two registered source images [I1, I2] into image blocks using a sliding-window technique, compute the histogram of oriented gradients (HOG) of each segmented image block, and classify the image blocks according to their HOG features; the size of I1 and I2 is assumed to be M × N.
(1) Segment the two registered source images [I1, I2] with a sliding-window technique. Assuming that I1 and I2 are of size M × N, that the window size is n × n, and that the step length is s, the images are divided into Q blocks Pi (i = 1, 2, …, Q).
(2) In the HOG feature-extraction process, assume there are L bins {θ1, θ2, …, θL}, and let Gi(θj) (j = 1, 2, …, L) denote the gradient value of the i-th block in the j-th bin. Define Ji as the class of block Pi; the classification rule is as follows: Gimax = max{Gi(θj)}, and J = arg maxj{Gi(θj)} gives the dominant gradient of Pi. A threshold T is used to determine whether block Pi has a dominant gradient; Ji = 0 means that Pi has no dominant gradient, that is, Pi is random.
Step 2: sparsely represent the classified image blocks and train multiple dictionaries with the K-SVD method.
(1) After classification, all classified image blocks are reshaped into column vectors, which form the corresponding matrices Vj (j = 0, 1, …, L).
(2) The dictionary Dj of matrix Vj is obtained by K-SVD. After each sub-dictionary Dj (j = 0, 1, …, L) is obtained, the sub-dictionaries are combined into a global dictionary D. The global dictionary D is used in the image fusion process and serves as the dictionary input of the LRR.
Step 3: combine the trained dictionaries into a single global dictionary, represent the source images with the low-rank representation (LRR) method, and obtain the low-rank representation coefficients.
(1) In the image fusion process, each source image is first divided into Q image blocks; all image blocks are then converted into vectors in dictionary order, and these vectors form the image matrix VIA. The same operation is applied to source image I2 to obtain the image matrix VIB.
(2) The LRR coefficient matrices ZA and ZB are computed from VIA and VIB by solving
min ||ZC||* + λ ||EC||2,1,  s.t.  VIC = D ZC + EC,
where VIC (C ∈ {A, B}) is the image matrix obtained from I1 or I2, D is the global dictionary, ZC (C ∈ {A, B}) is the LRR coefficient matrix of VIC, ||·||* denotes the nuclear norm, i.e. the sum of the singular values of a matrix, ||E||2,1 = Σx √(Σy ([E]y,x)²) is the l2,1-norm, with y and x indexing the rows and columns of EC, λ > 0 is a balance coefficient, and EC (C ∈ {A, B}) is the error matrix of VIC.
Step 4: select the low-rank coefficients by the l1-norm maximum principle, compute the fusion weights according to WSEML, the weighted sum of the neighborhood-based modified Laplacian, and fuse to obtain a preliminary fused image.
(1) Let ZCi (C ∈ {A, B}) denote the i-th column vector of ZC, i = 1, 2, …, Q. The relevant LRR coefficient vectors are selected by the l1-norm maximum strategy, and the fused LRR coefficient vectors are obtained as
Zfi = ZAi if ||ZAi||1 ≥ ||ZBi||1, and Zfi = ZBi otherwise,
where Zf is the fused LRR coefficient matrix, Zfi is its i-th column vector, i = 1, 2, …, Q, and ||·||1 denotes the l1-norm.
(2) The fused image-block matrix Vf is obtained from
Vf = D Zf,
where D is the global dictionary and Zf is the fused LRR coefficient matrix. The vector Vfi, the i-th column of Vf (i = 1, 2, …, Q), is reshaped into an n × n image block, defined as blocki, i = 1, 2, …, Q.
(3) Assume that the two image blocks with an overlapping part are ga and gb. The activity measure WSEML, the weighted sum of the neighborhood-based modified Laplacian, is computed for S ∈ {A, B}, where q and p index the q-th row and p-th column of ga or gb, q = 1, 2, …, n, p = 1, 2, …, n. W is a (2r+1) × (2r+1) weight matrix with radius r; each element of W takes the value 2^(2r−d), where d is the four-neighborhood distance of that element to the window centre, and S denotes the image being evaluated, i.e. ga or gb.
(4) When WA < WB,
Fusion(a, b) = gA(q, p) × W1 + gB(q, p) × W2, a = 1, 2, …, M, b = 1, 2, …, N,
where WA = WSEMLA(q, p), WB = WSEMLB(q, p), and W1, W2 are the computed weights. If WA > WB, the positions of WA and WB in the equation are exchanged:
Fusion(a, b) = gB(q, p) × W1 + gA(q, p) × W2, a = 1, 2, …, M, b = 1, 2, …, N.
Fusion is defined as the preliminary fused image, and Fusion(a, b) is the pixel at row a, column b of the preliminary fused image; ga(q, p) and gb(q, p) are the two overlapping image blocks.
Step 5: apply image compensation to the preliminary fused image to obtain the final fused image.
(1) Obtain the difference map DiffF between the image Fusion and the mean of I1 and I2.
(2) Apply morphological dilation to the difference map DiffF; the result serves as the input of a parameter-adaptive pulse-coupled neural network (PA-PCNN), yielding the image DiffFPCNN, which is then normalized.
(3) Process the difference map DiffF and combine it with Fusion to obtain the final fused image Finish:
Finish(a, b) = Fusion(a, b) + DiffF_dilate(a, b) × DiffFPCNN(a, b),
where DiffF_dilate is the image obtained by applying morphological dilation to the difference map DiffF.
In order to quantitatively evaluate the performance of different fusion methods for medical image fusion, the present invention uses the evaluation parameters entropy (EN), the sum of the correlations of differences (SCD), average gradient (AVG), edge intensity (EI), spatial frequency (SF), figure definition (FD), and mutual information (MI). The comparison fusion methods are image fusion with convolutional sparse representation (CSR), medical image fusion in the non-subsampled shearlet transform domain based on a parameter-adaptive pulse-coupled neural network (NSST-PAPCNN), image fusion based on the directional discrete cosine transform and principal component analysis (DDCT-PCA), and the Laplacian-pyramid image fusion method based on the discrete cosine transform (DCT-LP). The results are recorded in Table 1; a larger parameter indicates a better effect and a higher fused image quality.
Table 1: comparison of image fusion result parameters of different methods
| Metric | CSR | NSST_PAPCNN | DDCT-PCA | DCT-LP | Proposed |
| --- | --- | --- | --- | --- | --- |
| EN | 0.999537 | 0.974531156 | 0.998245 | 0.779888 | 0.999999933 |
| MI | 1.999074 | 1.949062312 | 1.99649 | 1.559775 | 1.999999866 |
| SCD | 1.077217 | 1.539213138 | 1.034647 | 0.911093 | 1.392087796 |
| edge | 87.55595 | 83.39720191 | 52.4083 | 99.62058 | 90.89056079 |
| G | 10.66156 | 10.01598777 | 6.480024 | 14.66764 | 11.00615204 |
| AVG | 8.637632 | 8.223030571 | 5.25503 | 10.71748 | 8.962417008 |
| SF | 36.64251 | 32.75371082 | 19.36494 | 38.21401 | 40.12292627 |
As the data in Table 1 show, the first column lists the evaluation parameters; the second column is image fusion with convolutional sparse representation (CSR); the third column is medical image fusion in the non-subsampled shearlet transform domain based on a parameter-adaptive pulse-coupled neural network (NSST-PAPCNN); the fourth column is image fusion based on the directional discrete cosine transform and principal component analysis (DDCT-PCA); the fifth column is the Laplacian-pyramid image fusion method based on the discrete cosine transform (DCT-LP); and the sixth column is the method proposed by the present invention. It can be seen that the proposed method is the only one that ranks in the top three for all indexes, which shows that it can enhance the detail expressiveness of the image and extract and preserve more representative image features from the source images. It therefore performs well in all respects and obtains a better fusion effect.
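The EN and MI columns of Table 1 can in principle be reproduced with the standard histogram-based definitions; the sketch below is a generic implementation of these two metrics (the bin count and the 8-bit value range are assumptions), not the patent's evaluation code.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (EN) of an 8-bit image's grey-level histogram."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = h / h.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(a, b, bins=256):
    """Mutual information (MI) between two images via their joint histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                             range=[[0, 256], [0, 256]])
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px[:, None] * py[None, :])[nz])).sum())
```

For instance, an image whose 256 grey levels are all equally frequent has EN = 8 bits, and the mutual information of an image with itself equals its entropy.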
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those skilled in the art will understand that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principle and purpose of the present invention; the scope of the invention is defined by the claims and their equivalents.
Claims (6)
1. A medical image fusion method based on dictionary learning and low-rank representation, characterized in that the method specifically comprises:
Step 1: segmenting the two registered source images [I1, I2] into image blocks using a sliding-window technique, computing the histogram of oriented gradients (HOG) of each segmented image block, and classifying the image blocks according to their HOG features, the size of I1 and I2 being assumed to be M × N;
Step 2: sparsely representing the classified image blocks and training multiple dictionaries using the iterative singular value decomposition (K-SVD) method;
Step 3: combining the trained dictionaries into a single global dictionary, representing the source images with the low-rank representation (LRR) method, and obtaining the low-rank representation coefficients;
Step 4: selecting the low-rank coefficients by the l1-norm maximum principle, computing fusion weights according to the activity measure WSEML, the weighted sum of the neighborhood-based modified Laplacian, and fusing to obtain a preliminary fused image;
Step 5: applying image compensation to the preliminary fused image to obtain the final fused image.
2. The medical image fusion method based on dictionary learning and low-rank representation according to claim 1, characterized in that step 1 specifically comprises:
(1) segmenting the two registered source images [I1, I2] with a sliding-window technique; assuming that I1 and I2 are of size M × N, that the window size is n × n, and that the step length is s, the images are divided into Q blocks Pi (i = 1, 2, …, Q);
(2) in the HOG feature-extraction process, assuming there are L bins {θ1, θ2, …, θL} and letting Gi(θj) (j = 1, 2, …, L) denote the gradient value of the i-th block in the j-th bin, defining Ji as the class of block Pi, the classification rule being as follows: Gimax = max{Gi(θj)}, J = arg maxj{Gi(θj)} gives the dominant gradient of Pi, a threshold T is used to determine whether block Pi has a dominant gradient, and Ji = 0 means that Pi has no dominant gradient, that is, Pi is random.
3. The medical image fusion method based on dictionary learning and low-rank representation according to claim 1, characterized in that step 2 specifically comprises:
(1) after classification, reshaping all classified image blocks into column vectors, which form the corresponding matrices Vj (j = 0, 1, …, L);
(2) obtaining the dictionary Dj of matrix Vj by K-SVD; after each sub-dictionary Dj (j = 0, 1, …, L) is obtained, the sub-dictionaries are combined into a global dictionary D, which is used in the image fusion process and serves as the dictionary input of the LRR.
4. The medical image fusion method based on dictionary learning and low-rank representation according to claim 1, characterized in that step 3 specifically comprises:
(1) in the image fusion process, first dividing each source image into Q image blocks, then converting all image blocks into vectors in dictionary order, these vectors forming the image matrix VIA; applying the same operation to source image I2 to obtain the image matrix VIB;
(2) computing the LRR coefficient matrices ZA and ZB from VIA and VIB by solving
min ||ZC||* + λ ||EC||2,1,  s.t.  VIC = D ZC + EC,
where VIC (C ∈ {A, B}) is the image matrix obtained from I1 or I2, D is the global dictionary, ZC (C ∈ {A, B}) is the LRR coefficient matrix of VIC, ||·||* denotes the nuclear norm, i.e. the sum of the singular values of a matrix, ||E||2,1 = Σx √(Σy ([E]y,x)²) is the l2,1-norm, with y and x indexing the rows and columns of EC, λ > 0 is a balance coefficient, and EC (C ∈ {A, B}) is the error matrix of VIC.
5. a kind of Method of Medical Image Fusion based on dictionary learning and low-rank representation according to claim 1, feature
It is, the step 4 specifically includes:
(1)ZCI, C ∈ { A, B } represent ZCThe i-th column vector i={ 1,2..., Q }, by 1 norm and MAXIMUM SELECTION strategy come really
Fixed correlation LRR coefficient vector, fusion LRR related coefficient vector can be obtained by following formula,
ZfRepresent the fusion LRR correlation matrix obtained by formula, ZfiRepresent ZfThe i-th column vector i={ 1,2..., Q }, |
|·||1Indicate 1 norm;
(2) blending image block matrix VfIt can be obtained by formula, D represents Global Dictionary, Z in this formulafIt represents by public affairs
The fusion LRR correlation matrix that formula obtains,
Vf=DZf
Vector VfI represents matrix VfThe i-th column vector, i={ 1,2..., Q }, by vector VfI is reconstructed into the image block of n × n,
It is defined as blocki, i={ 1,2, Q };
(3) suppose the two image blocks with an overlapping region are g_A and g_B; the activity measure, the weighted sum of the neighborhood-based modified Laplacian (WSEML), is computed as
WSEML_S(q, p) = Σ_{a=−r..r} Σ_{b=−r..r} W(a + r + 1, b + r + 1) · EML_S(q + a, p + b),
EML_S(q, p) = |2S(q, p) − S(q − 1, p) − S(q + 1, p)| + |2S(q, p) − S(q, p − 1) − S(q, p + 1)|,
where S ∈ {A, B}, q and p respectively index the q-th row and p-th column in g_A or g_B, q = 1, 2, ..., n, p = 1, 2, ..., n; W is a (2r + 1) × (2r + 1) weight matrix with radius r, and each element of W takes the value 2^{2r − d}, where d is the four-neighborhood distance from that element to the center of W; S here is simply the image block being evaluated, g_A or g_B;
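A minimal numpy sketch of this activity measure follows. Edge-replicate padding at block borders and city-block distance for d are reading choices the claim does not fix, so both are assumptions:

```python
import numpy as np

def eml(S):
    """Modified Laplacian |2S - left - right| + |2S - up - down|,
    with edge-replicate padding at the borders (an assumption)."""
    P = np.pad(S, 1, mode='edge')
    c = P[1:-1, 1:-1]
    return (np.abs(2 * c - P[1:-1, :-2] - P[1:-1, 2:]) +
            np.abs(2 * c - P[:-2, 1:-1] - P[2:, 1:-1]))

def wseml(S, r=1):
    """Weighted sum of EML over a (2r+1)x(2r+1) window; the weight at
    offset (a, b) is 2**(2r - d) with d = |a| + |b| (city-block distance)."""
    H, W = S.shape
    e = np.pad(eml(S), r, mode='edge')
    out = np.zeros((H, W))
    for a in range(-r, r + 1):
        for b in range(-r, r + 1):
            w = 2.0 ** (2 * r - (abs(a) + abs(b)))
            out += w * e[r + a:r + a + H, r + b:r + b + W]
    return out

print(wseml(np.ones((4, 4))))  # constant block -> zero activity everywhere
```

A flat block produces zero activity, while any intensity edge inside the block yields a positive WSEML response, which is what the fusion rule in step (4) compares.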
(4) when W_A < W_B,
Fusion(a, b) = g_A(q, p) × W1 + g_B(q, p) × W2, a = 1, 2, ..., M, b = 1, 2, ..., N,
where W_A = WSEML_A(q, p), W_B = WSEML_B(q, p), and W1, W2 represent the computed weights; if W_A > W_B, the positions of W_A and W_B in the formula are exchanged, giving
Fusion(a, b) = g_B(q, p) × W1 + g_A(q, p) × W2, a = 1, 2, ..., M, b = 1, 2, ..., N.
Fusion is defined as the preliminary fused image, and Fusion(a, b) represents the pixel at row a, column b of the preliminary fused image, a = 1, 2, ..., M, b = 1, 2, ..., N; g_A(q, p) and g_B(q, p) are the two image blocks with an overlapping region, with q and p respectively indexing the q-th row and p-th column in g_A or g_B, q = 1, 2, ..., n, p = 1, 2, ..., n.
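Step (4) can be sketched pixel-wise as below. The claim does not define how W1 and W2 are computed, so they are assumed here to be the normalized activities, with the larger weight attached to the more active source; this matches the swap rule for W_A > W_B but is an interpretive choice:

```python
import numpy as np

def fuse_blocks(gA, gB, WA, WB):
    """Pixel-wise fusion of two overlapping blocks given their WSEML maps.
    W1/W2 are assumed to be normalized activities (smaller/larger weight),
    which is one plausible reading of the claim, not its literal text."""
    total = WA + WB + 1e-12                  # avoid division by zero
    W1 = np.minimum(WA, WB) / total          # weight of the less active block
    W2 = np.maximum(WA, WB) / total          # weight of the more active block
    # WA < WB branch: gA*W1 + gB*W2; otherwise the swapped combination.
    return np.where(WA < WB, gA * W1 + gB * W2, gB * W1 + gA * W2)

gA, gB = np.zeros((3, 3)), np.ones((3, 3))
out = fuse_blocks(gA, gB, WA=np.zeros((3, 3)), WB=np.ones((3, 3)))
print(out)
```

When one block dominates the activity map everywhere, the output converges to that block, as expected of an activity-driven rule.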
6. The medical image fusion method based on dictionary learning and low-rank representation according to claim 1, characterized in that step 5 specifically comprises:
(1) the difference image DiffF between the fused image Fusion and the mean of I1 and I2 is obtained by the formula
DiffF = Fusion − (I1 + I2) / 2;
(2) morphological dilation is applied to the difference image DiffF, and the result is used as the input of the parameter-adaptive pulse-coupled neural network (PA-PCNN) to obtain the image DiffFPCNN, which is then normalized;
(3) the difference image DiffF is processed according to the following formula and combined with Fusion to obtain the final fused image Finish,
Finish(a, b) = Fusion(a, b) + DiffF_dilate(a, b) × DiffFPCNN(a, b),
where DiffF_dilate is the image obtained by applying morphological dilation to the difference image DiffF.
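The three steps of this claim can be sketched end-to-end as below. The PA-PCNN is not reproduced: a plain min-max normalization of the dilated difference image stands in for the normalized DiffFPCNN firing map, and the 3×3 dilation window is an assumption:

```python
import numpy as np

def grey_dilate(img, r=1):
    """Greyscale morphological dilation: neighborhood maximum over a
    (2r+1)x(2r+1) window, with edge-replicate padding."""
    H, W = img.shape
    P = np.pad(img, r, mode='edge')
    out = np.full((H, W), -np.inf)
    for a in range(2 * r + 1):
        for b in range(2 * r + 1):
            out = np.maximum(out, P[a:a + H, b:b + W])
    return out

def final_fusion(fusion, I1, I2):
    diff = fusion - (I1 + I2) / 2.0           # DiffF
    diff_dilate = grey_dilate(diff)           # morphological dilation
    rng = diff_dilate.max() - diff_dilate.min()
    # Stand-in for the normalized PA-PCNN output DiffFPCNN:
    pcnn = ((diff_dilate - diff_dilate.min()) / rng
            if rng > 0 else np.zeros_like(diff_dilate))
    return fusion + diff_dilate * pcnn        # Finish

I = np.ones((4, 4))
print(final_fusion(I, I, I))  # identical inputs -> Finish equals Fusion
```

When the preliminary fusion already equals the source mean, the residual term vanishes and Finish reduces to Fusion, which is the intended behavior of the residual-injection step.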
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910850346.3A CN110428392A (en) | 2019-09-10 | 2019-09-10 | A kind of Method of Medical Image Fusion based on dictionary learning and low-rank representation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110428392A true CN110428392A (en) | 2019-11-08 |
Family
ID=68418848
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910850346.3A Pending CN110428392A (en) | 2019-09-10 | 2019-09-10 | A kind of Method of Medical Image Fusion based on dictionary learning and low-rank representation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110428392A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006017233A1 (en) * | 2004-07-12 | 2006-02-16 | Lehigh University | Image fusion methods and apparatus |
CN104282007A (en) * | 2014-10-22 | 2015-01-14 | 长春理工大学 | Contourlet transformation-adaptive medical image fusion method based on non-sampling |
CN107563968A (en) * | 2017-07-26 | 2018-01-09 | 昆明理工大学 | A kind of method based on the group medicine image co-registration denoising for differentiating dictionary learning |
CN109801248A (en) * | 2018-12-18 | 2019-05-24 | 重庆邮电大学 | One New Image fusion method based on non-lower sampling shear transformation |
Non-Patent Citations (5)
Title |
---|
HUI LI: "Multi-focus Image Fusion using dictionary learning and Low-Rank Representation", Computer Vision and Pattern Recognition * |
MING YIN: "Medical Image Fusion With Parameter-Adaptive Pulse Coupled Neural Network in Nonsubsampled Shearlet Transform Domain", IEEE * |
TANG Jinglei et al.: "Application of Wavelet Transform in Medical Image Fusion", Medical Information * |
DENG Zhihua et al.: "Medical Image Fusion Based on Low-Rank Sparse Decomposition and Saliency Measure", Optical Technique * |
CHEN Man et al.: "Guided-Filter Multi-Focus Image Fusion Based on SIFT Dictionary Learning", Journal of Harbin Institute of Technology * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111833284A (en) * | 2020-07-16 | 2020-10-27 | 昆明理工大学 | Multi-source image fusion method based on low-rank decomposition and convolution sparse coding |
CN111833284B (en) * | 2020-07-16 | 2022-10-14 | 昆明理工大学 | Multi-source image fusion method based on low-rank decomposition and convolution sparse coding |
CN111967331A (en) * | 2020-07-20 | 2020-11-20 | 华南理工大学 | Face representation attack detection method and system based on fusion feature and dictionary learning |
CN111967331B (en) * | 2020-07-20 | 2023-07-21 | 华南理工大学 | Face representation attack detection method and system based on fusion feature and dictionary learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20191108 |