CN103279959B - Two-dimensional analysis sparse model, dictionary training method and image denoising method - Google Patents


Publication number
CN103279959B
Authority
CN
China
Prior art keywords
dictionary
omega
image
image block
sparse
Prior art date
Legal status
Active
Application number
CN201310233516.6A
Other languages
Chinese (zh)
Other versions
CN103279959A (en)
Inventor
施云惠
齐娜
丁文鹏
尹宝才
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201310233516.6A priority Critical patent/CN103279959B/en
Publication of CN103279959A publication Critical patent/CN103279959A/en
Application granted granted Critical
Publication of CN103279959B publication Critical patent/CN103279959B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)

Abstract

The present invention discloses a two-dimensional analysis sparse model that fully exploits the spatial correlation of images, requires fewer training samples, and greatly reduces the storage space of the dictionary, together with a dictionary training method and an image denoising method based on this model.

Description

Two-dimensional analysis sparse model, dictionary training method and image denoising method
Technical field
The invention belongs to the technical field of signal modeling, and relates in particular to a two-dimensional analysis sparse model, and to a dictionary training method and an image denoising method based on this model.
Background art
Sparse representation is a relatively mature modeling approach that has been studied extensively and is widely applied in most signal processing fields, such as image denoising, texture synthesis, video processing, and image classification. Sparse modeling of signals falls mainly into two classes: synthesis sparse modeling and analysis sparse modeling.
The synthesis model is defined as x = Db, s.t. ||b||_0 = k, where D is an overcomplete dictionary whose columns are atoms (primitives) and b is a sparse vector. The l_0 norm ||·||_0 characterizes the sparsity of a signal and is defined as the number of nonzero elements in a vector (here k). In the synthesis model, the signal x is represented linearly by k atoms of D [2].
The analysis model states that when a signal x is multiplied by an analysis dictionary Ω, the result b = Ωx satisfies ||b||_0 = p − l, i.e. b is a sparse signal, where l denotes the cosparsity of x: the analysis coefficient vector b has l zero elements. In contrast with the synthesis model, the analysis model emphasizes the positions of the zero elements in the sparse coefficients, because these positions identify the subspace orthogonal to x, i.e. the orthogonal complement space to which the signal belongs.
Research on the analysis sparse model mainly covers three aspects: pursuit algorithms, dictionary training, and theoretical analysis. In the past, analysis sparse models processing a two-dimensional image converted it column by column or row by row into a one-dimensional high-dimensional signal, which causes the following problems: first, the two-dimensional structure of the image is destroyed and the local correlation within the image is not exploited effectively; second, obtaining an effective and robust estimate requires a large number of training samples in a high-dimensional space.
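The adjacency loss described above can be made concrete with a small sketch (numpy; the 7 × 7 block size and the pixel encoding are illustrative assumptions): after a column-major scan, vertically adjacent pixels stay one position apart in the vector, while horizontally adjacent pixels land a full block-height apart.

```python
import numpy as np

# A 7x7 image block; pixel (i, j) encoded as 10*i + j for readability.
d = 7
X = np.array([[10 * i + j for j in range(d)] for i in range(d)])

# Column-major scan ("vec"), as used when feeding 1D analysis models.
v = X.flatten(order="F")

# Position of pixel (i, j) inside the vectorized block.
idx = lambda i, j: j * d + i

# Vertically adjacent pixels (i, j) and (i+1, j) stay 1 apart in v,
# but horizontally adjacent pixels (i, j) and (i, j+1) end up d apart.
print(idx(0, 1) - idx(0, 0))   # horizontal neighbours: distance 7
print(idx(1, 0) - idx(0, 0))   # vertical neighbours: distance 1
```

So a 1D dictionary trained on v has no way of knowing that positions 0 and 7 of the vector were spatial neighbours, which is exactly the structural loss the two-dimensional model avoids.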
Summary of the invention
The technical problem solved by the present invention is to overcome the deficiencies of the prior art by providing a two-dimensional analysis sparse model that fully exploits the spatial correlation of images, requires fewer training samples, and greatly reduces the storage space of the dictionary.
The technical solution of the present invention is the following two-dimensional analysis sparse model, given by formula (2):

$$\{\hat{\Omega}_1,\hat{\Omega}_2,\{\hat{X}_j\}_{j=1}^{M}\}=\arg\min_{\Omega_1,\Omega_2,\{X_j\}_{j=1}^{M}}\sum_{j=1}^{M}\|X_j-Y_j\|_F^2$$
$$\text{s.t. } \|\Omega_1 X_j \Omega_2^{T}\|_0 \le p-l,\ \forall\, 1\le j\le M, \qquad (2)$$
$$\|w_i^{(1)}\|_2=1,\ \forall\, 1\le i\le p_1,$$
$$\|w_k^{(2)}\|_2=1,\ \forall\, 1\le k\le p_2,$$

where p = p_1 × p_2; Ω_1 and Ω_2 are the horizontal and vertical dictionaries characterizing the horizontal and vertical properties of the image block X_j, respectively; w_i^{(1)} and w_k^{(2)} are the i-th row vector of Ω_1 and the k-th row vector of Ω_2, respectively; l is the number of zero elements in the derived sparse signal; M is the number of image blocks in the sample set; and Y_j is the j-th image block in the sample set, i.e. the noisy version of X_j.
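The constraint in (2) can be sketched numerically (numpy; the sizes d_1 = 7, p_1 = p_2 = 8 are assumptions taken from the embodiments, and the random dictionaries are for illustration only): the two dictionaries act on a block from the left and the right, and the cosparsity l counts the zeros of the resulting p_1 × p_2 coefficient array.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, p1, p2 = 7, 8, 8             # block size and dictionary row counts (example values)
Omega1 = rng.standard_normal((p1, d1))
Omega2 = rng.standard_normal((p2, d1))
# Normalize rows to unit l2 norm, as the constraints in (2) require.
Omega1 /= np.linalg.norm(Omega1, axis=1, keepdims=True)
Omega2 /= np.linalg.norm(Omega2, axis=1, keepdims=True)

X = rng.standard_normal((d1, d1))
B = Omega1 @ X @ Omega2.T        # p1 x p2 analysis coefficients
p = p1 * p2
l = p - np.count_nonzero(np.abs(B) > 1e-10)   # cosparsity: number of (near-)zeros
print(B.shape, p, l)
```

For a random block l is typically 0; the model in (2) seeks estimates X_j whose coefficient arrays have at least l zeros.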
A dictionary training method based on the two-dimensional analysis sparse model is also provided, comprising the following steps:

(1) Construct the training sample set I: for a noisy image, randomly sample a number of image blocks and collect them into the training set, obtaining the training sample set {Y_j}_{j=1}^{M}, where Y_j denotes the j-th d_1 × d_1 image block sampled from the image, M_0 = M × d_1, and M denotes the number of image block samples;

(2) Initialize the two dictionaries Ω_1, Ω_2: using the pseudoinverse of a redundant discrete cosine transform (DCT) dictionary, initialize Ω_1, Ω_2;

(3) Sparse coding: first obtain the tensor-generated dictionary Ω = Ω_2 ⊗ Ω_1 and rearrange each block of the training set to obtain the new sample set {vec(Y_j)}_{j=1}^{M}, where vec(Y_j) denotes the result of rearranging the image block Y_j column by column and d = d_1 × d_1; then, for each column signal vec(Y_j), solve the reconstruction vec(X_j) with formula (5):

$$\{vec(\hat{X}_j)\}=\arg\min_{vec(X_j)}\|vec(Y_j)-vec(X_j)\|_2^2, \qquad (5)$$
$$\text{s.t. } \Omega_{\Lambda_j}vec(X_j)=0,\ \operatorname{Rank}(\Omega_{\Lambda_j})=d-r;$$

(4) Dictionary update: when updating Ω_2, assume Ω_1 is given, and use U_1 = [(Ω_1 X_1)^T, (Ω_1 X_2)^T, …, (Ω_1 X_M)^T] to obtain the estimate V_1; the objective function is formula (6); once the estimate V_1 is obtained, find the index set J of the columns of V_1 orthogonal to the row primitive currently being estimated, take the corresponding submatrix U_J of U_1 under the index set J, and solve formula (7) to update the k-th row of the dictionary Ω_2; the update of Ω_1 is analogous:

$$\{\hat{\Omega}_2,\{\hat{v}_j\}_{j=1}^{N}\}=\arg\min_{\Omega_2,\{v_j\}_{j=1}^{N}}\sum_{j=1}^{N}\|u_j-v_j\|_2^2$$
$$\text{s.t. } \Omega_{2,\Lambda_j}u_j=0,\ \forall\,1\le j\le N,$$
$$\operatorname{Rank}(\Omega_{2,\Lambda_j})=d_1-r_2,$$
$$\|w_k^{(2)}\|_2=1,\ \forall\,1\le k\le p_2 \qquad (6)$$
$$\{\hat{w}_k^{(2)}\}=\arg\min\|w_k^{(2)}U_J\|,\ \text{s.t. } \|w_k^{(2)}\|_2=1 \qquad (7);$$

(5) Check the iteration stopping condition: if it is not yet met, go back to step (3); otherwise output the dictionaries Ω_1, Ω_2, completing the training of the dictionary.
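The alternation of steps (3)-(5) can be sketched as follows. This is a simplified illustration, not the patent's algorithm: the pursuit step stands in for GAP/BGA/xBG by picking the l rows of Ω with smallest response and projecting onto their null space, the index set J is approximated crudely by the columns with smallest response, and only Ω_2 is updated (Ω_1 is analogous). All sizes are example values.

```python
import numpy as np

def sparse_code(Omega, y, l):
    """Project y onto the null space of the l rows of Omega most nearly
    orthogonal to y (a simple stand-in for GAP/xBG pursuit)."""
    idx = np.argsort(np.abs(Omega @ y))[:l]      # candidate cosupport Lambda_j
    O = Omega[idx]
    return y - np.linalg.pinv(O) @ (O @ y)       # x satisfying O @ x = 0

def train(Y_blocks, Omega1, Omega2, l, iters=2):
    """One possible reading of steps (3)-(5): alternate sparse coding with the
    Kronecker dictionary and row-wise SVD updates of Omega2."""
    for _ in range(iters):
        Omega = np.kron(Omega2, Omega1)          # tensor-generated dictionary
        X_blocks = [sparse_code(Omega, Y.flatten("F"), l).reshape(Y.shape, order="F")
                    for Y in Y_blocks]
        # U1 stacks (Omega1 @ X_j)^T column blocks: a d1 x (p1*M) training set.
        U1 = np.hstack([(Omega1 @ X).T for X in X_blocks])
        for k in range(Omega2.shape[0]):
            resp = np.abs(Omega2[k] @ U1)
            J = np.argsort(resp)[: max(U1.shape[1] // 4, Omega2.shape[1])]
            UJ = U1[:, J]
            # Row minimizing ||w UJ|| with ||w|| = 1: smallest left singular vector.
            Uu, s, Vt = np.linalg.svd(UJ)
            Omega2[k] = Uu[:, -1]
    return Omega1, Omega2

rng = np.random.default_rng(1)
d1, p1, p2, M = 7, 8, 8, 20
Y_blocks = [rng.standard_normal((d1, d1)) for _ in range(M)]
O1 = rng.standard_normal((p1, d1)); O1 /= np.linalg.norm(O1, axis=1, keepdims=True)
O2 = rng.standard_normal((p2, d1)); O2 /= np.linalg.norm(O2, axis=1, keepdims=True)
O1, O2 = train(Y_blocks, O1, O2, l=6)
print(O1.shape, O2.shape)
```

In practice one would initialize from the pseudoinverse of a redundant DCT dictionary, as step (2) prescribes, rather than from random matrices.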
An image denoising method based on the two-dimensional analysis sparse model and its trained dictionaries is also provided, comprising the following steps:
(1) construct the image blocks to be solved from the noisy image;
(2) use the two trained dictionaries Ω_1, Ω_2 to form the dictionary Ω required by one-dimensional sparse coding;
(3) use a one-dimensional sparse reconstruction method to solve for the reconstruction of Y;
(4) use the above N reconstructed image blocks to obtain the denoised image.
In the two-dimensional analysis model proposed in the present invention, the two dictionaries Ω_1, Ω_2 characterize the image from the horizontal and vertical directions respectively, which makes full use of the spatial structure and spatial correlation of the image. Consequently, dictionaries requiring less storage space suffice to reflect image characteristics adaptively, and denoising with the trained dictionaries recovers the characteristics of the original image from the noisy image better, thereby achieving better image denoising.

The embodiments of the invention show that although the storage size of the dictionaries Ω_1, Ω_2 is much smaller than that of the dictionary Ω in the traditional one-dimensional analysis model, when the storage space equals that of the tensor-generated dictionary Ω_0 = Ω_2 ⊗ Ω_1, the denoising performance of the present invention is comparable to that of the traditional sparse model. Moreover, when the dictionary of the conventional model is shrunk to a size close to that of the dictionaries of the present invention, its denoising performance drops markedly. The present invention therefore reduces the dictionary storage space while still guaranteeing denoising quality, whereas a conventional dictionary reduced to the same storage size denoises far worse. In addition, the dictionary training algorithm of the present invention has lower computational complexity than the dictionary training algorithm of the conventional analysis model.
Brief description of the drawings
Fig. 1 shows the flow chart of the dictionary training method based on the two-dimensional analysis sparse model according to the present invention;
Fig. 2 shows the flow chart of the image denoising method based on the two-dimensional analysis sparse model and its trained dictionaries according to the present invention.
Embodiment
This two-dimensional analysis sparse model is given by formula (2):

$$\{\hat{\Omega}_1,\hat{\Omega}_2,\{\hat{X}_j\}_{j=1}^{M}\}=\arg\min_{\Omega_1,\Omega_2,\{X_j\}_{j=1}^{M}}\sum_{j=1}^{M}\|X_j-Y_j\|_F^2$$
$$\text{s.t. } \|\Omega_1 X_j \Omega_2^{T}\|_0 \le p-l,\ \forall\, 1\le j\le M, \qquad (2)$$
$$\|w_i^{(1)}\|_2=1,\ \forall\, 1\le i\le p_1,$$
$$\|w_k^{(2)}\|_2=1,\ \forall\, 1\le k\le p_2,$$

where p = p_1 × p_2; Ω_1 and Ω_2 are the horizontal and vertical dictionaries characterizing the horizontal and vertical properties of the image block X_j, respectively; w_i^{(1)} and w_k^{(2)} are the i-th row vector of Ω_1 and the k-th row vector of Ω_2, respectively; l is the number of zero elements in the derived sparse signal; M is the number of image blocks in the sample set; and Y_j is the j-th image block in the sample set, i.e. the noisy version of X_j.
A dictionary training method based on the two-dimensional analysis sparse model is also provided, comprising the following steps:

(1) Construct the training sample set I: for a noisy image, randomly sample a number of image blocks and collect them into the training set, obtaining the training sample set {Y_j}_{j=1}^{M}, where Y_j denotes the j-th d_1 × d_1 image block sampled from the image, M_0 = M × d_1, and M denotes the number of image block samples;

(2) Initialize the two dictionaries Ω_1, Ω_2: using the pseudoinverse of a redundant discrete cosine transform (DCT) dictionary, initialize Ω_1, Ω_2;

(3) Sparse coding: first obtain the tensor-generated dictionary Ω = Ω_2 ⊗ Ω_1 and rearrange each block of the training set to obtain the new sample set {vec(Y_j)}_{j=1}^{M}, where vec(Y_j) denotes the result of rearranging the image block Y_j column by column and d = d_1 × d_1; then, for each column signal vec(Y_j), solve the reconstruction vec(X_j) with formula (5):

$$\{vec(\hat{X}_j)\}=\arg\min_{vec(X_j)}\|vec(Y_j)-vec(X_j)\|_2^2, \qquad (5)$$
$$\text{s.t. } \Omega_{\Lambda_j}vec(X_j)=0,\ \operatorname{Rank}(\Omega_{\Lambda_j})=d-r;$$

(4) Dictionary update: when updating Ω_2, assume Ω_1 is given, and use U_1 = [(Ω_1 X_1)^T, (Ω_1 X_2)^T, …, (Ω_1 X_M)^T] to obtain the estimate V_1; the objective function is formula (6); once the estimate V_1 is obtained, find the index set J of the columns of V_1 orthogonal to the row primitive currently being estimated, take the corresponding submatrix U_J of U_1 under the index set J, and solve formula (7) to update the k-th row of the dictionary Ω_2; the update of Ω_1 is analogous:

$$\{\hat{\Omega}_2,\{\hat{v}_j\}_{j=1}^{N}\}=\arg\min_{\Omega_2,\{v_j\}_{j=1}^{N}}\sum_{j=1}^{N}\|u_j-v_j\|_2^2$$
$$\text{s.t. } \Omega_{2,\Lambda_j}u_j=0,\ \forall\,1\le j\le N,$$
$$\operatorname{Rank}(\Omega_{2,\Lambda_j})=d_1-r_2,$$
$$\|w_k^{(2)}\|_2=1,\ \forall\,1\le k\le p_2 \qquad (6)$$
$$\{\hat{w}_k^{(2)}\}=\arg\min\|w_k^{(2)}U_J\|,\ \text{s.t. } \|w_k^{(2)}\|_2=1 \qquad (7);$$

(5) Check the iteration stopping condition: if it is not yet met, go back to step (3); otherwise output the dictionaries Ω_1, Ω_2, completing the training of the dictionary.
An image denoising method based on the two-dimensional analysis sparse model and its trained dictionaries is also provided, comprising the following steps:
(1) construct the image blocks to be solved from the noisy image;
(2) use the two trained dictionaries Ω_1, Ω_2 to form the dictionary Ω required by one-dimensional sparse coding;
(3) use a one-dimensional sparse reconstruction method to solve for the reconstruction of Y;
(4) use the above N reconstructed image blocks to obtain the denoised image.
In the traditional analysis sparse model the image is scanned into vector form column by column or row by row, which destroys the spatial structure of the image and prevents its spatial correlation from being fully used. For instance, elements in the first row and the last row of an image block are not adjacent and are only weakly correlated, yet after column scanning the dictionary training process treats them as strongly correlated; a dictionary trained in this way cannot reflect the spatial correlation of the image well. In the traditional sparse representation model the dictionary reflects the characteristics of the orthogonal complement space of the vectorized image block, i.e. only one dimension of the image is considered. In contrast, in the two-dimensional analysis model proposed in the present invention, the two dictionaries Ω_1, Ω_2 characterize the image from the horizontal and vertical directions respectively, which makes full use of the spatial structure and spatial correlation of the image. Consequently, dictionaries requiring less storage space suffice to reflect image characteristics adaptively, and denoising with the trained dictionaries recovers the characteristics of the original image from the noisy image better, thereby achieving better image denoising.

The embodiments of the invention show that although the storage size of the dictionaries Ω_1, Ω_2 is much smaller than that of the dictionary Ω in the traditional one-dimensional analysis model, when the storage space equals that of the tensor-generated dictionary Ω_0 = Ω_2 ⊗ Ω_1, the denoising performance of the present invention is comparable to that of the traditional sparse model. Moreover, when the dictionary of the conventional model is shrunk to a size close to that of the dictionaries of the present invention, its denoising performance drops markedly. The present invention therefore reduces the dictionary storage space while still guaranteeing denoising quality, whereas a conventional dictionary reduced to the same storage size denoises far worse. In addition, the dictionary training algorithm of the present invention has lower computational complexity than the dictionary training algorithm of the conventional analysis model.
Preferably, the iteration condition is whether the number of iterations reaches an upper limit L, or whether the noise error reaches a specified value.
A specific embodiment of the method is illustrated below.
To ease understanding of the formulas and symbols below, a few notational conventions are given first. Bold uppercase characters denote matrices: for a matrix X, vec(X) denotes the vector obtained by rearranging X column by column. Bold lowercase characters denote vectors, e.g. x, with the l_2 norm defined as $\|x\|_2=(\sum_j x_j^2)^{1/2}$, where x_j is the j-th element of x. ||·||_0 denotes the l_0 norm, used to characterize the number of nonzero elements of a vector. The Frobenius norm of a matrix is $\|X\|_F=(\sum_{i,j} x_{ij}^2)^{1/2}$, where x_{ij} is the entry in row i, column j of X. The Kronecker product (tensor product) of matrices U and V is defined as:

$$U \otimes V = \begin{bmatrix} u_{1,1}V & u_{1,2}V & \cdots \\ u_{2,1}V & u_{2,2}V & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}.$$
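The Kronecker product is what links the two 2D dictionaries to a single 1D dictionary: for column-major vec, vec(Ω_1 X Ω_2^T) = (Ω_2 ⊗ Ω_1) vec(X). A small numpy check (random matrices and the 8 × 7 sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
O1 = rng.standard_normal((8, 7))   # horizontal dictionary Omega_1
O2 = rng.standard_normal((8, 7))   # vertical dictionary Omega_2
X = rng.standard_normal((7, 7))

# vec(A X B^T) = (B kron A) vec(X) for column-major vec, so the
# tensor-generated 1D dictionary is Omega = kron(Omega_2, Omega_1).
Omega = np.kron(O2, O1)                       # 64 x 49
lhs = (O1 @ X @ O2.T).flatten(order="F")      # vec of the 2D analysis result
rhs = Omega @ X.flatten(order="F")
print(np.allclose(lhs, rhs))                  # True
```

This identity is why the sparse coding stage below can reuse one-dimensional pursuit algorithms unchanged.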
To better introduce the two-dimensional analysis sparse model, the present invention first introduces the one-dimensional analysis sparse model. Let the matrix X denote an image block of size d_1 × d_1, and define Y = X + V, where V is a noise block. Given the training sample set {vec(Y_j)}_{j=1}^{M}, where M_0 = M × d_1, the training process of the one-dimensional dictionary is defined as:

$$\{\hat{\Omega},\{vec(\hat{X}_j)\}_{j=1}^{M}\}=\arg\min_{\Omega,\{vec(X_j)\}_{j=1}^{M}}\sum_{j=1}^{M}\|vec(X_j)-vec(Y_j)\|_2^2$$
$$\text{s.t. } \|\Omega\,vec(X_j)\|_0\le p-l,\ \forall\,1\le j\le M, \qquad (1)$$
$$\|w_i\|_2=1,\ \forall\,1\le i\le p,$$

where vec(X_j) denotes the noise-free signal and the vector w_i denotes the i-th row vector of the redundant analysis dictionary Ω. The one-dimensional analysis sparse model assumes that the signal vec(X_j) is orthogonal to at least l rows of Ω, and satisfies ||Ω vec(X_j)||_0 ≤ p − l.
On the basis of the above model, the present invention introduces the two-dimensional analysis model as follows:

$$\{\hat{\Omega}_1,\hat{\Omega}_2,\{\hat{X}_j\}_{j=1}^{M}\}=\arg\min_{\Omega_1,\Omega_2,\{X_j\}_{j=1}^{M}}\sum_{j=1}^{M}\|X_j-Y_j\|_F^2$$
$$\text{s.t. } \|\Omega_1 X_j \Omega_2^{T}\|_0 \le p-l,\ \forall\, 1\le j\le M, \qquad (2)$$
$$\|w_i^{(1)}\|_2=1,\ \forall\, 1\le i\le p_1,$$
$$\|w_k^{(2)}\|_2=1,\ \forall\, 1\le k\le p_2,$$

Here p = p_1 × p_2, and Ω_1 and Ω_2 are the horizontal and vertical dictionaries, with w_i^{(1)} and w_k^{(2)} the i-th row vector of Ω_1 and the k-th row vector of Ω_2 respectively. The two-dimensional analysis model specifies two dictionaries, a horizontal dictionary and a vertical dictionary, characterizing the horizontal and vertical properties of the image block X_j respectively; applied to an image block X_j they produce a sparse signal B_j = Ω_1 X_j Ω_2^T. As before, what should be emphasized are the positions of the zero elements in the derived sparse signal; the number of zero elements is l.

The two dictionaries Ω_1 and Ω_2 both play an important role in the two-dimensional analysis model. Defining A_1 = Ω_1 X and A_2 = Ω_2 X^T means that Ω_1 and Ω_2 are analysis dictionaries for X and X^T respectively, with A_1, A_2 the sparse coefficients. When the dictionary Ω_1 is fixed, Ω_2 can be regarded as the sparse dictionary; conversely, when Ω_2 is fixed, Ω_1 can be regarded as the sparse dictionary. When Ω_1 and Ω_2 are both fixed, the two-dimensional analysis model can be converted into a one-dimensional analysis model satisfying vec(Ω_1 X Ω_2^T) = (Ω_2 ⊗ Ω_1) vec(X). These properties serve as the basis of the dictionary training.
The present invention adopts a two-stage block relaxation method to solve the optimization problem of formula (2), comprising two key steps: sparse coding and dictionary update. The goal of sparse coding is, given the dictionaries Ω_1, Ω_2, to estimate the reconstructed signals; the goal of the dictionary update is then to use these reconstructed signals to update Ω_1, Ω_2.
The sparse coding stage
When Ω_1 and Ω_2 are both given, the reconstruction reduces to solving the following optimization problem for each X_j:

$$\hat{X}_j=\arg\min_{X_j}\|Y_j-X_j\|_F^2,\ \text{s.t. } \|\Omega_1 X_j \Omega_2^{T}\|_0\le p-l \qquad (3)$$

Since, when Ω_1 and Ω_2 are both fixed, letting Ω = Ω_2 ⊗ Ω_1 converts the two-dimensional analysis sparse problem into the following one-dimensional analysis sparse problem:

$$\{vec(\hat{X}_j)\}=\arg\min_{vec(X_j)}\|vec(Y_j)-vec(X_j)\|_2^2, \qquad (4)$$
$$\text{s.t. } \|\Omega\,vec(X_j)\|_0\le p-l,\ \|\omega_i\|_2=1,\ 1\le i\le p$$
Here ω_i denotes the i-th row of Ω. The above problem can be solved by one-dimensional sparse coding methods such as the greedy analysis pursuit (GAP), the backward greedy algorithm (BGA), and the optimized backward greedy algorithm (xBG).
The present invention converts the above problem into the following:

$$\{vec(\hat{X}_j)\}=\arg\min_{vec(X_j)}\|vec(Y_j)-vec(X_j)\|_2^2, \qquad (5)$$
$$\text{s.t. } \Omega_{\Lambda_j}vec(X_j)=0,\ \operatorname{Rank}(\Omega_{\Lambda_j})=d-r$$

Here Λ_j is the set of indices of the l rows of Ω orthogonal to the signal vec(X_j), and Ω_{Λ_j} denotes the submatrix of Ω whose rows are indexed by Λ_j; r denotes the dimension of the subspace to which the signal vec(X_j) belongs, hence Rank(Ω_{Λ_j}) = d − r.
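Once the cosupport Λ_j is known, problem (5) is a projection onto the null space of Ω_{Λ_j}, with the closed form x = (I − Ω_{Λ_j}⁺ Ω_{Λ_j}) y. A minimal numpy sketch (sizes and the random Ω_{Λ_j} are illustrative):

```python
import numpy as np

def nullspace_project(y, O_lambda):
    """Closest x to y with O_lambda @ x = 0: project y onto the null
    space of O_lambda, x = (I - pinv(O_lambda) @ O_lambda) @ y."""
    return y - np.linalg.pinv(O_lambda) @ (O_lambda @ y)

rng = np.random.default_rng(0)
d, l = 49, 6
y = rng.standard_normal(d)
O_lambda = rng.standard_normal((l, d))   # rows of Omega indexed by Lambda_j
x = nullspace_project(y, O_lambda)
print(np.max(np.abs(O_lambda @ x)))      # ~0: constraint satisfied
```

The difficulty handled by the pursuit algorithms above is estimating Λ_j itself; given Λ_j, this projection is exact.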
The dictionary updating stage:
The goal of the dictionary training process is to use the estimates obtained above to update the dictionaries Ω_1, Ω_2. Although the estimates are available, updating Ω_1, Ω_2 remains a non-convex optimization problem, so the present invention adopts an alternating optimization algorithm to update the dictionaries. The optimization procedures for Ω_1 and Ω_2 are similar; the update of Ω_2 is taken as an example below, and the corresponding algorithm is given.
In the present invention, define U_1 = [(Ω_1 X_1)^T, (Ω_1 X_2)^T, …, (Ω_1 X_M)^T] with N = p_1 × M. Using the property of the two-dimensional analysis model established above, Ω_2 is a dictionary for the training set U_1. The update of Ω_2 is therefore converted into the following one-dimensional dictionary learning problem:

$$\{\hat{\Omega}_2,\{\hat{v}_j\}_{j=1}^{N}\}=\arg\min_{\Omega_2,\{v_j\}_{j=1}^{N}}\sum_{j=1}^{N}\|u_j-v_j\|_2^2$$
$$\text{s.t. } \Omega_{2,\Lambda_j}u_j=0,\ \forall\,1\le j\le N, \qquad (6)$$
$$\operatorname{Rank}(\Omega_{2,\Lambda_j})=d_1-r_2,$$
$$\|w_k^{(2)}\|_2=1,\ \forall\,1\le k\le p_2$$

Here u_j is the j-th column of U_1, and v_j and V_1 are the estimates of u_j and U_1 respectively. r_2 characterizes the dimension of the subspace to which the signal u_j belongs.
Similar to the sparse solution of (5), we first fix Ω_2 and update V_1 column by column. Then, when updating each row of the dictionary Ω_2, we denote by J the set of column indices of V_1 orthogonal to that row, and let U_J denote the submatrix of U_1 comprising the columns indexed by J; the update of the row w_k^{(2)} is then obtained by solving:

$$\{\hat{w}_k^{(2)}\}=\arg\min_{w_k^{(2)}}\|w_k^{(2)}U_J\|,\ \text{s.t. } \|w_k^{(2)}\|_2=1 \qquad (7)$$

This is an SVD problem: the eigenvector corresponding to the smallest eigenvalue of the autocorrelation matrix of the sample set U_J is the required solution.
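Problem (7) can be checked directly with numpy (the 7 × 40 size of U_J is an illustrative assumption): the unit row w minimizing ||w U_J|| is the left singular vector of U_J for the smallest singular value, equivalently the smallest-eigenvalue eigenvector of U_J U_J^T.

```python
import numpy as np

rng = np.random.default_rng(0)
UJ = rng.standard_normal((7, 40))        # submatrix U_J (columns indexed by J)

# min ||w @ UJ|| subject to ||w||_2 = 1: take the left singular vector
# of UJ belonging to the smallest singular value.
U, s, Vt = np.linalg.svd(UJ)
w = U[:, -1]                             # singular values in s are descending
print(np.linalg.norm(w @ UJ), s[-1])     # both equal the smallest singular value
```

Since numpy returns singular values in descending order, the last column of U is the minimizer; the attained objective value is exactly the smallest singular value.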
The computational complexity of dictionary training and the dictionary storage size of the proposed two-dimensional analysis sparse model are given here. The exact SVD of an m × n matrix has complexity O(min{mn^2, nm^2}). In the one-dimensional analysis sparse model, updating each row of the dictionary requires one SVD with complexity O(d^2 M) (d << M), so the complexity of updating the whole one-dimensional dictionary is O(p d^2 M). In the two-dimensional sparse model, the matrix on which the SVD is performed has size d_1 × N with N = p_1 × M, so updating one row of the dictionary costs O(d_1^2 N) = O(d p_1 M); the dictionaries Ω_1, Ω_2 contain p_1 and p_2 row primitives respectively, so the complexity of updating all dictionary rows is O(p d M). As for dictionary storage, the present invention requires (p_1 + p_2) × d_1 entries, whereas the one-dimensional model requires p × d with p = p_1 × p_2. Clearly, the present invention greatly reduces both the complexity of training the dictionary and the dictionary storage space.
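The storage comparison is simple arithmetic; for the 7 × 7 blocks of the experiments (d_1 = 7), and assuming p_1 = p_2 = 8 so that the 1D dictionary is the 64 × 49 one reported in Table 1:

```python
# Storage comparison from the text: two small 2D dictionaries versus
# one large 1D dictionary (entry counts).
d1, p1, p2 = 7, 8, 8
two_d = (p1 + p2) * d1         # Omega_1 and Omega_2: (p1 + p2) x d1 entries
one_d = (p1 * p2) * (d1 * d1)  # 1D dictionary: p x d with p = p1*p2, d = d1^2
print(two_d, one_d)            # 112 vs 3136
```

Under these assumed sizes the two-dimensional dictionaries need 112 entries against 3136 for the one-dimensional dictionary, a 28-fold reduction.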
To illustrate the effectiveness of the two-dimensional analysis sparse model and the dictionary training method proposed in the present invention, the above theory and algorithms are applied to image denoising.
The test images used in the present invention are 'Lena', 'Barbara', 'Boats', 'House' and 'Pepper'. White Gaussian noise with σ = 5 is added to these images, and 20,000 blocks of size 7 × 7 are then extracted at random to form the training set for dictionary training. The two dictionaries Ω_1, Ω_2 are initialized with the pseudoinverse of a redundant discrete cosine (DCT) dictionary. The number of training iterations is set to 20, and r = 6 is used when solving (5). After Ω_1, Ω_2 are trained, the present invention uses the trained dictionaries to denoise the above noisy images; the dictionary training results of the prior-art one-dimensional analysis model and the corresponding denoising results are given at the same time, to illustrate the effectiveness of the model and algorithm proposed in the present invention.
The denoising performance on the above images is measured mainly by the peak signal-to-noise ratio (PSNR), in decibels (dB), computed as follows:

$$PSNR = 10 \cdot \log_{10}\!\left(\frac{255^2}{MSE}\right) \qquad (8)$$

The mean squared error MSE between two images of size m × n is defined as:

$$MSE = \frac{1}{m \times n}\sum_{x=0}^{m-1}\sum_{y=0}^{n-1}\|I(x,y)-J(x,y)\|^2 \qquad (9)$$

where I and J denote the original noise-free image and the image reconstructed by the sparse coding method respectively, and I(x,y), J(x,y) are the pixel values at position (x,y). The smaller the mean squared error, the higher the PSNR and the better the denoising performance of the method.
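Formulas (8)-(9) translate directly into a short function (numpy; the tiny constant test image is illustrative):

```python
import numpy as np

def psnr(I, J):
    """PSNR in dB between an original image I and a reconstruction J,
    for 8-bit images (peak value 255), per formulas (8)-(9)."""
    mse = np.mean((I.astype(np.float64) - J.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

I = np.full((4, 4), 100.0)
J = I + 5.0                    # uniform error of 5 -> MSE = 25
print(round(psnr(I, J), 2))    # 10*log10(65025/25) ≈ 34.15
```

A uniform pixel error of 5 (the noise level used in the experiments) thus corresponds to roughly 34 dB, which matches the scale on which Tables 1 and 2 compare methods.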
Table 1 gives the denoising results in PSNR form. The first three columns are the experimental results of the present invention, with dictionary sizes 7 × 7, 8 × 7 and 9 × 7 respectively; the last column is the prior-art result with dictionary size 64 × 49. Compared with the prior-art method, the proposed two-dimensional model, while using smaller dictionaries, reaches denoising performance comparable to the one-dimensional model with its larger dictionary, and on 3 of the 5 images its denoising performance is even higher than the one-dimensional model (experimental data in boldface).
The present invention further compares the denoising results of the two-dimensional and one-dimensional analysis models when the storage sizes required by the dictionaries are close. In this test, the dictionary of the one-dimensional analysis model is 4 × 49, while the present invention gives the dictionary training result and denoising result for a dictionary size of 8 × 7; the results are shown in Table 2. When the size of the prior-art dictionary is comparable to that of the present invention, its denoising result falls far short of the result of the present invention. Clearly, at the same or equivalent dictionary size, the two-dimensional analysis model achieves a better denoising effect than the one-dimensional analysis model.
Table 1
Table 2
From the above results it can be seen that the two-dimensional analysis sparse model proposed in the present invention better exploits the two-dimensional correlation in images. At the same time, even when its dictionary is much smaller than the dictionary of the one-dimensional analysis model, the proposed model still reaches comparable denoising performance, and when the one-dimensional dictionary size is comparable to the two-dimensional dictionaries of the present invention, the denoising performance of the present invention is far superior to that of the one-dimensional model. This also demonstrates the storage efficiency of the dictionaries in the present invention.
Two-dimensional analysis model dictionary training algorithm embodiment
1. First construct the training sample set I
For a noisy image, randomly sample a number of image blocks, e.g. 7 × 7 blocks, and collect them into the training set, obtaining the training sample set {Y_j}_{j=1}^{M}, where Y_j denotes the j-th d_1 × d_1 = 7 × 7 image block sampled from the image, M_0 = M × d_1, and M denotes the number of image block samples; in this embodiment M = 20000.
2. Initialize the two dictionaries Ω_1, Ω_2
Using the pseudoinverse of a redundant discrete cosine transform (DCT) dictionary, initialize the dictionaries Ω_1, Ω_2; both dictionary sizes are set to 8 × 7.
3. Training of the dictionaries Ω_1, Ω_2: sparse coding
Using the algorithm described above, first obtain the tensor-generated dictionary Ω = Ω_2 ⊗ Ω_1 and rearrange each block of the original training set to obtain the new sample set {vec(Y_j)}_{j=1}^{M}, where vec(Y_j) denotes the result of rearranging the image block Y_j column by column and d = d_1 × d_1; then, for each column signal vec(Y_j), solve the reconstruction vec(X_j) with formula (5), restated as (10):

$$\{vec(\hat{X}_j)\}=\arg\min_{vec(X_j)}\|vec(Y_j)-vec(X_j)\|_2^2, \qquad (10)$$
$$\text{s.t. } \Omega_{\Lambda_j}vec(X_j)=0,\ \operatorname{Rank}(\Omega_{\Lambda_j})=d-r$$
4. Training of the dictionaries Ω_1, Ω_2: dictionary update
Taking the update of Ω_2 as an example, assume Ω_1 is given. From the reconstructions obtained above and the original image blocks, compute
U_1 = [(Ω_1 X_1)^T, (Ω_1 X_2)^T, …, (Ω_1 X_M)^T], V_1 = [(Ω_1 Y_1)^T, (Ω_1 Y_2)^T, …, (Ω_1 Y_M)^T].
The problem then reduces to the update process of Ω_2. However, updating Ω_2 directly is difficult; since Ω_2 is a dictionary for V_1, updating Ω_2 requires finding the submatrix of V_1 orthogonal to it. Therefore, first use U_1 to obtain the estimate of V_1; the objective function is (6), and the solution procedure essentially solves a problem similar to (10). Once the estimate of V_1 is obtained, find the index set J of the columns of V_1 orthogonal to the row primitive currently being estimated, take the corresponding submatrix U_J of U_1 under the index set J, and solve problem (7), restated as (11), to update the k-th row of the dictionary:

$$\{\hat{w}_k^{(2)}\}=\arg\min\|w_k^{(2)}U_J\|,\ \text{s.t. } \|w_k^{(2)}\|_2=1 \qquad (11)$$

Update every row of Ω_2 in turn; the computation for Ω_1 is the same.
5. Check the iteration stopping condition, e.g. whether the number of iterations has reached the upper limit L, or the noise error has reached a set bound. If the iteration condition is still satisfied, go back to step 3 and perform sparse coding and dictionary update again; when the iteration condition is no longer satisfied, output the dictionaries Ω_1, Ω_2. This completes the training of the dictionary.
Two-dimensional analysis model image denoising embodiment
1. Construct the image blocks to be solved from the noisy image.
Sample the known noisy image with 7 × 7 blocks, using overlapping sampling with overlap = 1. N blocks are sampled in total and arranged into a 7 × 7N matrix awaiting sparse reconstruction, giving the set Y to be reconstructed.
2. Use the two trained dictionaries Ω_1, Ω_2 to obtain the dictionary Ω required for one-dimensional sparse coding
Through Ω = Ω_1 ⊗ Ω_2, obtain the dictionary Ω generated by the tensor product.
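The separable 2-D dictionary reduces to a 1-D one through the Kronecker (tensor) product. Note the factor order depends on the vectorization convention: with column-wise vec, vec(Ω_1 X Ω_2^T) = (Ω_2 ⊗ Ω_1) vec(X); with row-wise vec the factors swap, matching the patent's Ω = Ω_1 ⊗ Ω_2. A minimal NumPy check of the column-wise identity:

```python
import numpy as np

rng = np.random.default_rng(2)
d1 = 4
Omega1 = rng.standard_normal((5, d1))   # horizontal analysis dictionary
Omega2 = rng.standard_normal((6, d1))   # vertical analysis dictionary
X = rng.standard_normal((d1, d1))

# Column-wise vectorization: vec(A X B^T) = (B kron A) vec(X).
vec = lambda M: M.reshape(-1, order="F")
lhs = vec(Omega1 @ X @ Omega2.T)
rhs = np.kron(Omega2, Omega1) @ vec(X)
print(np.allclose(lhs, rhs))            # True
```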
3. Use a traditional one-dimensional sparse reconstruction method to solve for the reconstructed value of Y.
This embodiment adopts a traditional one-dimensional sparse reconstruction algorithm, the optimal backward greedy algorithm (xBG). Each image block in Y above is rearranged column-wise; every column signal vec(Y_j) is reconstructed, yielding the reconstruction vec(X_j); then the inverse rearrangement is applied to [vec(X_1), vec(X_2), ..., vec(X_M)], giving the set of reconstructed image blocks back in image-block form.
4. Use the above N reconstructed image blocks to obtain the denoised image.
According to the sampling pattern and the overlap of the corresponding image blocks, the N reconstructed blocks are restored to the original image size; wherever blocks overlap, the values are averaged. That is, if a pixel is covered by m = 6 blocks simultaneously, its final value is the mean of the values that pixel takes over all blocks containing it. This recovers the reconstructed, denoised image.
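The accumulate-and-average recombination described above can be sketched as follows, assuming blocks are visited in the same row-major order used during sampling (function names are illustrative):

```python
import numpy as np

def assemble(blocks, img_shape, size=7, overlap=1):
    """Re-assemble overlapping blocks into an image, averaging overlaps.

    Accumulates per-pixel sums and coverage counts, then divides,
    so a pixel covered by m blocks receives the mean of its m values.
    """
    step = size - overlap
    acc = np.zeros(img_shape)
    cnt = np.zeros(img_shape)
    H, W = img_shape
    k = 0
    for i in range(0, H - size + 1, step):
        for j in range(0, W - size + 1, step):
            acc[i:i + size, j:j + size] += blocks[k]
            cnt[i:i + size, j:j + size] += 1
            k += 1
    return acc / np.maximum(cnt, 1)

# Round trip: assembling the original (noise-free) blocks recovers the image.
img = np.arange(19 * 19, dtype=float).reshape(19, 19)
blocks = [img[i:i + 7, j:j + 7] for i in range(0, 13, 6) for j in range(0, 13, 6)]
out = assemble(blocks, img.shape)
print(np.allclose(out, img))   # True
```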
The above is only a preferred embodiment of the present invention and does not limit the present invention in any form. Any simple modification, equivalent variation, or alteration of the above embodiments made according to the technical spirit of the present invention still falls within the protection scope of the technical solution of the present invention.

Claims (4)

1. A two-dimensional analysis sparse model, characterized in that the model is formula (2):
{Ω̂_1, Ω̂_2, {X̂_j}_{j=1}^M} = arg min_{Ω_1, Ω_2, {X_j}_{j=1}^M} Σ_{j=1}^M ||X_j - Y_j||_F^2
s.t. ||Ω_1 X_j Ω_2^T||_0 ≤ p - l, ∀ 1 ≤ j ≤ M,    (2)
||w_i^(1)||_2 = 1, ∀ 1 ≤ i ≤ p_1,
||w_k^(2)||_2 = 1, ∀ 1 ≤ k ≤ p_2,
where p = p_1 × p_2; Ω_1 and Ω_2 are, respectively, the horizontal dictionary characterizing the horizontal properties of image block X_j and the vertical dictionary characterizing its vertical properties; p_1, p_2 are the numbers of row vectors of Ω_1 and Ω_2, respectively; w_i^(1) and w_k^(2) are, respectively, the i-th row vector of Ω_1 and the k-th row vector of Ω_2; l is the number of zero elements in the sparse signal; M is the number of image blocks in the sample set; and Y_j is the j-th image block in the sample set, i.e. X_j after noise is added.
2. A dictionary training method based on the two-dimensional analysis sparse model, characterized by comprising the following steps:
(1) Construct the training sample set I: for a noisy image, randomly sample a number of image blocks from the image and combine them into the training sample set, obtaining training set {Y_j}_{j=1}^M, where Y_j ∈ ℝ^{d_1 × d_1} denotes the j-th image block sampled from the image, ℝ denotes the real field, the block dimension is d_1, M_0 = M × d_1, and M denotes the number of image block samples;
(2) Initialize the two dictionaries Ω_1, Ω_2: initialize them with the pseudoinverse of a redundant discrete cosine transform dictionary;
(3) Sparse coding: first obtain the dictionary Ω generated by the tensor product, Ω = Ω_1 ⊗ Ω_2; rearrange each block in the training sample set to obtain the new sample set {vec(Y_j)}_{j=1}^M ⊂ ℝ^d, where vec(Y_j) denotes the column-wise rearrangement of image block Y_j and d = d_1 × d_1; then, for each column signal vec(Y_j), use formula (5) to solve for the reconstruction vec(X_j):

{vec(X̂_j)} = arg min_{vec(X_j)} ||vec(Y_j) - vec(X_j)||_2^2,    (5)
s.t. Ω_{Λ_j} vec(X_j) = 0,  Rank(Ω_{Λ_j}) = d - r
where r is the dimension of the subspace to which the signal belongs;
(4) Dictionary update: to update Ω_2, suppose Ω_1 is given, and use U_1 = [(Ω_1 X_1)^T, (Ω_1 X_2)^T, ..., (Ω_1 X_M)^T] to obtain an estimate of V_1, the objective function being formula (6); once the estimate of V_1 is obtained, find the index set J of the rows of V_1 orthogonal to the dictionary row currently being estimated, take the corresponding submatrix of U_1 under index set J, and solve formula (7) to update the k-th row of dictionary Ω_2; the update of Ω_1 is identical:
{Ω̂_2, {v̂_j}_{j=1}^N} = arg min_{Ω_2, {v_j}_{j=1}^N} Σ_{j=1}^N ||u_j - v_j||_2^2
s.t. Ω_{2,Λ_j} v_j = 0, ∀ 1 ≤ j ≤ N,
Rank(Ω_{2,Λ_j}) = d_1 - r_2,
||w_k^(2)||_2 = 1, ∀ 1 ≤ k ≤ p_2    (6)

{w_k^(2)} = arg min ||w_k^(2) U_J||,  s.t. ||w_k^(2)||_2 = 1    (7);
(5) Check the iteration condition: if the condition to continue iterating is met, return to step (3); otherwise, output the dictionaries Ω_1, Ω_2, completing the training of the dictionaries.
3. The dictionary training method based on the two-dimensional analysis sparse model according to claim 2, characterized in that the iteration condition is whether the number of iterations reaches the upper limit L or whether the noise error reaches a specified value.
4. An image denoising method based on the two-dimensional analysis sparse model and its trained dictionaries, characterized by comprising the following steps:
(1) construct the image blocks to be solved from the noisy image;
(2) use the two trained dictionaries Ω_1, Ω_2 to obtain the dictionary Ω required for one-dimensional sparse coding, where Ω = Ω_1 ⊗ Ω_2;
(3) use a one-dimensional sparse reconstruction method to solve for the reconstructed value of Y;
(4) use the above N reconstructed image blocks to obtain the denoised image.
CN201310233516.6A 2013-06-13 2013-06-13 A kind of two-dimension analysis sparse model, its dictionary training method and image de-noising method Active CN103279959B (en)
Publications (2)

Publication Number Publication Date
CN103279959A CN103279959A (en) 2013-09-04
CN103279959B true CN103279959B (en) 2016-02-10