CN110138479B - Spectrum sensing method based on dictionary learning under extremely low signal-to-noise ratio environment - Google Patents


Info

Publication number
CN110138479B
CN110138479B
Authority
CN
China
Prior art keywords
signal
spectrum sensing
dictionary
noise
sparse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910477917.3A
Other languages
Chinese (zh)
Other versions
CN110138479A (en
Inventor
高玉龙 (Gao Yulong)
陈艳平 (Chen Yanping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201910477917.3A priority Critical patent/CN110138479B/en
Publication of CN110138479A publication Critical patent/CN110138479A/en
Application granted granted Critical
Publication of CN110138479B publication Critical patent/CN110138479B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/28Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B17/00Monitoring; Testing
    • H04B17/30Monitoring; Testing of propagation channels
    • H04B17/382Monitoring; Testing of propagation channels for resource allocation, admission control or handover


Abstract

The invention discloses a spectrum sensing method based on dictionary learning in an extremely low signal-to-noise ratio environment, relating to spectrum sensing in cognitive radio. The invention aims to solve the low spectrum sensing accuracy of existing methods in extremely low signal-to-noise ratio environments. The process is as follows: first, a binary hypothesis model for spectrum sensing is established; second, a training set for dictionary learning is formed; third, a dictionary is trained; fourth, the inner product of the spectrum sensing signal with each column of the trained dictionary is calculated and the position of the maximum value of the inner products is found; fifth, the index set and the atom set are updated and the maximum component is obtained by the least square method; sixth, the square of the obtained maximum component is calculated to obtain the energy corresponding to the maximum component; seventh, a sensing threshold is calculated from the false alarm probability formula; eighth, the energy corresponding to the maximum component is compared with the threshold to judge whether a signal exists: if the energy is larger than the threshold, a signal is present; otherwise it is absent. The invention is used in the field of spectrum sensing in cognitive radio.

Description

Spectrum sensing method based on dictionary learning under extremely low signal-to-noise ratio environment
Technical Field
The invention relates to a frequency spectrum sensing method in cognitive radio.
Background
In order to address the imbalance in spectrum utilization and improve spectrum efficiency, cognitive radio technology has been proposed. In cognitive radio, spectrum sensing is the most basic and important link, and is a precondition for analyzing the radio environment. Current spectrum sensing methods mainly comprise energy detection, matched-filter sensing, cyclostationary-feature spectrum sensing, eigenvalue-based spectrum sensing and the like. However, the performance of these methods in extremely low signal-to-noise ratio environments remains to be improved.
The learning dictionary theory plays a significant role in signal sparse decomposition. Its advantage is that near-optimal representations can be obtained for real signals; unlike an analysis dictionary, a learning dictionary is not derived from a fixed internal structure and construction method but is trained from data. Current dictionary learning algorithms mainly comprise the method of optimal directions (the MOD algorithm), the K-SVD algorithm, recursive least squares dictionary learning (RLS-DLA), online dictionary learning (ODL) and the like. The MOD and K-SVD algorithms are the most widely used at present.
The MOD algorithm is introduced first:
the MOD algorithm was first proposed by engind et al and was one of the first algorithms applied to training signal sparse representation learning dictionaries. Suppose a signal xiBelonging to the space omega, for a set of known training signals
Figure BDA0002082888460000011
The final goal of the MOD algorithm is to find a trained dictionary DoptAnd a training signal in the dictionary DoptSparse representation matrix of
Figure BDA0002082888460000012
Minimizing sparse representation errors of training signals on the dictionary
Figure BDA0002082888460000013
where $\alpha_i$ is the $i$-th column of the matrix $\Gamma_{\mathrm{opt}}$ and $\|\alpha_i\|_0\le K$ bounds the number of nonzero values in the sparse decomposition. Solving this is an NP-hard problem, similar to the problems addressed by MP-type algorithms. Since the problem is non-convex, at best a local minimum can be obtained. Like other dictionary training methods, the MOD algorithm is an iterative process that alternates between sparse coding and dictionary updating. Sparse coding encodes all training signals with a sparse representation algorithm such as matching pursuit. The dictionary update solves the equation

$$D_k=X\Gamma_k^{+}=X\Gamma_k^{T}\left(\Gamma_k\Gamma_k^{T}\right)^{-1}$$

(where $\Gamma_k^{+}$ is the pseudo-inverse of $\Gamma_k$), which solves the problem of equation (4) for a fixed sparse representation matrix.
The main principle of the MOD algorithm is to update $D$ as a whole matrix; the process mainly includes two steps: sparse coding of the training signals and updating of the learning dictionary.
During initialization, a dictionary matrix is generated arbitrarily (it may be a random matrix; in general, a DCT dictionary is used) and normalized by columns, giving the initial dictionary matrix

$$D_0,\qquad d_j\leftarrow d_j/\|d_j\|_2.$$
The main iterative process is as follows:
(1) sparse coding:
In this stage the dictionary $D_{k-1}$ is held fixed and, under the constraint that the signal sparsity is $K$, the sparse approximation $\alpha_i$ of each signal $x_i$ on the dictionary $D_{k-1}$ is solved, i.e.

$$\alpha_i=\arg\min_{\alpha}\|x_i-D_{k-1}\alpha\|_2^2\quad\text{s.t. }\|\alpha\|_0\le K \tag{5}$$

where $1\le i\le N$ indexes the training samples and $K$, the number of cycles of the sparse decomposition algorithm, equals the sparsity of the training signals. Arranging all sparse representation vectors side by side yields the sparse representation matrix $\Gamma_k$.
(2) And (3) dictionary updating:
Taking the sparse representation matrix $\Gamma_k$ obtained by sparse coding as known, the dictionary $D$ is updated by solving the following objective function, i.e.

$$D_k=\arg\min_{D}\|X-D\Gamma_k\|_F^2=X\Gamma_k^{T}\left(\Gamma_k\Gamma_k^{T}\right)^{-1} \tag{6}$$
(3) Stopping the iteration condition:
Using the sparse representation matrix $\Gamma_k$ and dictionary $D_k$ obtained in the previous two steps, compute the residual $\|X-D_k\Gamma_k\|_F^2$. If the following condition is satisfied,

$$\|X-D_k\Gamma_k\|_F^2\le\varepsilon \tag{7}$$

or the cycle number reaches a predetermined value, i.e. $k\ge K_0$, iteration stops and the trained dictionary $D_{\mathrm{opt}}$ is output; otherwise, the iteration continues with the previous two steps.
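The MOD iteration described above (pursuit-based sparse coding alternating with the pseudo-inverse dictionary update) can be sketched in numpy as follows. This is a minimal illustration, not the patent's implementation: the function names `omp` and `mod` are mine, and the dead-atom replacement and column re-normalization are standard implementation choices.

```python
import numpy as np

def omp(D, x, K):
    """Orthogonal matching pursuit: K-sparse coding of x on dictionary D."""
    alpha = np.zeros(D.shape[1])
    residual, support = x.astype(float).copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    alpha[support] = coef
    return alpha

def mod(X, n_atoms, K, n_iter=10, eps=1e-10):
    """MOD dictionary learning (eqs. (4)-(7)): alternate sparse coding with
    the pseudo-inverse dictionary update D = X @ pinv(Gamma)."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)               # column-normalized initial dictionary
    Gamma = np.zeros((n_atoms, X.shape[1]))
    for _ in range(n_iter):
        Gamma = np.column_stack([omp(D, x, K) for x in X.T])  # sparse coding (eq. 5)
        D = X @ np.linalg.pinv(Gamma)                         # dictionary update (eq. 6)
        norms = np.linalg.norm(D, axis=0)
        dead = norms < 1e-10                                  # re-draw unused atoms
        if dead.any():
            D[:, dead] = rng.standard_normal((X.shape[0], int(dead.sum())))
            norms = np.linalg.norm(D, axis=0)
        D /= norms
        Gamma *= norms[:, None]                               # keep D @ Gamma unchanged
        if np.linalg.norm(X - D @ Gamma) ** 2 <= eps:         # stopping criterion (eq. 7)
            break
    return D, Gamma
```

A typical use trains on signals that are known to be sparse over some unknown dictionary and checks that the representation error shrinks.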
The K-SVD algorithm is described below:
In order to train a learning dictionary that represents sparse signals more accurately, researchers proposed the K-SVD algorithm on the basis of the MOD algorithm. The K-SVD algorithm is similar in concept to the MOD algorithm. Its main innovation lies in the second part of the algorithm: each atom of the dictionary is processed separately through an exact singular value decomposition, rather than inverting and updating the matrix as a whole. This makes the algorithm more stable and efficient.
The name of the K-SVD algorithm comes from the main step of its dictionary atom update: the singular value decomposition is repeated $n$ times, once per dictionary atom, according to the number $n$ of atoms in the dictionary. The quadratic objective in equation (4) can be written as:

$$\|X-D\Gamma\|_F^2=\Big\|X-\sum_{j=1}^{n}d_j\gamma_T^j\Big\|_F^2=\big\|E_m-d_m\gamma_T^m\big\|_F^2 \tag{8}$$
where $\gamma_T^m$ is the $m$-th row vector of $\Gamma$ and $E_m$ is the residual matrix after the contribution $d_m\gamma_T^m$ of the $m$-th atom is removed. To minimize the error of equation (8), $d_m\gamma_T^m$ should be as close as possible to $E_m$; therefore the rank-1 factors obtained by singular value decomposition of $E_m$ are assigned to $d_m$ and $\gamma_T^m$, reducing the error of equation (8) to the maximum extent. In addition, in order to avoid introducing non-zero coefficients at positions that were previously zero, only the data set indicated by the non-zero positions of $\gamma_T^m$ can be used in the update.
As can be seen from the foregoing description, the principle of the K-SVD algorithm is similar to that of the MOD algorithm: both are divided into the two stages of sparse representation and dictionary update. The difference is that the dictionary update of K-SVD does not invert the coefficient matrix as in MOD, but updates all atoms in the dictionary in sequence via a singular value decomposition. During initialization, a dictionary matrix is generated arbitrarily (it may be a random matrix; in general, a DCT dictionary is used) and normalized by columns, giving the initial dictionary matrix $D_0$.
The main iterative process is as follows:
(1) sparse coding:
In this stage the dictionary $D_{k-1}$ is held fixed and, under the constraint that the signal sparsity is $K$, the sparse approximation $\alpha_i$ of each signal $x_i$ on the dictionary $D_{k-1}$ is solved, namely:

$$\alpha_i=\arg\min_{\alpha}\|x_i-D_{k-1}\alpha\|_2^2\quad\text{s.t. }\|\alpha\|_0\le K \tag{9}$$

where $1\le i\le N$ indexes the training samples and $K$ is the signal sparsity, i.e. the sparsity of the training signals; $\|\alpha_i\|_0\le K$ bounds the number of non-zero values in the sparse decomposition. Arranging all sparse representation vectors side by side yields the sparse representation matrix $\Gamma_k$.
(2) And (3) dictionary updating:
let dmIs the m-th atom in the dictionary to be updated, i.e. dictionary Dk-1The mth column vector in (1), where the representation error of the dictionary for the training sample is:
Figure BDA00020828884600000310
wherein the content of the first and second substances,
Figure BDA00020828884600000311
representing the matrix Γ for sparsenesskJ is 1,2 … m, and T is the transpose; djFor the jth atom in the dictionary,
Figure BDA00020828884600000312
representing the matrix Γ for sparsenesskThe m-th row vector of (1);
Let

$$E_m=X-\sum_{j\ne m}d_j\gamma_T^j \tag{11}$$

denote the error contributed by the rest of the dictionary after the $m$-th atom is removed;
To update $d_m$ and $\gamma_T^m$ by SVD without destroying the sparsity pattern, the following restriction is defined:

$$\gamma_R^m=\gamma_T^m\Omega_m,\qquad X^R=X\Omega_m,\qquad E_m^R=E_m\Omega_m \tag{12}$$

where $\Omega_m$ is an intermediate variable, the selection matrix that keeps only the training samples on which the $m$-th atom is active, and $\gamma_R^m$, $X^R$, $E_m^R$ are respectively $\gamma_T^m$, $X$, $E_m$ shrunk to the non-zero support after the zero entries are removed. The objective can thus be expressed as:

$$\big\|E_m\Omega_m-d_m\gamma_T^m\Omega_m\big\|_F^2=\big\|E_m^R-d_m\gamma_R^m\big\|_F^2 \tag{13}$$
Decomposing $E_m^R$ by SVD gives:

$$E_m^R=U\Delta V^{T} \tag{14}$$

where $U$, $\Delta$ and $V$ are matrices ($U$ and $V$ orthogonal, $\Delta$ diagonal with singular values in decreasing order);
The first column $u_1$ of the matrix $U$ updates $d_m$, and $\Delta(1,1)\cdot v_1$ updates $\gamma_R^m$, namely

$$d_m=u_1,\qquad \gamma_R^m=\Delta(1,1)\cdot v_1 \tag{15}$$

where $\Delta(1,1)$ is the first element (largest singular value) of the matrix $\Delta$, $v_1$ is the first column of the matrix $V$, and $u_1$ is the first column of the matrix $U$. All atoms in the dictionary are updated in sequence in this way.
(3) Stopping the iteration condition:
Using the dictionary $D_k$ and sparse representation matrix $\Gamma_k$ obtained in the previous two steps, compute the residual $\|X-D_k\Gamma_k\|_F^2$. If the following condition is satisfied,

$$\|X-D_k\Gamma_k\|_F^2\le\varepsilon$$

or the cycle number reaches a predetermined value, i.e. $k\ge K_0$, iteration stops and the trained dictionary $D_{\mathrm{opt}}$ is output; otherwise, the previous two steps are repeated; $\varepsilon$ is the threshold value.
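The per-atom K-SVD update of equations (10)-(15) can be sketched as a single sweep in numpy. This is a hedged illustration (the function name is mine): the restriction $\Omega_m$ is realized by column indexing, and sparse coding (equation (9)) is assumed to have been done already.

```python
import numpy as np

def ksvd_update(X, D, Gamma):
    """One K-SVD dictionary-update sweep (eqs. (10)-(15)): each atom d_m and the
    nonzero part of coefficient row m are refit by a rank-1 SVD of the restricted
    residual E_m^R."""
    for m in range(D.shape[1]):
        omega = np.flatnonzero(Gamma[m, :])        # support of row m (Omega_m)
        if omega.size == 0:                        # atom unused this round: skip
            continue
        # restricted residual E_m^R = (X - D Gamma + d_m gamma_T^m) on omega, eqs. (11)-(12)
        E_R = X[:, omega] - D @ Gamma[:, omega] + np.outer(D[:, m], Gamma[m, omega])
        U, s, Vt = np.linalg.svd(E_R, full_matrices=False)
        D[:, m] = U[:, 0]                          # d_m <- u_1              (eq. 15)
        Gamma[m, omega] = s[0] * Vt[0, :]          # gamma_R^m <- Delta(1,1) * v_1
    return D, Gamma
```

Because each rank-1 refit minimizes the restricted error while the previous atom/coefficient pair remains feasible, one sweep never increases the total representation error, which is what makes K-SVD stable.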
Disclosure of Invention
The invention aims to solve the problem of low spectrum sensing accuracy of existing methods in extremely low signal-to-noise ratio environments, and provides a spectrum sensing method based on dictionary learning for such environments.
The spectrum sensing method based on dictionary learning under the environment with extremely low signal-to-noise ratio comprises the following specific processes:
step one, establishing a spectrum sensing binary hypothesis model;
Step two, collecting N groups of spectrum sensing signals to form a training set for dictionary learning, $X=\{x_i\}_{i=1}^{N}$;
Step three, training a dictionary $D_{\mathrm{opt}}$ by utilizing the K-SVD dictionary learning algorithm and a sparsity-1 constraint;
Step four, calculating the inner product $g[i]=\langle x,d_i\rangle$ of the spectrum sensing signal with each column $d_i$ of the trained dictionary $D_{\mathrm{opt}}$, and finding the position of the maximum value of the inner products, i.e. $k=\arg\max_i |g[i]|$;
Step five, updating the index set $\Gamma=\{k\}$ and the atom set $D_\Gamma=\{d_k\}$, and obtaining the maximum component by the least square method,

$$\alpha_1=\left(D_\Gamma^{T}D_\Gamma\right)^{-1}D_\Gamma^{T}x,$$

where $x$ is the spectrum sensing signal vector received by the sensing user and $d_k$ is the $k$-th atom in the dictionary;
Step six, squaring the obtained maximum component $\alpha_1$ to obtain the energy corresponding to the maximum component;
step seven, setting the needed false alarm probability, and calculating a perception threshold lambda according to a false alarm probability formula;
And step eight, comparing the energy corresponding to the maximum component obtained in step six with the threshold calculated in step seven to judge whether the signal exists: if the energy is greater than the threshold, the signal is present; otherwise, it is absent.
The invention has the beneficial effects that:
the invention discloses a spectrum sensing method under the condition of extremely low signal-to-noise ratio, and particularly discloses a spectrum sensing method based on dictionary learning and sparse decomposition. Firstly, training a dictionary by collecting training signals and utilizing a dictionary learning K-SVD method, and increasing the constraint of sparsity of 1 in order to concentrate more signals on one signal sparse decomposition component when training the dictionary. And then, decomposing the spectrum sensing signal by using the spectrum sensing signal and the trained dictionary, wherein the decomposition is carried out by adopting the first iteration of an OMP algorithm. And utilizing the square of the component obtained by sparse decomposition as the test statistic of spectrum sensing. And finally, obtaining a spectrum sensing threshold by utilizing a Newman Pearson criterion and fixing the false alarm probability, and comparing the spectrum sensing threshold with the test statistic to obtain a spectrum sensing result. The invention discloses a dictionary learning method under the constraint of sparsity of 1 for sparse decomposition of signals, which concentrates more signal energy on a signal sparse decomposition component and realizes spectrum sensing in an environment with extremely low signal-to-noise ratio. Simulation results show that the method provided by the invention can realize accurate spectrum sensing under the condition of an extremely low signal-to-noise ratio with the signal-to-noise ratio of-26 dB, the spectrum sensing accuracy under the environment with the extremely low signal-to-noise ratio is improved, and the detection probability and the ROC performance of the method have excellent performances under the conditions of different signal-to-noise ratios.
Drawings
FIG. 1 is a signal sparse decomposition error graph obtained by using different dictionary learning methods, and it can be seen that the performance of the K-SVD method is the best, so the K-SVD is used as a dictionary learning algorithm in the invention;
FIG. 2 is a graph of the detection probability and false alarm probability of the proposed method under different SNRs, showing that the method of the present invention has significant advantages over conventional methods and that the detection performance still meets the FCC requirement at an SNR of -26 dB; Pd-SR1withDL and Pf-SR1withDL are the present invention, the others are the prior art;
FIG. 3 shows ROC curves of the method of the present invention and traditional spectrum sensing under different SNRs; the performance of the proposed method is clearly superior to the traditional methods; SR1withDL (-35dB) and SR1withDL (-15dB) are the present invention, the others are the prior art.
Detailed Description
The first embodiment is as follows: the spectrum sensing method based on dictionary learning in the environment with extremely low signal-to-noise ratio comprises the following specific processes:
In spectrum sensing, a signal-to-noise ratio lower than -15 dB is regarded as extremely low;
the invention relates to a spectrum sensing method in cognitive radio, in particular to a dictionary learning method in sparse representation, which is used for learning a dictionary in sparse representation of signals, carrying out sparse decomposition on the signals according to the learned dictionary to obtain a maximum component of the sparse decomposition, and carrying out spectrum sensing by using the square of the maximum component as test statistic.
The method provided by the invention obtains the training dictionary of the signal sparse representation by utilizing a dictionary learning method and a condition that sparsity constraint is 1, so that the energy of the signal is concentrated on one sparse representation component, and the signal-to-noise ratio of the sparse representation component is improved to the maximum extent. Meanwhile, the square of the signal sparse representation component is selected as the test statistic, and the spectrum sensing performance under the extremely low signal-to-noise ratio is improved.
Assume the training set of signals is

$$X=\{x_i\}_{i=1}^{N}.$$

Then any one of the signals in the set can be sparsely decomposed, formulated as

$$\alpha_i=\arg\min_{\alpha}\|x_i-D\alpha\|_2^2\quad\text{s.t. }\|\alpha\|_0\le K$$
where $D\in\mathbb{R}^{m\times n}$ is the dictionary for sparse signal decomposition and $\|\alpha_i\|_0$ is the $\ell_0$ norm of the coefficient vector $\alpha_i$, i.e. the number of non-zero terms in the coefficients. Since $\ell_0$-norm optimization is an NP-hard problem, it is often converted to the $\ell_1$-norm optimization problem

$$\alpha_i=\arg\min_{\alpha}\|\alpha\|_1\quad\text{s.t. }\|x_i-D\alpha\|_2\le\varepsilon$$
According to the signal sparse decomposition principle, once a sparse decomposition algorithm is chosen, the sparse decomposition performance depends on the choice of dictionary. In general, a dictionary can be obtained in two ways: one is the common orthogonal bases and the analysis dictionaries derived from them, such as FFT bases, DCT dictionaries and the like; the other is a learning dictionary, also called a training dictionary, obtained by a dictionary learning algorithm from a series of known signals. Thus, sparse decomposition of the signal with dictionary learning can be expressed as

$$\{D_{\mathrm{opt}},\Gamma_{\mathrm{opt}}\}=\arg\min_{D,\Gamma}\|X-D\Gamma\|_F^2\quad\text{s.t. }\|\alpha_i\|_0\le K$$
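The $\ell_1$-relaxed problem above is commonly solved by iterative shrinkage. A minimal ISTA sketch is given below under the (assumed) Lagrangian form $\min_\alpha \tfrac12\|x-D\alpha\|_2^2+\mu\|\alpha\|_1$; the function name and parameter values are illustrative, not from the patent.

```python
import numpy as np

def ista(D, x, mu=0.1, n_iter=500):
    """Iterative shrinkage-thresholding for the l1-relaxed sparse coding problem."""
    L = np.linalg.norm(D, 2) ** 2                  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L              # gradient step on the data term
        a = np.sign(z) * np.maximum(np.abs(z) - mu / L, 0.0)  # soft threshold
    return a
```

For an orthonormal dictionary the fixed point is simply the soft-thresholded analysis coefficients, which makes the routine easy to sanity-check.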
Step one, establishing a spectrum sensing binary hypothesis model;
Step two, collecting N groups of spectrum sensing signals to form a training set for dictionary learning, $X=\{x_i\}_{i=1}^{N}$;
Step three, training a dictionary $D_{\mathrm{opt}}$ by utilizing the K-SVD dictionary learning algorithm (formulas (9)-(15)) and a sparsity-1 constraint;
Step four, calculating the inner product $g[i]=\langle x,d_i\rangle$ of the spectrum sensing signal with each column $d_i$ of the trained dictionary $D_{\mathrm{opt}}$, and finding the position of the maximum value of the inner products, i.e. $k=\arg\max_i |g[i]|$;
Step five, updating the index set $\Gamma=\{k\}$ and the atom set $D_\Gamma=\{d_k\}$, and obtaining the maximum component by the least square method,

$$\alpha_1=\left(D_\Gamma^{T}D_\Gamma\right)^{-1}D_\Gamma^{T}x,$$

where $x$ is the spectrum sensing signal vector received by the sensing user and $d_k$ is the $k$-th atom in the dictionary;
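Steps four and five amount to the first iteration of OMP; since the atoms are unit-norm, the least-squares fit on the single selected atom reduces to the inner product $d_k^{T}x$. A minimal numpy sketch (the function name is an assumption):

```python
import numpy as np

def max_component(x, D):
    """First OMP iteration against a trained, column-normalized dictionary:
    returns the largest sparse component alpha_1 and the matched atom index k."""
    g = D.T @ x                                        # step 4: inner products g[i] = <x, d_i>
    k = int(np.argmax(np.abs(g)))                      #         position of the maximum
    d_k = D[:, k]                                      # step 5: atom set D_Gamma = {d_k}
    alpha_1 = float(d_k @ x / (d_k @ d_k))             # least-squares fit on the single atom
    return alpha_1, k
```

For a unit-norm atom the denominator is 1, so `alpha_1` equals the maximal inner product itself.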
the sparsity K is fixed in the original signal sparse decomposition, but the original signal sparse decomposition aims to recover the signal, and the spectrum sensing aims to detect whether the signal exists and does not care about the specific waveform of the signal. Therefore, in the proposed spectrum sensing method, the sparsity K is also considered as an optimized parameter, so as to improve the signal-to-noise ratio of the test statistic and optimize the performance of spectrum sensing.
The effect of sparsity on the signal-to-noise ratio is now illustrated with one signal. First, the signal and noise sparse representation components obtained according to equation (17) are arranged in descending order as $\alpha=[\alpha_1,\alpha_2,\dots,\alpha_i,\dots,\alpha_N]$ and $\beta=[\beta_1,\beta_2,\dots,\beta_i,\dots,\beta_N]$, where $\alpha_1$ and $\beta_1$ are the maximum sparse representation components of the signal and the noise respectively;
Defining the signal-to-noise ratio of the sparsely represented signal as

$$\mathrm{SNR}=\frac{\sum_{i=1}^{N}\alpha_i^2}{\sum_{i=1}^{N}\beta_i^2} \tag{19}$$
Now analyze each component of the noise sparse representation. Since the noise follows a Gaussian distribution with zero mean, the $i$-th component of the noise sparse representation is

$$\beta_i=\langle n,d_i\rangle=\sum_{l=1}^{M}n_l d_{li} \tag{20}$$

where $n$ is additive white Gaussian noise, $d_i$ is the $i$-th atom in the dictionary, $n_l$ is the $l$-th element of the additive white Gaussian noise, $d_{li}$ is the $l$-th element of the $i$-th atom of the dictionary, and $M$ is the number of elements in the additive white Gaussian noise;
The mean and variance of $\beta_i$ are respectively

$$E[\beta_i]=\sum_{l=1}^{M}E[n_l]\,d_{li}=0 \tag{21}$$

$$\mathrm{Var}[\beta_i]=\sum_{l=1}^{M}d_{li}^2\,\sigma^2=\sigma^2\|d_i\|_2^2=\sigma^2 \tag{22}$$
Observing equations (21) and (22), the mean of each component of the noise sparse representation is zero and the variance is $\sigma^2$. According to random signal theory, for zero-mean noise the variance equals the power;
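The zero-mean, variance-$\sigma^2$ property of equations (21) and (22) is easy to verify numerically by projecting white Gaussian noise onto a unit-norm atom; this is a minimal sanity-check sketch with illustrative sizes, not part of the patent.

```python
import numpy as np

rng = np.random.default_rng(42)
M, sigma, trials = 64, 1.5, 200_000          # illustrative dimensions and noise level
d = rng.standard_normal(M)
d /= np.linalg.norm(d)                        # one unit-norm dictionary atom
noise = sigma * rng.standard_normal((trials, M))   # AWGN with variance sigma^2
beta = noise @ d                              # beta = <n, d> per trial, eq. (20)
print(beta.mean(), beta.var())                # empirically close to 0 and sigma^2
```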
therefore, the equations (21) and (22) can be substituted into the equation (19)
Figure BDA0002082888460000083
Analyzing the formula (24), and finding the total signal-to-noise ratio of the received signals after sparse representation as the signal-to-noise ratio of each sparse representation component
Figure BDA0002082888460000084
Average value of (a). According to the spectrum sensing method based on energy, the larger the signal-to-noise ratio is, the stronger the detection capability of the spectrum sensing algorithm is. The square of the largest component of the signal sparse representation is selected as the test statistic based on this principle, i.e.
Figure BDA0002082888460000085
The signal-to-noise ratio of the maximum sparse representation component is

$$\mathrm{SNR}_1=\frac{\alpha_1^2}{\sigma^2}$$
Among all components, the signal-to-noise ratio of the sparse representation maximum component is the largest, and therefore the detection performance is the best.
According to the above conclusion, the $D_{\mathrm{opt}}$ obtained by the dictionary learning algorithm can be used to sparsely decompose the signal, and the maximum value of the sparse decomposition $\Gamma_{\mathrm{opt}}$ is taken as the test statistic. Following this idea, if the sparsity is 1, the signal is sparsely decomposed into a single component and the signal energy is concentrated on that one component; the sparse decomposition applied to spectrum sensing is then expressed as

$$\alpha_{\mathrm{opt}}=\arg\min_{\alpha}\|x-D_{\mathrm{opt}}\alpha\|_2^2\quad\text{s.t. }\|\alpha\|_0\le 1$$
Through this operation, although the sparse decomposition error of the signal increases, more signal energy is concentrated on a single sparse decomposition component, which is very beneficial to improving spectrum sensing performance.
Step six, squaring the obtained maximum component $\alpha_1$ to obtain the energy corresponding to the maximum component;
step seven, setting the needed false alarm probability, and calculating the perception threshold lambda according to a false alarm probability formula (29);
And step eight, comparing the energy corresponding to the maximum component obtained in step six with the threshold calculated in step seven to judge whether the signal exists: if the energy is greater than the threshold, the signal is present; otherwise, it is absent.
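Steps four to eight above can be sketched end-to-end as follows. This is a minimal illustration under stated assumptions: `D` is a trained, column-normalized dictionary, `lam` is a threshold precomputed from the chosen false alarm probability (step seven), and the function name is mine.

```python
import numpy as np

def sense(x, D, lam):
    """Decide whether a primary-user signal is present in the received vector x."""
    g = D.T @ x                           # step 4: inner products with every atom
    k = int(np.argmax(np.abs(g)))         #         position of the maximum
    alpha_1 = float(D[:, k] @ x)          # step 5: least-squares max component (unit-norm atom)
    T = alpha_1 ** 2                      # step 6: energy of the maximum component
    return T > lam                        # step 8: compare against the threshold
```

With a signal aligned to one atom, the statistic concentrates the signal energy on a single component, so the decision separates the two hypotheses cleanly even for modest thresholds.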
The second embodiment is as follows: this embodiment differs from the first embodiment in that step one establishes the spectrum sensing binary hypothesis model; the specific process is as follows:
The invention adopts a binary hypothesis model of spectrum sensing; the spectrum sensing signal received by each sensing user has the form

$$x=\begin{cases}n, & H_0\\ s+n, & H_1\end{cases} \tag{16}$$

where $x$ is the spectrum sensing signal vector received by the sensing user and $n$ is additive white Gaussian noise following the normal distribution $N(0,\sigma_n^2)$ with mean 0 and variance $\sigma_n^2$; $H_0$ represents the case where the primary user does not occupy the detected band resource, and $H_1$ the case where the primary user is occupying the detected frequency band; $s$ denotes the noise-free (pure) signal;
According to the sparse decomposition theory, the binary hypothesis model of spectrum sensing is expressed as

$$x=\begin{cases}D\beta+R_n, & H_0\\ D\alpha+D\beta+R_s+R_n, & H_1\end{cases} \tag{17}$$

where $D$ is the sparse decomposition dictionary, $\alpha$ and $\beta$ are the sparse decomposition coefficient vectors of the signal and the noise under the dictionary $D$ respectively, and $R_s$ and $R_n$ are the residuals of the signal and noise sparse decompositions; when the dictionary is orthogonal, $R_s$ and $R_n$ equal zero.
Other steps and parameters are the same as those in the first embodiment.
The third concrete implementation mode: this embodiment differs from the first or second embodiment in that, in step two, N groups of spectrum sensing signals are collected to form a training set for dictionary learning, $X=\{x_i\}_{i=1}^{N}$.
The specific process is as follows:
Assuming that the number of spectrum sensing signals received by the user is N, the N spectrum sensing signals form a training set for dictionary learning

$$X=\{x_i\}_{i=1}^{N}\subset\mathbb{R}^{m}$$

where $x_N$ is the N-th spectrum sensing signal vector received by the sensing user, $\mathbb{R}$ is the real number field, and $m$ is the number of rows of the training set $X$;
A trained dictionary $D_{\mathrm{opt}}$ and a sparse representation matrix $\Gamma_{\mathrm{opt}}$ of the training signals on $D_{\mathrm{opt}}$ are found such that the sparse representation error of the training signals on the dictionary is minimized; the sparse decomposition of the signal with the dictionary learning process can then be expressed as

$$\{D_{\mathrm{opt}},\Gamma_{\mathrm{opt}}\}=\arg\min_{D,\Gamma}\|X-D\Gamma\|_F^2\quad\text{s.t. }\|\alpha_i\|_0\le K,\ \ i=1,2,\dots,N$$

where $\alpha_i$, the $i$-th column of the matrix $\Gamma_{\mathrm{opt}}$, is a coefficient vector; $\|\alpha_i\|_0$ is the $\ell_0$ norm of the coefficient vector $\alpha_i$, i.e. the number of non-zero terms in the coefficients; $n$ is the number of atoms (columns) of the trained dictionary $D_{\mathrm{opt}}$; $K$ is the cycle number of the sparse decomposition algorithm, i.e. the sparsity of the training signals; $\|\alpha_i\|_0\le K$ bounds the number of nonzero values in the sparse decomposition; and $1\le i\le N$, with N the number of training samples.
Other steps and parameters are the same as those in the first or second embodiment.
The fourth concrete implementation mode: the difference between this embodiment and one of the first to third embodiments is that, in the seventh step, a required false alarm probability is set, and a sensing threshold λ is calculated according to a false alarm probability formula (29); the specific process is as follows:
the following analysis proposes the performance of the spectrum sensing method, mainly including the detection probability and the false alarm probability. Due to the dictionary DoptThe atoms in (a) have been normalized so that the resulting maximum component and its corresponding time domain energy are equal.
Under $H_1$, the binary hypothesis model of spectrum sensing is expressed as

$$x=D\alpha+D\beta+R=x'+R=Du+R \tag{25}$$

where $R$ is the remaining component of $x$, $x'=D\alpha+D\beta$ is an intermediate variable, $D$ is the sparse decomposition dictionary, and $u=\alpha+\beta$ is an intermediate variable;
Since $x'=Du$ is what is obtained in actual operation, $x'$ contains both signal and noise; the noise can be expressed as

$$n_u=x'-s \tag{26}$$

where $s$ denotes the noise-free (pure) signal;
For convenience, it is further assumed that the signal power and the noise power are $P_s$ and $P_u$ respectively, and the signal and the noise are normalized by the standard deviation of the noise, $\sqrt{P_u}$. Therefore, from the time-domain perspective, the test statistic provided by the method of the present invention is

$$T=\alpha_1^2$$
When no signal exists, this statistic follows the central chi-square distribution, and when a signal exists, it follows the non-central chi-square distribution. The corresponding probability density functions in the two cases are
f(r | H_0) = (1 / (2^m Γ(m))) · r^(m−1) · e^(−r/2)
f(r | H_1) = (1/2) · (r / (2γ))^((m−1)/2) · e^(−(2γ+r)/2) · I_{m−1}(√(2γr))   (27)
where γ = E[α_i²]/σ² is an intermediate variable representing the signal-to-noise ratio of each sparse representation component; E[·] denotes taking the average value; σ² is the variance; α_i is a sparse representation component of the signal; Γ(·) denotes the gamma function and I_{m−1}(·) denotes the modified Bessel function of the first kind of order m−1; α_1² denotes the square of the maximum component when the signal sparsity is 1; r is the variable of the density function and λ is the threshold; m is the number of rows of the training set X ∈ ℝ^{m×N}.
given a threshold λ, the detection probability and false alarm probability of the proposed spectrum sensing method can be expressed as
P_d = P(T > λ | H_1) = Q_m(√(2γ), √λ)   (28)

P_f = P(T > λ | H_0) = Γ(m, λ/2) / Γ(m)   (29)

where Q_m(·,·) is the generalized Marcum Q-function; Γ(·,·) denotes the upper incomplete gamma function and Γ(·) the gamma function.
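Formula (29) can be inverted numerically to fix the sensing threshold λ for a target false alarm probability, and (28) then gives the detection probability. The sketch below assumes the chi-square forms above and uses SciPy; the names `threshold_from_pfa`, `gamma_snr`, and the sample values of `m` and `pfa` are illustrative, not from the patent:

```python
import numpy as np
from scipy.special import gammainccinv
from scipy.stats import chi2, ncx2

def threshold_from_pfa(m, pfa):
    """Solve P_f = Gamma(m, lam/2)/Gamma(m) = pfa for lam.
    gammainccinv inverts the regularized upper incomplete gamma function."""
    return 2.0 * gammainccinv(m, pfa)

m = 4          # rows of the training set (half the chi-square dof)
pfa = 0.01     # target false alarm probability
lam = threshold_from_pfa(m, pfa)

# sanity check: a central chi-square with 2m dof exceeds lam with prob. pfa
assert abs(chi2.sf(lam, 2 * m) - pfa) < 1e-9

# detection probability P_d = Q_m(sqrt(2*gamma), sqrt(lam)): the Marcum Q
# equals the survival function of a non-central chi-square(2m, nc = 2*gamma)
gamma_snr = 5.0
pd = ncx2.sf(lam, 2 * m, 2.0 * gamma_snr)
```

For any positive SNR the detection probability exceeds the false alarm probability, since the non-central distribution is shifted to the right of the central one.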
Other steps and parameters are the same as those in one of the first to third embodiments.
The fifth concrete implementation mode: this embodiment is different from one of the first to fourth embodiments in that Q_m(·,·) and Γ(·) are defined as

Q_m(a, b) = ∫_b^∞ (t^m / a^(m−1)) · e^(−(t²+a²)/2) · I_{m−1}(a·t) dt

Γ(a) = ∫_0^∞ t^(a−1) · e^(−t) dt

where a and b are the arguments of the functions and t is the integration variable.
Other steps and parameters are the same as in one of the first to fourth embodiments.
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (1)

1. A spectrum sensing method based on dictionary learning in an extremely low signal-to-noise-ratio environment, characterized in that the method comprises the following specific process:
step one, establishing a spectrum sensing binary hypothesis model;
step two, collecting N groups of spectrum sensing signals to form a training set X = [x_1, x_2, …, x_N] ∈ ℝ^{m×N} for dictionary learning;
step three, training a dictionary D_opt by using the K-SVD dictionary learning algorithm with a sparsity-1 constraint;
step four, calculating the inner products g[i] of the spectrum sensing signal with each column of the trained dictionary D_opt, and finding the position of the maximum inner product, i.e., k = argmax_i |g[i]|;
step five, updating the index set Γ = {k} and the atom set D_Γ = {d_k}, and obtaining the maximum component by the least squares method, α_1 = (D_Γ^T D_Γ)^(−1) D_Γ^T x, where x is the spectrum sensing signal vector received by the sensing user and d_k is the kth atom in the dictionary;
step six, squaring the maximum component α_1 to obtain the energy corresponding to the maximum component;
step seven, setting the required false alarm probability, and calculating the sensing threshold λ according to the false alarm probability formula;
step eight, comparing the energy corresponding to the maximum component obtained in the step six with the threshold calculated in the step seven, and judging whether a signal exists or not, wherein if the energy is greater than the threshold, the signal exists, otherwise, the signal does not exist;
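Steps four to eight together amount to a one-atom matching pursuit followed by an energy threshold test. The sketch below illustrates this flow under assumed inputs, namely a column-normalized dictionary `D_opt` (here an arbitrary orthonormal matrix) and a placeholder threshold `lam`; it is an illustration, not the patented implementation:

```python
import numpy as np

def sense(x, D_opt, lam):
    """Return True if a primary-user signal is declared present."""
    # step four: inner products with every atom, position of the maximum
    g = D_opt.T @ x
    k = int(np.argmax(np.abs(g)))
    # step five: least-squares solution on the single selected atom d_k
    d_k = D_opt[:, k]
    alpha1 = float(d_k @ x) / float(d_k @ d_k)
    # step six: energy of the maximum component
    energy = alpha1 ** 2
    # steps seven/eight: compare the energy with the sensing threshold
    return energy > lam

# toy run with an illustrative orthonormal "dictionary" and threshold
rng = np.random.default_rng(1)
D_opt, _ = np.linalg.qr(rng.standard_normal((8, 8)))
lam = 4.0                                   # placeholder threshold
noise = 0.1 * rng.standard_normal(8)
present = sense(5.0 * D_opt[:, 3] + noise, D_opt, lam)   # strong signal
absent = sense(noise, D_opt, lam)                        # noise only
```

With a strong signal aligned to one atom the maximum-component energy far exceeds the threshold, while under noise alone it stays well below it.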
the establishing of the spectrum sensing binary hypothesis model in step one comprises the following specific process:

adopting the binary hypothesis model of spectrum sensing, the spectrum sensing signal received by each sensing user has the form

H_0: x = n
H_1: x = s + n
wherein x is the spectrum sensing signal vector received by the sensing user; n is additive white Gaussian noise obeying the normal distribution N(0, σ_n²) with mean 0 and variance σ_n²; H_0 denotes the case where the primary user does not occupy the detected frequency band resource, and H_1 denotes the case where the primary user is occupying the detected frequency band resource; s denotes the noise-free signal;
according to the sparse decomposition theory, the binary hypothesis model of spectrum sensing is expressed as

H_0: x = Dβ + R_n
H_1: x = Dα + R_s + Dβ + R_n

where D is the sparse decomposition dictionary; α and β are the sparse decomposition coefficient vectors of the signal and the noise, respectively, under the dictionary D; R_s and R_n are the residuals of the signal and noise sparse decompositions, and when the dictionary is an orthogonal dictionary, R_s and R_n are equal to zero;
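The remark that the residuals R_s and R_n vanish for an orthogonal dictionary can be verified directly: with an orthonormal D, any vector equals D times its coefficient vector D^T x. A minimal check, with an orthonormal dictionary built by QR factorization (an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)
# build an orthonormal dictionary via QR factorization
D, _ = np.linalg.qr(rng.standard_normal((16, 16)))

s = rng.standard_normal(16)          # "signal"
n = 0.3 * rng.standard_normal(16)    # "noise"
alpha, beta = D.T @ s, D.T @ n       # coefficient vectors under D

# residuals R_s and R_n of the sparse decomposition are (numerically) zero
assert np.allclose(D @ alpha, s) and np.allclose(D @ beta, n)
```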
the collecting of N groups of spectrum sensing signals in step two to form the training set for dictionary learning comprises the following specific process:

assuming that the number of spectrum sensing signals received by the sensing user is N, the N spectrum sensing signals form the training set X = [x_1, x_2, …, x_N] ∈ ℝ^{m×N} for dictionary learning,
where x_N is the Nth spectrum sensing signal vector received by the sensing user, ℝ denotes the real number field, and m is the number of rows of the training set X ∈ ℝ^{m×N};
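Step three's K-SVD training under a sparsity-1 constraint alternates between assigning each training sample to its best-matching atom and updating each atom as the principal (rank-1 SVD) direction of its assigned samples. The following is a compact sketch of that idea, not the patent's exact algorithm; the training set `X` and the atom count are illustrative:

```python
import numpy as np

def ksvd_sparsity1(X, n_atoms, n_iter=20, seed=0):
    """Train a column-normalized dictionary with a sparsity-1 constraint."""
    m, N = X.shape
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((m, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # sparse coding with K = 1: each sample uses its best-matching atom
        G = D.T @ X                        # inner products, atoms x samples
        idx = np.argmax(np.abs(G), axis=0)
        # dictionary update: best rank-1 fit (SVD) for each atom's samples
        for j in range(n_atoms):
            members = X[:, idx == j]
            if members.size == 0:
                continue                   # unused atom: keep it unchanged
            U, S, Vt = np.linalg.svd(members, full_matrices=False)
            D[:, j] = U[:, 0]              # principal direction, unit norm
    return D

# toy usage: training set X with m = 8 rows and N = 100 samples
rng = np.random.default_rng(3)
X = rng.standard_normal((8, 100))
D_opt = ksvd_sparsity1(X, n_atoms=4)
assert np.allclose(np.linalg.norm(D_opt, axis=0), 1.0)
```

Because every atom is either untouched or replaced by a left singular vector, the learned atoms stay unit-norm, which is the normalization the detection statistic relies on.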
the setting of the required false alarm probability and the calculation of the sensing threshold λ according to the false alarm probability formula in step seven comprise the following specific process:

under H_1, the binary hypothesis model of spectrum sensing is expressed as

x = Dα + Dβ + R = x′ + R = Du + R

where R is the residual component of x and x′ is an intermediate variable (x′ = Dα + Dβ); D is the sparse decomposition dictionary, and u is an intermediate variable (u = α + β);
x′ contains both signal and noise; the noise is expressed as

n_u = x′ − s

where s denotes the noise-free signal;
assuming that the signal power and the noise power are P_s and P_u, respectively, the signal and the noise are normalized by the noise standard deviation √P_u; the probability density function of the test statistic is

f(r | H_0) = (1 / (2^m Γ(m))) · r^(m−1) · e^(−r/2)
f(r | H_1) = (1/2) · (r / (2γ))^((m−1)/2) · e^(−(2γ+r)/2) · I_{m−1}(√(2γr))
Wherein gamma is an intermediate variable,
Figure FDA0003077352230000027
Figure FDA0003077352230000028
representing the component signal-to-noise ratio, E [. for each sparse representation]Calculating an average value; sigma2Is the variance; alpha is alphaiA sparse representation component of the signal; Γ (-) denotes the gamma function, Im-1(. cndot.) represents a first order Bessel function,
Figure FDA0003077352230000029
representing the square of the maximum component for signal sparsity, wherein r is a variable and lambda is a threshold; m isFor training set
Figure FDA00030773522300000210
Row m;
given the threshold λ, the detection probability and the false alarm probability of the spectrum sensing method are expressed as

P_d = P(T > λ | H_1) = Q_m(√(2γ), √λ)

P_f = P(T > λ | H_0) = Γ(m, λ/2) / Γ(m)

where Q_m(·,·) is the generalized Marcum Q-function; Γ(·,·) denotes the upper incomplete gamma function and Γ(·) the gamma function;
said Q_m(·,·) and Γ(·) being defined as

Q_m(a, b) = ∫_b^∞ (t^m / a^(m−1)) · e^(−(t²+a²)/2) · I_{m−1}(a·t) dt

Γ(a) = ∫_0^∞ t^(a−1) · e^(−t) dt

where a and b are the arguments of the functions and t is the integration variable.
CN201910477917.3A 2019-06-03 2019-06-03 Spectrum sensing method based on dictionary learning under extremely low signal-to-noise ratio environment Expired - Fee Related CN110138479B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910477917.3A CN110138479B (en) 2019-06-03 2019-06-03 Spectrum sensing method based on dictionary learning under extremely low signal-to-noise ratio environment


Publications (2)

Publication Number Publication Date
CN110138479A CN110138479A (en) 2019-08-16
CN110138479B true CN110138479B (en) 2021-08-27

Family

ID=67579855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910477917.3A Expired - Fee Related CN110138479B (en) 2019-06-03 2019-06-03 Spectrum sensing method based on dictionary learning under extremely low signal-to-noise ratio environment

Country Status (1)

Country Link
CN (1) CN110138479B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110531321A (en) * 2019-08-26 2019-12-03 哈尔滨工程大学 Dynamic channelization subband spectrum detection method based on characteristic value
JP2023508136A (en) * 2019-12-25 2023-03-01 イスタンブール メディポル ユニベルシテシ Primary user emulation/signal jamming attack detection method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103347268A (en) * 2013-06-05 2013-10-09 杭州电子科技大学 Self-adaptation compression reconstruction method based on energy effectiveness observation in cognitive sensor network
CN103986539A (en) * 2014-06-10 2014-08-13 哈尔滨工业大学 Cognitive radio spectrum sensing method based on sparse denoising
CN107666322A (en) * 2017-09-08 2018-02-06 山东科技大学 A kind of adaptive microseism data compression sensing method based on dictionary learning


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Signal recovery from random measurements via orthogonal matching pursuit; Tropp, Joel A; IEEE TRANSACTIONS ON INFORMATION THEORY; 20071230; pp. 4655-4666 *
Spectrum sensing method via signal denoising based on sparse representation; Yulong Gao et al.; PROCEEDINGS OF THE 2014 International Symposium on Information Technology (ISIT 2014); 20151231; pp. 349-354 *
Research on key technologies of spectrum sensing based on compressed sensing; Li Jianing; China Master's Theses Full-text Database, Information Science and Technology; 20130815; I136-267 *
Research on reconstruction algorithms for compressed sensing of sparse signals; Zhang Tao; China Master's Theses Full-text Database, Information Science and Technology; 20170215; I136-378 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210827