CN105812802A - Power big data compression transmission method based on sparse coding and decoding - Google Patents


Info

Publication number
CN105812802A
CN105812802A CN201410849400.XA CN201410849400A
Authority
CN
China
Prior art keywords
data
decoding
sparse
coding
electric power
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410849400.XA
Other languages
Chinese (zh)
Inventor
戴江鹏
刁柏青
孟祥君
张伟昌
饶伟
蒋静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
State Grid Shandong Electric Power Co Ltd
Smart Grid Research Institute of SGCC
Original Assignee
State Grid Corp of China SGCC
China Electric Power Research Institute Co Ltd CEPRI
State Grid Shandong Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, China Electric Power Research Institute Co Ltd CEPRI, State Grid Shandong Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN201410849400.XA priority Critical patent/CN105812802A/en
Publication of CN105812802A publication Critical patent/CN105812802A/en
Pending legal-status Critical Current

Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention relates to a method for compressing and transmitting electric power big data based on sparse coding and decoding. The method comprises the steps of (1) starting the system; (2) transmitting data; (3) sparse coding; (4) scalar quantization; (5) adaptive arithmetic coding; (6) storing the code; (7) deciding whether the original data need to be recalled; (8) adaptive arithmetic decoding; (9) inverse quantization; (10) sparse decoding; and (11) processing the data. The input electric power big data are converted into a sparse code for output, so that during transmission only a data code with a very small storage footprint needs to be handled. The method effectively reduces the amount of data to be stored while revealing the internal structure and essential characteristics of the data, which facilitates data backup and subsequent computation.

Description

Method for compressing and transmitting electric power big data based on sparse coding and decoding
Technical field
The present invention relates to a method for transmitting electric power big data, and in particular to a method for compressing and transmitting electric power big data based on sparse coding and decoding.
Background technology
Electric power information technology is developing toward the intelligent fusion of data and information, and its operational model is entering a new, service-centric stage. At the same time, the value of enterprise data assets is being exploited continuously, so that production and decision-making under information-based conditions become increasingly intelligent. "Big data" technology is the concentrated expression of this convergence of technologies and of intelligent application under the new situation: it provides a technical architecture for handling data that are huge in volume, diverse in type, low in value density and demanding in processing speed, together with application models aimed at high-value-added content services inside and outside the industry, its core being the development and utilization of information resources. Electric power big data refer to the massive structured, semi-structured and unstructured business data sets, mutually related to one another, that are collected through various acquisition channels such as sensors, smart devices, video surveillance equipment, audio communication equipment and mobile terminals. With the development of science and technology, the wide deployment of large information systems and the ever-growing demands on user experience, the data and services accumulated in this process keep increasing and tend toward massive scale, so that traditional databases can no longer satisfy such huge storage requirements. Against the background of explosive data growth, processing big data with efficient compression coding and thereby effectively reducing storage space is of great significance.
Sparse coding can provide an effective solution for storing electric power big data. The concept of sparse coding originates mainly from neurobiology. Biologists have proposed that, over the course of long-term evolution, mammals have developed the ability of the visual nervous system to represent natural images quickly, accurately and at low cost. Intuitively, every image our eyes see contains an enormous number of pixels, yet each image is reconstructed and stored at very little cost; this is what is referred to as sparse coding. Applying sparse coding algorithms to electric power big data can address problems such as limited data storage capacity, insufficient data presentation capability, and data interaction capability that cannot keep up with the demands of electric power big data.
Summary of the invention
In view of the deficiencies of the prior art, the present invention proposes a method for compressing and transmitting electric power big data based on sparse coding and decoding, in order to solve the problem of excessive data storage. The proposed sparse coding method greatly reduces the storage space required for the data, lowers the demand on storage hardware, saves hardware cost, and at the same time facilitates convenient data management.
The object of the invention is achieved by the following technical solution:
A method for compressing and transmitting electric power big data based on sparse coding and decoding, the improvement being that the method comprises:
(1) starting the system;
(2) transmitting data;
(3) sparse coding;
(4) scalar quantization;
(5) adaptive arithmetic coding;
(6) storing the code;
(7) deciding whether the original data need to be recalled;
(8) adaptive arithmetic decoding;
(9) inverse quantization;
(10) sparse decoding;
(11) processing the data.
Preferably, the sparse coding is expressed as follows:
x = \sum_{i=1}^{m} \alpha_i \phi_i ,
where x ∈ R^n and Φ = [φ_1, φ_2, ..., φ_m] ∈ R^{n×m}; x is the input data, Φ is the dictionary used for data representation, and α = [α_1, α_2, ..., α_m]^T ∈ R^m is the sparse coding coefficient vector.
Preferably, step (3) comprises performing sparse coding on the data to be transmitted to obtain the sparse coding coefficient vector α = [α_1, α_2, ..., α_m]^T.
Preferably, step (4) comprises performing scalar quantization on each component of the floating-point sparse coding coefficient vector α = [α_1, α_2, ..., α_m]^T to generate integer-valued coding coefficients d_i. Denoting the quantization interval by Δ, the quantization formula is:
d_i = \mathrm{round}\left( \frac{\alpha_i}{\Delta} \right)
where round(·) maps the input value to the nearest integer.
Preferably, step (5) comprises setting up a data record of the positions and coefficient values of the non-zero coefficients after quantization, and applying lossless adaptive arithmetic coding to this data record.
Further, the arithmetic coding encodes the whole input message as a single decimal fraction. The string to be encoded is processed character by character: the current interval is divided into sub-intervals, the length of each sub-interval being proportional to the probability of the corresponding character under the current context. After each character is coded, the new lower bound low equals the lower bound of the interval after the previous character plus the previous interval length current_range multiplied by the lower bound Low[current] of the sub-interval of the current character, that is:
Low = Low + current_range * Low[current];
the upper bound high equals the previous lower bound plus the previous interval length multiplied by the upper bound High[current] of the sub-interval of the current character, that is:
High = Low + current_range * High[current].
Preferably, step (6) comprises storing the binary coded data generated by the adaptive arithmetic coding.
Preferably, step (8) comprises applying an adaptive arithmetic decoding algorithm to recover the positions and quantized values of the large non-zero coefficients. The arithmetic decoding process determines the interval in which the data value lies and thereby determines which character it represents; it comprises setting the interval bounds, setting the sub-interval lower bounds, and decoding character by character, where the interval bound is assigned as the initial value of the sub-interval lower bound and the sub-interval lengths are the character probabilities. The loop terminates when the character identified by the interval is the end mark "#", at which point decoding is complete.
Preferably, step (9) comprises performing an inverse quantization operation to rebuild the coefficient values of the non-zero coefficients: α'_i = d_i Δ.
Preferably, step (10) comprises performing a sparse decoding operation: the positions other than the non-zero coefficients are zero-padded to obtain the m-dimensional sparse coding vector α', and the data are rebuilt; the data after decoding and reconstruction are x' = \Phi \alpha' = \sum_{i=1}^{m} \alpha'_i \phi_i, where Φ is the dictionary used at the encoding side.
Preferably, the dictionary Φ used in steps (3) and (10) is obtained in advance by training on a set of electric power data samples, with the learning model:
\min_{\Phi,\ \alpha_i} \sum_{i=1}^{k} \left( \| x_i - \Phi \alpha_i \|_2^2 + \lambda \| \alpha_i \|_1 \right), \quad \text{s.t. } \| \phi_j \|_2^2 \le C,\ 1 \le j \le m,
where the first term is the cost of reconstructing the input data, the l1 norm of the second term penalizes the decomposition coefficients to ensure their sparsity, λ is the regularization parameter and C is a constant. During optimization, one of α_i (1 ≤ i ≤ k) and Φ is fixed while the other is optimized, and the two updates alternate until convergence, yielding the dictionary Φ for the sparse representation of the electric power data.
Preferably, the dictionary is trained as follows:
(12.1) Φ is initialized with a Gaussian random matrix and every column of Φ is normalized;
(12.2) Φ is fixed and α_i (1 ≤ i ≤ k) is updated:
\alpha_i^{t} = \arg\min_{\alpha_i} \| x_i - \Phi \alpha_i \|_2^2 + \lambda \| \alpha_i \|_1 ;
(12.3) α_i (1 ≤ i ≤ k) is fixed and Φ is updated;
(12.4) the iteration count is set to t = t + 1 and (12.2) and (12.3) are repeated until convergence.
Compared with the prior art, the invention has the following beneficial effects:
The present invention converts the input electric power big data into a sparse code for output. During data transmission it is no longer the originally collected data that are transmitted, but only the data code obtained by the conversion, whose storage footprint is extremely small. The main advantage of this method is that it effectively reduces the amount of data to be stored, while also making it easy to discover the internal structure and essential characteristics of the data and facilitating data backup and subsequent computation.
Brief description of the drawings
Fig. 1 is a flow chart of the method for compressing and transmitting electric power big data based on sparse coding and decoding provided by the invention.
Detailed description of the invention
The specific embodiments of the present invention are described in further detail below with reference to the accompanying drawing.
The method for compressing and transmitting electric power big data based on sparse coding and decoding according to the present invention processes power-industry big data by means of sparse coding. Sparse coding of electric power big data means representing the electric power data sparsely, on the premise that a dictionary has been obtained from known training samples. Since the resulting dictionary can be regarded as a set of over-complete bases, the original data can be represented as a linear combination of these bases, and only a small number of bases are required to represent the data. On the basis of the sparse code, a scalar quantizer is constructed, a data format is designed to record the positions and values of the large non-zero coefficients after quantization, and lossless adaptive arithmetic coding is applied to this record, which further reduces the amount of stored data. The method fully expresses the internal structure and features of the data, greatly reduces the amount of data to be stored, and improves the efficiency of subsequent data processing.
The sparse coding can be expressed as follows:
x = \sum_{i=1}^{m} \alpha_i \phi_i ,
where x ∈ R^n and Φ = [φ_1, φ_2, ..., φ_m] ∈ R^{n×m}; x is the input data, Φ is the dictionary, and α = [α_1, α_2, ..., α_m]^T ∈ R^m is the sparse coding coefficient vector.
At the encoding side it can be seen that the input data set is decomposed into a linear combination of primitives. In the actual solution process, the following cost function is usually minimized:
\min_{\alpha} \left\| x - \sum_{i=1}^{m} \alpha_i \phi_i \right\|_2^2 + \lambda \| \alpha \|_1
where the first term is the cost of reconstructing the input data, the l1 norm of the second term penalizes the decomposition coefficients to ensure their sparsity, and λ is the regularization parameter. Owing to the sparsity constraint imposed by the l1 norm, the coefficient vector α obtained after solving contains only a few large non-zero coefficients while still reconstructing the original data x as closely as possible, which lays the foundation for the subsequent coding.
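As an illustration of how these coefficients could be obtained in practice for a given dictionary, the following is a minimal numpy sketch that solves the l1-regularized problem above with the iterative soft-thresholding algorithm (ISTA). The patent does not prescribe a particular solver, so the choice of ISTA and the parameter names (lam, n_iter) are illustrative assumptions only.

import numpy as np

def sparse_code_ista(x, Phi, lam=0.1, n_iter=200):
    # Solve min_a 0.5*||x - Phi a||_2^2 + lam*||a||_1 with ISTA.
    # x   : (n,) input data vector
    # Phi : (n, m) over-complete dictionary
    # Returns the length-m sparse coefficient vector a.
    a = np.zeros(Phi.shape[1])
    L = np.linalg.norm(Phi, 2) ** 2                            # Lipschitz constant of the quadratic term
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ a - x)                           # gradient of 0.5*||x - Phi a||^2
        z = a - grad / L                                       # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold (l1 proximal step)
    return a

Because of the l1 penalty, most entries of the returned vector come out exactly zero, which is what makes the position/value record described below compact.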
The sparse code α obtained at the encoding side is then further processed by adaptive coding. Compared with the sparse code alone, the result obtained after adaptive coding again greatly reduces the required storage space. Through these two coding stages, complex, high-dimensional data can be compressed into a simple coded form.
The specific flow of the method for compressing and transmitting electric power big data based on sparse coding and decoding of the present invention is as follows:
1) Perform sparse coding on the data to be transmitted to obtain the sparse coding coefficient vector α = [α_1, α_2, ..., α_m]^T.
2) Perform scalar quantization on each component α_i (1 ≤ i ≤ m) of the floating-point sparse coding coefficient vector α = [α_1, α_2, ..., α_m]^T to generate the integer-valued coding coefficients d_i. Denoting the quantization interval by Δ, the quantization formula is:
d_i = \mathrm{round}\left( \frac{\alpha_i}{\Delta} \right)
where round(·) maps the input value to the nearest integer.
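A short sketch of this quantization rule together with the matching inverse quantization used later at the decoder (step 6); the quantization interval delta = 0.1 is an illustrative assumption.

import numpy as np

def quantize(alpha, delta):
    # d_i = round(alpha_i / delta): map each float coefficient to an integer.
    return np.round(alpha / delta).astype(int)

def dequantize(d, delta):
    # Inverse quantization used at the decoding side: alpha'_i = d_i * delta.
    return d * delta

alpha = np.array([0.0, 0.37, -1.92, 0.0, 2.05])
d = quantize(alpha, delta=0.1)        # e.g. [0, 4, -19, 0, 20]
alpha_rec = dequantize(d, delta=0.1)  # e.g. [0.0, 0.4, -1.9, 0.0, 2.0]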
3) Set up a data format to record the positions and coefficient values of the non-zero coefficients after quantization. Assuming there are K large non-zero coefficients after quantization, the positions (i.e. atom indices) of the non-zero coefficients are denoted p_i (1 ≤ i ≤ K), and the record takes the following form:
p_1 ... p_K EOF d_1 ... d_K #
where EOF is a coding marker indicating the end of the position coding and the beginning of the quantized-value coding, and # is the end mark of the whole data record. Lossless adaptive arithmetic coding is then applied to this data record.
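The record described above can be laid out, before entropy coding, as a simple symbol sequence. The sketch below follows the field order and the EOF/# markers from the text; the concrete Python representation (a plain list of symbols, 0-based atom indices) is an illustrative assumption.

def build_record(d):
    # Build [p_1, ..., p_K, 'EOF', d_1, ..., d_K, '#'] from the quantized vector d.
    # Only the K non-zero entries are recorded: first their positions, then their values.
    positions = [i for i, v in enumerate(d) if v != 0]    # atom indices p_i (0-based here)
    values = [d[i] for i in positions]                    # quantized values d_i
    return positions + ["EOF"] + values + ["#"]

# Example: build_record([0, 4, 0, -19, 0, 20]) -> [1, 3, 5, 'EOF', 4, -19, 20, '#']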
The arithmetic coding encodes the whole input message as a single number, a decimal n satisfying 0.0 ≤ n < 1.0. The string to be encoded is processed character by character; throughout the process the current interval is divided into sub-intervals, the length of each sub-interval being proportional to the probability of the corresponding character under the current context. After each character is coded, the new lower bound (low) equals the lower bound of the interval after the previous character plus the previous interval length current_range multiplied by the lower bound Low[current] of the sub-interval of the current character, that is:
Low = Low + current_range * Low[current];
the upper bound (high) equals the previous lower bound plus the previous interval length multiplied by the upper bound High[current] of the sub-interval of the current character, that is:
High = Low + current_range * High[current].
The pseudo code of this process is as follows:
low = 0
high = 1
while there are still symbols to encode
    current_range = high - low
    high = low + current_range * upper_bound_of_new_symbol
    low = low + current_range * lower_bound_of_new_symbol
end while
4) Store the binary coded data generated by the adaptive arithmetic coding.
At the decoding side, the transmitted data are decoded and the original data are reconstructed. The specific decoding process is as follows:
5) Apply the adaptive arithmetic decoding algorithm to recover the positions p_i (1 ≤ i ≤ K) and quantized values d_i (1 ≤ i ≤ K) of the large non-zero coefficients. The arithmetic decoding process determines the interval in which the data value lies and thereby determines which character it represents. It comprises setting the interval bounds, setting the sub-interval lower bounds and decoding character by character, where the interval bound is assigned as the initial value of the sub-interval lower bound and the sub-interval lengths are the character probabilities. The loop terminates when the character identified by the interval is the end mark "#", at which point decoding is complete.
The pseudo code of the adaptive arithmetic decoding process is as follows:
encoded_value = encoded_input
while the bit stream is not fully decoded
    current_range = High[current] - Low[current]
    encoded_value = (encoded_value - Low[current]) / current_range
end while
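To make the interval-update rules above concrete, the following is a self-contained Python sketch of a floating-point arithmetic coder and decoder that round-trips the data record described earlier. It is didactic only: the model uses fixed, equal symbol probabilities rather than the adaptive probabilities of the described method, and a practical lossless coder would use integer arithmetic with renormalization; all names are illustrative assumptions.

def make_model(symbols):
    # Static model: one equal-width [Low[s], High[s]) sub-interval per distinct symbol.
    alphabet = sorted(set(symbols), key=str)
    p = 1.0 / len(alphabet)
    cum = {}
    for i, s in enumerate(alphabet):
        hi = 1.0 if i == len(alphabet) - 1 else (i + 1) * p
        cum[s] = (i * p, hi)
    return cum

def arith_encode(symbols, cum):
    low, high = 0.0, 1.0
    for s in symbols:
        rng = high - low                                  # current_range
        low, high = low + rng * cum[s][0], low + rng * cum[s][1]
    return (low + high) / 2.0                             # any value inside the final interval

def arith_decode(value, cum, end_mark="#"):
    out = []
    low, high = 0.0, 1.0
    while True:
        rng = high - low
        scaled = (value - low) / rng                      # position inside the current interval
        for s, (lo, hi) in cum.items():
            if lo <= scaled < hi:                         # decide which character the value encodes
                out.append(s)
                low, high = low + rng * lo, low + rng * hi
                break
        else:
            raise ValueError("encoded value falls outside the model range")
        if out[-1] == end_mark:                           # '#' terminates the whole record
            return out

record = [1, 3, 5, "EOF", 4, -19, 20, "#"]                # positions, EOF, values, end mark
model = make_model(record)
code = arith_encode(record, model)
assert arith_decode(code, model) == record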
6) Perform the inverse quantization operation to rebuild the coefficient values α'_i of the non-zero coefficients, as follows:
\alpha'_i = d_i \Delta
7) Perform the sparse decoding operation: the positions other than the non-zero coefficients are zero-padded to obtain the m-dimensional sparse coding vector α', and the data are reconstructed as follows; the data after decoding and reconstruction are:
x' = \Phi \alpha' = \sum_{i=1}^{m} \alpha'_i \phi_i
where Φ is the dictionary used at the encoding side.
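A minimal sketch of this decoder-side reconstruction, combining the zero-padding of the recovered positions, the inverse quantization of step 6) and the multiplication by the dictionary; the function and variable names are illustrative.

import numpy as np

def sparse_decode(positions, values, Phi, delta):
    # Rebuild x' = Phi @ alpha' from the recovered non-zero positions and quantized values.
    m = Phi.shape[1]
    alpha_rec = np.zeros(m)
    alpha_rec[positions] = np.asarray(values) * delta     # inverse quantization + zero padding
    return Phi @ alpha_rec                                 # x' = sum_i alpha'_i * phi_i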
8) The compression efficiency for the electric power data is directly related to the dictionary Φ used at the encoding and decoding sides, so how to select the dictionary Φ is a key issue. In view of the characteristics of electric power data, this patent obtains the dictionary in advance by training on a set of electric power data samples. The learning model for the dictionary Φ is:
\min_{\Phi,\ \alpha_i} \sum_{i=1}^{k} \left( \| x_i - \Phi \alpha_i \|_2^2 + \lambda \| \alpha_i \|_1 \right), \quad \text{s.t. } \| \phi_j \|_2^2 \le C,\ 1 \le j \le m,
where C is a constant. During optimization, one of α_i (1 ≤ i ≤ k) and Φ is fixed while the other is optimized, and the two updates alternate until convergence. The whole optimization process is as follows:
8.1) Initialize Φ with a Gaussian random matrix and normalize every column of Φ.
8.2) Fix Φ and update α_i (1 ≤ i ≤ k):
\alpha_i^{t} = \arg\min_{\alpha_i} \| x_i - \Phi \alpha_i \|_2^2 + \lambda \| \alpha_i \|_1 ,
which can be solved by linear programming methods.
8.3) Fix α_i (1 ≤ i ≤ k) and update Φ; this is a constrained quadratic programming problem for which many optimization methods are available.
8.4) Set the iteration count t = t + 1 and repeat 8.2) and 8.3) until convergence.
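The alternating scheme of steps 8.1)-8.4) could be sketched as follows. The coefficient update reuses the sparse_code_ista helper from the earlier sketch, and the dictionary update uses a simple normalized least-squares (MOD-style) step as an illustrative stand-in for the constrained quadratic program mentioned above, since the text leaves the concrete solver open; the parameter names are assumptions.

import numpy as np

def learn_dictionary(X, m, lam=0.1, n_outer=30, seed=0):
    # X : (n, k) matrix whose columns are the training samples x_i
    # m : number of dictionary atoms (m > n gives an over-complete dictionary)
    rng = np.random.default_rng(seed)
    n, k = X.shape
    Phi = rng.standard_normal((n, m))               # 8.1) Gaussian random initialization
    Phi /= np.linalg.norm(Phi, axis=0)              #      normalize every column of Phi
    for _ in range(n_outer):
        # 8.2) fix Phi, update the coefficients column by column (uses sparse_code_ista above)
        A = np.column_stack([sparse_code_ista(X[:, i], Phi, lam) for i in range(k)])
        # 8.3) fix the coefficients, update Phi; a least-squares (MOD-style) step is used
        #      here as an assumed stand-in for the constrained quadratic program
        Phi = X @ np.linalg.pinv(A)
        norms = np.linalg.norm(Phi, axis=0)
        Phi /= np.where(norms > 0, norms, 1.0)      #      re-normalize the columns
    return Phi                                      # 8.4) iterate until convergence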
Embodiment
First, the encoding side trains on a training set similar to the test samples to obtain an over-complete dictionary, which is used for the sparse representation of the input image. The specific flow is as follows:
1) The input image is divided into blocks (in this experiment the input image has a size of 128*128 and each small block has a size of 5*5, taken from left to right and from top to bottom, ensuring an overlap of 2 pixels between adjacent blocks); a block-extraction sketch is given after this flow.
2) Perform sparse coding of each data block y of the input image over the dictionary Φ to obtain the coding coefficients α, as follows:
\alpha = \arg\min_{\alpha} \| y - \Phi \alpha \|_2^2 + \lambda \| \alpha \|_1
3) Perform scalar quantization on each component α_i (1 ≤ i ≤ m) of the floating-point sparse coding coefficient vector α = [α_1, α_2, ..., α_m]^T to generate the integer-valued coding coefficients d_i.
4) Set up the data format to record the positions and coefficient values of the non-zero coefficients after quantization, and apply lossless adaptive arithmetic coding to this data record.
5) Store the binary coded data generated by the adaptive arithmetic coding.
6) Apply the adaptive arithmetic decoding algorithm to recover the positions p_i (1 ≤ i ≤ K) and quantized values d_i (1 ≤ i ≤ K) of the large non-zero coefficients; the adaptive arithmetic decoding process mainly performs interval decisions, i.e. it determines the interval in which the data value lies and thereby determines which character it represents.
7) Perform the inverse quantization operation to rebuild the coefficient values α'_i of the non-zero coefficients; the inverse quantization is as follows:
\alpha'_i = d_i \Delta
8) Perform the sparse decoding operation: the positions other than the non-zero coefficients are zero-padded to obtain the m-dimensional sparse coding vector α', and the data are reconstructed as follows; the data after decoding and reconstruction are:
x' = \Phi \alpha' = \sum_{i=1}^{m} \alpha'_i \phi_i
where Φ is the dictionary used at the encoding side.
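A short sketch of the block extraction in step 1): with a 5*5 block and a 2-pixel overlap the stride is 3 pixels, so a 128*128 image yields 42*42 = 1764 blocks, each flattened into a 25-dimensional column that can then be sparsely coded over Φ. Function and variable names are illustrative.

import numpy as np

def extract_blocks(img, block=5, overlap=2):
    # Slide a block x block window with the given overlap, left to right and
    # top to bottom, and return one flattened column per block.
    step = block - overlap                               # 5 - 2 = 3 pixel stride
    cols = []
    for r in range(0, img.shape[0] - block + 1, step):
        for c in range(0, img.shape[1] - block + 1, step):
            cols.append(img[r:r + block, c:c + block].ravel())
    return np.column_stack(cols)

img = np.random.rand(128, 128)
Y = extract_blocks(img)                                  # shape (25, 1764) for a 128*128 input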
The computer configuration used for the experiments in this embodiment is: a 64-bit operating system, 16 GB of memory and an Intel processor; the software environment is MATLAB R2012a.
Finally, it should be noted that the above embodiment is only intended to illustrate the technical solution of the present invention and not to limit it. Although those of ordinary skill in the art may modify the specific embodiments of the present invention or replace them with equivalents with reference to the above embodiment, any modification or equivalent replacement that does not depart from the spirit and scope of the present invention falls within the scope of the pending claims of the present application.

Claims (12)

1. A method for compressing and transmitting electric power big data based on sparse coding and decoding, characterized in that the method comprises:
(1) starting the system;
(2) transmitting data;
(3) sparse coding;
(4) scalar quantization;
(5) adaptive arithmetic coding;
(6) storing the code;
(7) deciding whether the original data need to be recalled;
(8) adaptive arithmetic decoding;
(9) inverse quantization;
(10) sparse decoding;
(11) processing the data.
2. The method for compressing and transmitting electric power big data based on sparse coding and decoding according to claim 1, characterized in that the sparse coding is expressed as follows:
x = \sum_{i=1}^{m} \alpha_i \phi_i ,
where x ∈ R^n and Φ = [φ_1, φ_2, ..., φ_m] ∈ R^{n×m}; x is the input data, Φ is the dictionary used for data representation, and α = [α_1, α_2, ..., α_m]^T ∈ R^m is the sparse coding coefficient vector.
3. The method for compressing and transmitting electric power big data based on sparse coding and decoding according to claim 1, characterized in that step (3) comprises performing sparse coding on the data to be transmitted to obtain the sparse coding coefficient vector α = [α_1, α_2, ..., α_m]^T.
4. The method for compressing and transmitting electric power big data based on sparse coding and decoding according to claim 1, characterized in that step (4) comprises performing scalar quantization on each component of the floating-point sparse coding coefficient vector α = [α_1, α_2, ..., α_m]^T to generate integer-valued coding coefficients d_i; denoting the quantization interval by Δ, the quantization formula is:
d_i = \mathrm{round}\left( \frac{\alpha_i}{\Delta} \right)
where round(·) maps the input value to the nearest integer.
5. The method for compressing and transmitting electric power big data based on sparse coding and decoding according to claim 1, characterized in that step (5) comprises setting up a data record of the positions and coefficient values of the non-zero coefficients after quantization, and applying lossless adaptive arithmetic coding to this data record.
6. The method for compressing and transmitting electric power big data based on sparse coding and decoding according to claim 5, characterized in that the arithmetic coding encodes the whole input message as a single decimal fraction; the string to be encoded is processed character by character, the current interval is divided into sub-intervals, the length of each sub-interval being proportional to the probability of the corresponding character under the current context, and after each character is coded the new lower bound low equals the lower bound of the interval after the previous character plus the previous interval length current_range multiplied by the lower bound Low[current] of the sub-interval of the current character, that is:
Low = Low + current_range * Low[current];
the upper bound high equals the previous lower bound plus the previous interval length multiplied by the upper bound High[current] of the sub-interval of the current character, that is:
High = Low + current_range * High[current].
7. The method for compressing and transmitting electric power big data based on sparse coding and decoding according to claim 1, characterized in that step (6) comprises storing the binary coded data generated by the adaptive arithmetic coding.
8. The method for compressing and transmitting electric power big data based on sparse coding and decoding according to claim 1, characterized in that step (8) comprises applying an adaptive arithmetic decoding algorithm to recover the positions and quantized values of the large non-zero coefficients; the arithmetic decoding process determines the interval in which the data value lies and thereby determines which character it represents, and comprises setting the interval bounds, setting the sub-interval lower bounds and decoding character by character, wherein the interval bound is assigned as the initial value of the sub-interval lower bound and the sub-interval lengths are the character probabilities; the loop terminates when the character identified by the interval is the end mark "#", at which point decoding is complete.
9. The method for compressing and transmitting electric power big data based on sparse coding and decoding according to claim 1, characterized in that step (9) comprises performing an inverse quantization operation to rebuild the coefficient values of the non-zero coefficients: α'_i = d_i Δ.
10. The method for compressing and transmitting electric power big data based on sparse coding and decoding according to claim 1, characterized in that step (10) comprises performing a sparse decoding operation: the positions other than the non-zero coefficients are zero-padded to obtain the m-dimensional sparse coding vector α', and the data are rebuilt; the data after decoding and reconstruction are x' = \Phi \alpha' = \sum_{i=1}^{m} \alpha'_i \phi_i, where Φ is the dictionary used at the encoding side.
11. The method for compressing and transmitting electric power big data based on sparse coding and decoding according to claim 1, characterized in that the dictionary Φ used in steps (3) and (10) is obtained in advance by training on a set of electric power data samples, with the learning model:
\min_{\Phi,\ \alpha_i} \sum_{i=1}^{k} \left( \| x_i - \Phi \alpha_i \|_2^2 + \lambda \| \alpha_i \|_1 \right), \quad \text{s.t. } \| \phi_j \|_2^2 \le C,\ 1 \le j \le m,
where the first term is the cost of reconstructing the input data, the l1 norm of the second term penalizes the decomposition coefficients to ensure their sparsity, λ is the regularization parameter and C is a constant; during optimization, one of α_i (1 ≤ i ≤ k) and Φ is fixed while the other is optimized, and the two updates alternate until convergence, yielding the dictionary Φ for the sparse representation of the electric power data.
12. The method for compressing and transmitting electric power big data based on sparse coding and decoding according to claim 11, characterized in that:
(12.1) Φ is initialized with a Gaussian random matrix and every column of Φ is normalized;
(12.2) Φ is fixed and α_i (1 ≤ i ≤ k) is updated: \alpha_i^{t} = \arg\min_{\alpha_i} \| x_i - \Phi \alpha_i \|_2^2 + \lambda \| \alpha_i \|_1 ;
(12.3) α_i (1 ≤ i ≤ k) is fixed and Φ is updated;
(12.4) the iteration count is set to t = t + 1 and (12.2) and (12.3) are repeated until convergence.
CN201410849400.XA 2014-12-29 2014-12-29 Power big data compression transmission method based on sparse coding and decoding Pending CN105812802A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410849400.XA CN105812802A (en) 2014-12-29 2014-12-29 Power big data compression transmission method based on sparse coding and decoding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410849400.XA CN105812802A (en) 2014-12-29 2014-12-29 Power big data compression transmission method based on sparse coding and decoding

Publications (1)

Publication Number Publication Date
CN105812802A true CN105812802A (en) 2016-07-27

Family

ID=56420990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410849400.XA Pending CN105812802A (en) 2014-12-29 2014-12-29 Power big data compression transmission method based on sparse coding and decoding

Country Status (1)

Country Link
CN (1) CN105812802A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867821A (en) * 2010-06-18 2010-10-20 上海交通大学 Video coding system based on sparse sampling and texture reconstruction
WO2010149554A1 (en) * 2009-06-22 2010-12-29 Thomson Licensing Process for matching pursuit based coding of video data for a sequence of images
WO2011046607A2 (en) * 2009-10-14 2011-04-21 Thomson Licensing Filtering and edge encoding
CN103280221A (en) * 2013-05-09 2013-09-04 北京大学 Audio frequency lossless compression coding and decoding method and system based on basis pursuit
CN104185026A (en) * 2014-09-05 2014-12-03 西安电子科技大学 Infrared high-resolution imaging method for phase encoding under random projection domain and device thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010149554A1 (en) * 2009-06-22 2010-12-29 Thomson Licensing Process for matching pursuit based coding of video data for a sequence of images
WO2011046607A2 (en) * 2009-10-14 2011-04-21 Thomson Licensing Filtering and edge encoding
CN101867821A (en) * 2010-06-18 2010-10-20 上海交通大学 Video coding system based on sparse sampling and texture reconstruction
CN103280221A (en) * 2013-05-09 2013-09-04 北京大学 Audio frequency lossless compression coding and decoding method and system based on basis pursuit
CN104185026A (en) * 2014-09-05 2014-12-03 西安电子科技大学 Infrared high-resolution imaging method for phase encoding under random projection domain and device thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
沈跃: "Research on detection and compression methods for electric power *** data based on compressed sensing theory", Wanfang Data Knowledge Platform *
陈红莉: "Research on super-resolution algorithms for single-frame images based on sparse representation and adaptive dictionaries", Wanfang Data Knowledge Service Platform *

Similar Documents

Publication Publication Date Title
US10834415B2 (en) Devices for compression/decompression, system, chip, and electronic device
CN107516129B (en) Dimension self-adaptive Tucker decomposition-based deep network compression method
CN111641832B (en) Encoding method, decoding method, device, electronic device and storage medium
Alvar et al. Multi-task learning with compressible features for collaborative intelligence
CN109859281B (en) Compression coding method of sparse neural network
CN109245773A (en) A kind of decoding method based on block circulation sparse matrix neural network
CN105374054A (en) Hyperspectral image compression method based on spatial spectrum characteristics
CN103188494A (en) Apparatus and method for encoding depth image by skipping discrete cosine transform (DCT), and apparatus and method for decoding depth image by skipping DCT
CN103546161A (en) Lossless compression method based on binary processing
CN111246206A (en) Optical flow information compression method and device based on self-encoder
CN115361559A (en) Image encoding method, image decoding method, image encoding device, image decoding device, and storage medium
CN114071141A (en) Image processing method and equipment
CN111050170A (en) Image compression system construction method, compression system and method based on GAN
Yadav et al. Flow-MotionNet: A neural network based video compression architecture
SairaBanu et al. Parallel implementation of Singular Value Decomposition (SVD) in image compression using open Mp and sparse matrix representation
CN113256744B (en) Image coding and decoding method and system
CN116912337A (en) Data processing method and device based on image compression coding system
CN115632660B (en) Data compression method, device, equipment and medium
CN105812802A (en) Power big data compression transmission method based on sparse coding and decoding
Roy et al. Compression of time evolutionary image data through predictive deep neural networks
CN109474826B (en) Picture compression method and device, electronic equipment and storage medium
CN114501031B (en) Compression coding and decompression method and device
Karna et al. Evaluation of DLX Microprocessor Instructions Efficiency for Image Compression
Vooturi et al. Efficient inferencing of compressed deep neural networks
CN107172425A (en) Reduced graph generating method, device and terminal device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20171016

Address after: 100031 Xicheng District West Chang'an Avenue, No. 86, Beijing

Applicant after: State Grid Corporation of China

Applicant after: China Electric Power Research Institute

Applicant after: State Grid Smart Grid Institute

Applicant after: State Grid Shandong Electric Power Company

Address before: 100031 Xicheng District West Chang'an Avenue, No. 86, Beijing

Applicant before: State Grid Corporation of China

Applicant before: China Electric Power Research Institute

Applicant before: State Grid Shandong Electric Power Company

TA01 Transfer of patent application right
CB02 Change of applicant information

Address after: 100031 Xicheng District West Chang'an Avenue, No. 86, Beijing

Applicant after: State Grid Corporation of China

Applicant after: China Electric Power Research Institute

Applicant after: GLOBAL ENERGY INTERCONNECTION RESEARCH INSTITUTE

Applicant after: State Grid Shandong Electric Power Company

Address before: 100031 Xicheng District West Chang'an Avenue, No. 86, Beijing

Applicant before: State Grid Corporation of China

Applicant before: China Electric Power Research Institute

Applicant before: State Grid Smart Grid Institute

Applicant before: State Grid Shandong Electric Power Company

CB02 Change of applicant information
RJ01 Rejection of invention patent application after publication

Application publication date: 20160727

RJ01 Rejection of invention patent application after publication