CN108964672A - Polar code decoding method based on a deep neural network - Google Patents

Polar code decoding method based on a deep neural network (Download PDF)

Info

Publication number
CN108964672A
Authority
CN
China
Prior art keywords
neural network
deep neural
decoding
node
polarization code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810736700.5A
Other languages
Chinese (zh)
Inventor
李世宝
卢丽金
潘荔霞
刘建航
黄庭培
陈海华
邓云强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China filed Critical China University of Petroleum East China
Priority to CN201810736700.5A priority Critical patent/CN108964672A/en
Publication of CN108964672A publication Critical patent/CN108964672A/en
Pending legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H03 - ELECTRONIC CIRCUITRY
    • H03M - CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 - Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 - Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05 - Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13 - Linear codes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Error Detection And Correction (AREA)

Abstract

The present invention provides a polar code decoding method based on a deep neural network. The method first collects and organizes sample data; it then sets the model parameters and trains the network with the back-propagation algorithm; next, the likelihood ratios corresponding to a Rate-R node are input into the trained deep neural network model, which outputs 0 or 1; finally, the simplified successive-cancellation decoding algorithm is executed according to that 0/1 state. By combining deep neural network techniques with polar code decoding, the method reduces the traversal operations on Rate-R nodes, increases decoding speed, and lowers decoding latency.

Description

Polar code decoding method based on a deep neural network
Technical field
The invention belongs to the field of communication technology, and in particular relates to a method in which a deep neural network assists the simplified successive-cancellation decoding algorithm to perform fast decoding.
Background art
Polar codes are a novel class of channel codes proposed by E. Arikan in 2008. They are the first constructive coding scheme proven, by rigorous mathematical methods, to achieve channel capacity, and they have clear and simple encoding and decoding algorithms. Through the continued efforts of channel-coding researchers, the error-correction performance currently achievable with polar codes exceeds that of the widely used turbo codes and LDPC codes.
The foundation of polar codes is channel polarization. When the number of channels (time slots) participating in channel polarization is large enough, the capacities of the resulting polarized channels exhibit a polarization phenomenon: the capacity of one fraction of the channels tends to 1, while the capacity of the rest tends to 0. On this basis, a polar code transmits information bits only over channels whose capacity tends to 1, while the remaining channels, including those whose capacity tends to 0, carry fixed bits known to both the transmitting and receiving ends. Let K denote the number of channels used to transmit information bits; this forms a one-to-one mapping from the K information bits to the N transmitted bits, and this mapping is polar encoding. When polar codes were first proposed, successive-cancellation (SC) decoding was proposed along with them. Although SC decoding has low complexity and a simple decoding structure, it can only decode bit by bit and therefore incurs high latency. To reduce latency, the simplified successive-cancellation (SSC) decoding algorithm was proposed, and various improved versions based on SSC decoding have since been put forward. In the SSC algorithm, nodes are divided into three types: Rate-1 nodes, Rate-0 nodes, and Rate-R nodes, each with its own decoding rule. Based on the tree structure of the code, the SSC decoding algorithm can simplify the decoding of Rate-0 and Rate-1 nodes to increase decoding speed and reduce latency.
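The mapping from the K information bits to the N transmitted bits can be illustrated with a short encoding sketch in Python (not part of the patent text). It assumes the standard polar generator matrix, i.e. the n-fold Kronecker power of the kernel F = [[1, 0], [1, 1]], with the bit-reversal permutation omitted and a purely hypothetical information set:

```python
import numpy as np

def polar_encode(u, n):
    # Generator matrix: n-fold Kronecker power of the 2x2 polar kernel
    # F = [[1, 0], [1, 1]] (bit-reversal permutation omitted in this sketch).
    F = np.array([[1, 0], [1, 1]], dtype=int)
    G = F
    for _ in range(n - 1):
        G = np.kron(G, F)
    return u.dot(G) % 2

# Toy example with N = 8, K = 4: information bits on an assumed reliable set,
# frozen positions held at 0 and known to both transmitter and receiver.
N, n = 8, 3
u = np.zeros(N, dtype=int)
u[[3, 5, 6, 7]] = [1, 0, 1, 1]   # hypothetical information set {3, 5, 6, 7}
print(polar_encode(u, n))
```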
A deep neural network (DNN) is, in essence, a set of many neurons connected according to certain rules. In data classification, a DNN achieves high accuracy thanks to its strong learning and generalization abilities, meeting the demand for fast and accurate classification of data.
To further reduce decoding latency, this patent proposes a polar code decoding method based on a deep neural network. By using a deep neural network to assist the simplified successive-cancellation decoding algorithm in fast decoding, it increases decoding speed and thereby reduces decoding latency.
Summary of the invention
The invention proposes a polar code decoding method based on a deep neural network. While keeping the decoding performance unchanged, the decoding speed of the simplified successive-cancellation decoding algorithm is increased with the assistance of a deep neural network, reducing decoding latency. This decoding algorithm is called the deep-neural-network-assisted simplified successive-cancellation (DNA-SSC) decoding algorithm.
In the sample-data preparation stage, 1000 frames of known codewords are fed into the simplified successive-cancellation decoder at the same signal-to-noise ratio, and the likelihood-ratio vector α of each Rate-R node is recorded. Each α corresponds to a known leaf-node sequence S. A hard decision is applied to each α to obtain a vector β, and β is multiplied by the corresponding known generator matrix G to obtain the sequence Ŝ. The sequence Ŝ corresponding to α is compared with S: if the symbols of the two sequences are identical, the case is labeled 1; if the symbols of the two sequences are not exactly the same, the case is labeled 0. That is, the label corresponding to each feature vector α is 1 or 0, and each pair (α, 1) or (α, 0) constitutes one sample. From the recorded samples, 80% are selected at random as training samples, and the remaining 20% are used as test samples.
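As a concrete illustration of this labeling rule, the following Python sketch (not part of the patent; the names kernel_power and label_sample are illustrative) builds one (α, label) pair. It assumes log-likelihood ratios with the convention that a negative value hard-decides to bit 1, and takes G to be the Kronecker-power generator matrix of the node's size; neither assumption is fixed by the text.

```python
import numpy as np

def kernel_power(m):
    # m-fold Kronecker power of the polar kernel F = [[1, 0], [1, 1]].
    F = np.array([[1, 0], [1, 1]], dtype=int)
    G = F
    for _ in range(m - 1):
        G = np.kron(G, F)
    return G

def label_sample(alpha, S):
    # alpha: recorded likelihood-ratio vector of a Rate-R node (length 2**m),
    #        taken here to be LLRs where a negative value hard-decides to 1.
    # S    : the known leaf-node bit sequence belonging to that alpha.
    alpha = np.asarray(alpha, dtype=float)
    beta = (alpha < 0).astype(int)                       # hard decision -> beta
    m = int(np.log2(len(beta)))
    S_hat = beta.dot(kernel_power(m)) % 2                # beta x G -> estimate of S
    label = int(np.array_equal(S_hat, np.asarray(S)))    # 1 if identical, else 0
    return alpha, label
```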
A deep neural network is a multi-layer perceptron containing multiple hidden layers and can be used as a classifier; its hierarchy and training rules can be set according to the actual situation. When building the DNN, the hierarchy comprises 1 input layer, 3 hidden layers, and 1 output layer. The input of the input layer is an α vector of length N, the numbers of nodes of the hidden layers are set to 128, 64, and 32 respectively, and the number of nodes of the output layer is set to 2; the DNN is built with full connections between layers, and the sigmoid function is set as the activation function. When training the DNN, supervised learning is used: the network is trained with the error back-propagation algorithm, and the weights and neuron biases are adjusted by computing the error terms of the output layer and the 3 hidden layers, until the training of the DNN is complete.
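A minimal sketch of this architecture, written in PyTorch purely for illustration (the patent names no framework, and the class name RateRNet is hypothetical):

```python
import torch.nn as nn

class RateRNet(nn.Module):
    # Fully connected network with the hierarchy described above: an input
    # layer taking an alpha vector of length N, three hidden layers of
    # 128, 64 and 32 nodes, a 2-node output layer, and sigmoid activations.
    def __init__(self, N):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N, 128), nn.Sigmoid(),
            nn.Linear(128, 64), nn.Sigmoid(),
            nn.Linear(64, 32), nn.Sigmoid(),
            nn.Linear(32, 2), nn.Sigmoid(),
        )

    def forward(self, alpha):
        return self.net(alpha)

model = RateRNet(N=256)   # N = 256 as in the embodiment described below
```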
At the polar code decoding end, the likelihood ratios corresponding to a Rate-R node are input into the deep neural network model to obtain 0 or 1, and the simplified successive-cancellation decoding algorithm is executed according to that 0/1 state.
During polar code decoding, the following steps apply:
Step 1: prepare the sample data, and pre-process the sample data with a normalization method (one possible normalization is sketched after this list);
Step 2: build the deep neural network, and train the deep neural network;
Step 3: in the polar code decoding stage, input the likelihood ratios corresponding to a Rate-R node into the deep neural network model to obtain 0 or 1, and execute the simplified successive-cancellation decoding algorithm according to that 0/1 state.
Preparing the sample data in step 1 means that 80% of all samples are selected at random as training samples and the remaining 20% are used as test samples. Building the deep neural network in step 2 means that the number of input layers is set to 1, the number of hidden layers is set to 3, and the number of output layers is set to 1, with full connections between layers.
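The normalization in step 1 is not spelled out in the text; as a minimal sketch under that caveat, a per-vector min-max scaling of the recorded α vectors (one common choice, assumed here; the function name normalize is illustrative) could look like this:

```python
import numpy as np

def normalize(samples):
    # Min-max normalization of each recorded alpha vector to [0, 1]; the text
    # only states that the samples are normalized, so this choice is an assumption.
    samples = np.asarray(samples, dtype=float)           # shape (num_samples, N)
    lo = samples.min(axis=1, keepdims=True)
    hi = samples.max(axis=1, keepdims=True)
    return (samples - lo) / np.maximum(hi - lo, 1e-12)
```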
Beneficial effects
Compared with the prior art, the present invention has the following innovative features:
The likelihood ratios corresponding to a Rate-R node are classified with a deep neural network. At the polar code decoding end, the likelihood ratios corresponding to the current Rate-R node are first input into the deep neural network model to obtain 0 or 1. If the output is 1, a hard decision is applied directly to that likelihood-ratio vector, yielding the decoded bits corresponding to the current Rate-R node; otherwise the next Rate-R node is located and its likelihood ratios are input into the deep neural network model, and so on until all decoding is complete.
Deep neural network techniques and polar code decoding techniques are combined. In the sample-data preparation stage, the sample data come from multiple runs of the simplified successive-cancellation decoding algorithm; in the stage of building and training the deep neural network, the input of the input layer is a likelihood-ratio vector; at the decoding end, the likelihood ratios corresponding to a Rate-R node are input into the deep neural network model to obtain 0 or 1, and the simplified successive-cancellation decoding algorithm is executed according to that 0/1 state. In this way, the DNA-SSC decoding algorithm can decode Rate-R nodes at higher speed, reducing the traversal operations on Rate-R nodes, increasing decoding speed, and lowering decoding latency.
Brief description of the drawings
Fig. 1 is the flow chart of the DNA-SSC decoding algorithm.
Specific embodiments
The present invention is further described below with reference to the drawings and embodiments.
The present invention provides a polar code decoding method based on a deep neural network. It mainly comprises three parts: preparing sample data, building and training the deep neural network, and decoding. In the sample-data preparation stage, 80% of the collected samples are selected at random as training samples and the remaining 20% are used as test samples. In the stage of building and training the deep neural network, the hierarchy and parameters of the network are first determined and the deep neural network is built; the error terms of the output layer and the 3 hidden layers are then computed to adjust the weights and neuron biases. At the polar code decoding end, the likelihood ratios corresponding to a Rate-R node are input into the deep neural network model to obtain 0 or 1, and the simplified successive-cancellation decoding algorithm is executed according to that 0/1 state.
In the sample-data preparation stage, 1000 frames of known codewords are fed into the simplified successive-cancellation decoder at the same signal-to-noise ratio, and the likelihood-ratio vector α of each Rate-R node is recorded. Each α corresponds to a known leaf-node sequence S. A hard decision is applied to each α to obtain a vector β, and β is multiplied by the corresponding known generator matrix G to obtain the sequence Ŝ. The sequence Ŝ corresponding to α is compared with S: if the symbols of the two sequences are identical, the case is labeled 1; if the symbols of the two sequences are not exactly the same, the case is labeled 0. That is, the label corresponding to each feature vector α is 1 or 0, and each pair (α, 1) or (α, 0) constitutes one sample. From the recorded samples, 80% are selected at random as training samples, and the remaining 20% are used as test samples. In this embodiment, the code length N of the simplified successive-cancellation decoding algorithm is set to 256 and the code rate is set to 0.5.
In the stage of building and training the deep neural network, the hierarchy of the deep neural network comprises 1 input layer, 3 hidden layers, and 1 output layer. The input of the input layer is an α vector of length N, the numbers of nodes of the hidden layers are set to 128, 64, and 32 respectively, and the number of nodes of the output layer is set to 2; the DNN is built with full connections. In this embodiment, the activation function is set to the sigmoid function, and the output of every node in each hidden layer and in the output layer is computed. Based on supervised learning, the network is trained with the back-propagation algorithm; this embodiment takes the error sum of squares over all output-layer nodes of the network as the objective function. On this basis, the objective function is optimized with the stochastic gradient descent method. From the optimization, the error term of the output layer, the error term of each hidden layer, the update rule of the weights, and the update rule of the bias terms are obtained respectively. According to these training rules, the weights and bias terms of the network are corrected continuously until the training on all samples is complete.
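Under the same illustrative assumptions as the architecture sketch above (PyTorch; tensors train_alphas and train_labels built from the 80% training split), the training procedure described here could be sketched as follows; the epoch count, learning rate, and batch size are illustrative values not specified in the text:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, train_alphas, train_labels, epochs=50, lr=0.01, batch_size=64):
    # Objective: error sum of squares over the two output nodes, optimized with
    # stochastic gradient descent; backpropagation supplies the error terms of
    # the output layer and the three hidden layers.
    targets = torch.nn.functional.one_hot(train_labels.long(), num_classes=2).float()
    loader = DataLoader(TensorDataset(train_alphas.float(), targets),
                        batch_size=batch_size, shuffle=True)
    optimiser = torch.optim.SGD(model.parameters(), lr=lr)
    sse = torch.nn.MSELoss(reduction='sum')        # error sum of squares
    for _ in range(epochs):
        for x, t in loader:
            optimiser.zero_grad()
            loss = sse(model(x), t)
            loss.backward()                         # error back-propagation
            optimiser.step()                        # weight / bias update
    return model
```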
The flow chart of the DNA-SSC decoding algorithm is shown in Fig. 1. The DNA-SSC decoding algorithm mainly comprises three parts: preparing sample data, building and training the deep neural network, and decoding. In the decoding part, the SSC code tree is traversed first; the first node for which α has been computed is then examined to determine whether it is a Rate-R node. If it is a Rate-R node, its likelihood-ratio vector is input into the trained deep neural network: if the output is 1, a hard decision is applied directly to the likelihood-ratio vector to obtain the decoded bits of the leaf nodes corresponding to that Rate-R node; if the output is 0, the next node for which α is computed is examined. If the node is a Rate-0 node or a Rate-1 node, it is decoded in the ordinary SSC manner: the decoded bits of the leaf nodes corresponding to a Rate-0 node are fixed bits, i.e. all 0, and the decoded bits corresponding to a Rate-1 node are obtained by applying a hard decision directly to its α vector. Finally, it is checked whether decoding is fully complete; if so, decoding terminates, otherwise decoding continues. In this embodiment, the code length N of the DNA-SSC decoder is set to 256 and the code rate is set to 0.5.
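A hedged sketch of the per-node decision in this flow (illustrative names decode_node and node-type labels 'rate0'/'rate1'; the mapping of the two output nodes to the classes 0 and 1 and the LLR sign convention are assumptions, and the ordinary SSC recursion that computes the child α vectors is not shown):

```python
import numpy as np
import torch

def decode_node(alpha, node_type, model):
    # One node decision of the DNA-SSC flow in Fig. 1. Returns (bits, done):
    # done=False means the ordinary SSC recursion into the child nodes must continue.
    alpha = np.asarray(alpha, dtype=float)
    if node_type == 'rate0':                      # frozen subtree: all bits 0
        return np.zeros(len(alpha), dtype=int), True
    if node_type == 'rate1':                      # hard decision on the LLRs
        return (alpha < 0).astype(int), True
    # Rate-R node: ask the trained DNN whether a direct hard decision is safe.
    with torch.no_grad():
        out = model(torch.as_tensor(alpha, dtype=torch.float32))
    if int(out.argmax()) == 1:                    # DNN output "1": shortcut
        return (alpha < 0).astype(int), True
    return None, False                            # DNN output "0": keep descending
```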
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.

Claims (3)

1. A polar code decoding method based on a deep neural network, characterized in that the method uses a deep neural network to assist the simplified successive-cancellation decoding algorithm in fast decoding, the method comprising the following steps:
Step 1: prepare sample data, and pre-process the sample data with a normalization method;
Step 2: build a deep neural network, and train the deep neural network;
Step 3: in the polar code decoding stage, input the likelihood ratios corresponding to a Rate-R node into the deep neural network model to obtain 0 or 1, and execute the simplified successive-cancellation decoding algorithm according to that 0/1 state.
2. The polar code decoding method based on a deep neural network according to claim 1, characterized in that preparing sample data in step 1 means that 80% of all samples are selected at random as training samples and the remaining 20% are used as test samples.
3. The polar code decoding method based on a deep neural network according to claim 1, characterized in that building a deep neural network in step 2 means setting the number of input layers to 1, the number of hidden layers to 3, and the number of output layers to 1, with full connections between layers.
CN201810736700.5A 2018-07-06 2018-07-06 Polar code decoding method based on a deep neural network Pending CN108964672A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810736700.5A CN108964672A (en) 2018-07-06 2018-07-06 Polar code decoding method based on a deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810736700.5A CN108964672A (en) 2018-07-06 2018-07-06 Polar code decoding method based on a deep neural network

Publications (1)

Publication Number Publication Date
CN108964672A true CN108964672A (en) 2018-12-07

Family

ID=64482200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810736700.5A Pending CN108964672A (en) 2018-07-06 2018-07-06 Polar code decoding method based on a deep neural network

Country Status (1)

Country Link
CN (1) CN108964672A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109450459A (en) * 2019-01-16 2019-03-08 中国计量大学 Polar code FNSC decoder based on deep learning
CN111106839A (en) * 2019-12-19 2020-05-05 北京邮电大学 Polar code decoding method and device based on neural network
CN113438049A (en) * 2021-05-31 2021-09-24 杭州电子科技大学 Hamming code decoding method and system based on DNN model analysis

Legal Events

Code  Description
PB01  Publication
WD01  Invention patent application deemed withdrawn after publication

Application publication date: 20181207