CN114337927A - Decoding method, device, apparatus, storage medium, program product, and communication chip - Google Patents


Info

Publication number
CN114337927A
CN114337927A
Authority
CN
China
Prior art keywords
data
llr
buffer
address
effective
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111668318.3A
Other languages
Chinese (zh)
Inventor
方圣云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111668318.3A
Publication of CN114337927A
Legal status: Pending

Landscapes

  • Error Detection And Correction (AREA)
  • Detection And Prevention Of Errors In Transmission (AREA)

Abstract

The embodiment of the application discloses a decoding method, apparatus, device, storage medium, program product, and communication chip, belonging to the technical field of communications. The method comprises the following steps: obtaining log-likelihood ratio (LLR) data of a received wireless signal; sequentially storing the effective data in the LLR data into a de-interleaver buffer; obtaining 3 × K effective data in the de-interleaver buffer and the addresses of the 3 × K effective data after de-interleaving, according to the number of invalid data in the LLR data, the code block size K of the LLR data, and the parallelism P of a Turbo decoder; and inputting the 3 × K effective data and their de-interleaved addresses into the Turbo decoder for decoding. The scheme avoids the situation in which the Turbo decoder must rearrange the received data to meet its own operation requirements, thereby improving the decoding efficiency of the Turbo decoder.

Description

Decoding method, device, apparatus, storage medium, program product, and communication chip
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a decoding method, apparatus, device, storage medium, program product, and communication chip.
Background
During data transmission of the communication device, the device at the data transmitting end needs to perform encoding and modulating operations on data, and the device at the data receiving end needs to perform demodulating and decoding operations on data.
In the related art, data is received without empty bits and HARQ soft combining is completed by an LLR combining block, after which the data is written into a circular buffer. NULL bits are inserted at the corresponding positions to obtain a complete R × 32 matrix, the matrix is divided into 4 groups, de-interleaving is completed by writing columns and reading rows according to the group index values and the address offset correction values after NULL removal, and finally the NULL-free de-interleaving result is sent to a Turbo decoder. The Turbo decoder still needs to process the received de-interleaving result before it can perform the data decoding operation.
Disclosure of Invention
The embodiment of the application provides a decoding method, a decoding device, decoding equipment, a storage medium, a program product and a communication chip, and can improve the decoding efficiency of a Turbo decoder. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a decoding method, where the method is performed by a receiving end device, and the method includes:
obtaining log-likelihood ratio (LLR) data of a received wireless signal;
sequentially storing effective data in the LLR data to a buffer area of a de-interleaver;
obtaining 3 × K effective data in the buffer of the de-interleaver and the addresses of the 3 × K effective data after de-interleaving, according to the number of invalid data in the LLR data, the code block size K of the LLR data, and the parallelism P of a Turbo decoder;
and inputting the 3 × K effective data and the addresses of the 3 × K effective data after de-interleaving into the Turbo decoder for decoding.
In another aspect, an embodiment of the present application provides a decoding apparatus, where the apparatus is used in a receiving end device, and the apparatus includes:
the data acquisition module is used for acquiring log-likelihood ratio (LLR) data of the received wireless signals;
the data storage module is used for sequentially storing effective data in the LLR data to a de-interleaver buffer area;
an address obtaining module, configured to obtain, according to the number of invalid data in the LLR data, a code block size K of the LLR data, and a parallelism P of a Turbo decoder, 3 × K valid data in the deinterleaver buffer, and addresses of the 3 × K valid data after deinterleaving;
and the decoding module is used for inputting the 3 × K effective data and the addresses of the 3 × K effective data after de-interleaving into the Turbo decoder for decoding.
In another aspect, an embodiment of the present application provides a computer device, which includes a processor and a memory; the memory has stored therein at least one computer instruction that is loaded and executed by the processor to implement the decoding method as described in the above aspect.
In another aspect, an embodiment of the present application provides a computer-readable storage medium, in which at least one computer instruction is stored, and the computer instruction is loaded and executed by a processor to implement the decoding method according to the above aspect.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the terminal executes the decoding method provided in the various alternative implementations of the above-mentioned aspects.
In another aspect, an embodiment of the present application provides a communication chip, where the communication chip is used in a receiving end device, and the communication chip is used for executing to implement the decoding method according to the above aspect.
The technical scheme provided by the embodiment of the application has the beneficial effects that at least:
the receiving terminal equipment stores the effective data in the obtained LLR data into a de-interleaver buffer area, and de-interleaves the effective data in the de-interleaver buffer area and obtains the address of the de-interleaved effective data according to the number of the ineffective data in the obtained LLR data, the code block size of the LLR data and the parallelism of the Turbo decoder, so that the Turbo decoder can directly decode the LLR data according to the obtained effective data and the address of the effective data after de-interleaving. The condition that the Turbo decoder needs to rearrange the sequence to adapt to the operation requirement of the Turbo decoder after receiving the data is avoided, and the decoding efficiency of the Turbo decoder is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a block diagram illustrating a communication system in accordance with an exemplary embodiment;
FIG. 2 is a flow chart illustrating a decoding method according to an exemplary embodiment;
FIG. 3 is a flow chart illustrating a decoding method according to another exemplary embodiment;
FIG. 4(a) is a schematic diagram of a matrix before interleaving according to the embodiment shown in FIG. 3;
FIG. 4(b) is a schematic diagram of a matrix before interleaving according to the embodiment shown in FIG. 3;
FIG. 5(a) is a schematic diagram of an interleaved matrix according to the embodiment shown in FIG. 3;
FIG. 5(b) is a schematic diagram of an interleaved matrix according to the embodiment shown in FIG. 3;
FIG. 6 is a schematic diagram of a circular buffer according to the embodiment shown in FIG. 3;
FIG. 7(a) is a schematic matrix diagram of encoded valid data according to the embodiment shown in FIG. 3;
FIG. 7(b) is a schematic matrix diagram of the encoded valid data according to the embodiment shown in FIG. 3;
FIG. 8 is a schematic illustration of a CBM store according to the embodiment shown in FIG. 3;
FIG. 9 is a schematic illustration of another CBM storage according to the embodiment shown in FIG. 3;
FIG. 10 is a schematic diagram of bit positions before and after data interleaving according to the embodiment shown in FIG. 3;
FIG. 11 is a schematic diagram of address offset correction values for a three-way bitstream according to the embodiment of FIG. 3;
FIG. 12 is a timing diagram for reading data from a CBM and writing data to a Turbo decoder according to the embodiment shown in FIG. 3;
FIG. 13 is a block diagram of one type of LLR data generation and processing according to the embodiment shown in FIG. 3;
fig. 14 is a block diagram of a decoding apparatus according to an exemplary embodiment of the present application;
fig. 15 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment of the present application.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Reference herein to "a plurality" means two or more. "And/or" describes the association relationship of associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
Fig. 1 shows a block diagram of a communication system provided by an exemplary embodiment of the present application, which may include: access network 12, terminal device 14, and core network 16.
Several access network devices 120 are included in access network 12. The access network device 120 may be a base station, which is a device deployed in an access network to provide a wireless communication function for terminals. Base stations may include various forms of macro base stations, micro base stations, relay stations, access points, and the like. In systems using different radio access technologies, the name of a device having the base station function may differ; for example, in an LTE (Long Term Evolution) system it is called an eNodeB (Evolved NodeB), or eNB for short, and in a 5G NR (New Radio) system it is called a gNodeB, or gNB for short. The description "base station" may change as communication technology evolves. For convenience, in this embodiment the above apparatuses providing a wireless communication function for the terminal device 14 are collectively referred to as network devices.
The Terminal devices 14 may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem with wireless communication capabilities, as well as various forms of user equipment, Mobile Stations (MSs), terminals (Terminal devices), and so forth. For convenience of description, the above-mentioned devices are collectively referred to as a terminal. Access network device 120 and terminal device 14 communicate with each other over some air interface technology, such as a Uu interface.
The core network 16, as the top layer of the mobile communication network, completes the routing and switching of data and finally establishes a channel between the terminal user and the Internet. After the channel is established, the terminal user can access a data center on the Internet, that is, a server of a service provider, so as to use the services provided by the service provider.
The technical scheme of the embodiment of the application can be applied to various communication systems, for example: a Global System for Mobile Communication (GSM) system, a Code Division Multiple Access (CDMA) system, a Wideband Code Division Multiple Access (WCDMA) system, a General Packet Radio Service (GPRS) system, a Long Term Evolution (LTE) system, an LTE Frequency Division Duplex (FDD) system, an LTE Time Division Duplex (TDD) system, an Advanced Long Term Evolution (LTE-A) system, a New Radio (NR) system, an evolution system of the NR system, an LTE-based access to unlicensed spectrum (LTE-U) system, an NR-based access to unlicensed spectrum (NR-U) system, a Universal Mobile Telecommunication System (UMTS), a Worldwide Interoperability for Microwave Access (WiMAX) system, a Wireless Local Area Network (WLAN), Wireless Fidelity (WiFi), a 6th Generation (6G) system, a next-generation communication system, or other communication systems.
Generally, conventional communication systems support a limited number of connections and are easy to implement. However, with the development of communication technology, mobile communication systems will support not only conventional communication but also, for example, Device-to-Device (D2D) communication, Machine-to-Machine (M2M) communication, Machine Type Communication (MTC), Vehicle-to-Vehicle (V2V) communication, and Vehicle-to-Everything (V2X) communication. The embodiments of the present application can also be applied to these communication systems.
Fig. 2 shows a flowchart of a decoding method provided in an exemplary embodiment of the present application. The decoding method may be performed by a receiving end device, for example, the receiving end device may be the terminal device 14 or the access network device 120 in the communication system shown in fig. 1. The decoding method comprises the following steps:
step 201, log-likelihood ratio LLR data of the received wireless signal is acquired.
In the embodiment of the present application, after receiving a wireless signal, a receiving end device may obtain log-likelihood ratio LLR data of the wireless signal.
After receiving the wireless signal, the receiving end device may input the wireless signal into a Demapper module (Demapper), process the wireless signal through the Demapper, and output the wireless signal by the Demapper to obtain Log-Likelihood Ratio (LLR) data of the wireless signal.
The LLR data output by the Demapper can be subjected to code block segmentation and descrambling to obtain LLR data in which the length of one code block (CB) is E.
Step 202, storing the effective data in the LLR data to the buffer of the de-interleaver in sequence.
In this embodiment, the LLR data acquired by the receiving end device includes valid data and may also include invalid data. The receiving end device sequentially stores the valid data in the LLR data in the de-interleaver buffer, where it waits for de-interleaving.
Step 203, according to the number of invalid data in the LLR data, the code block size K of the LLR data, and the parallelism P of the Turbo decoder, 3 × K valid data in the deinterleaver buffer and the addresses of the 3 × K valid data after deinterleaving are obtained.
In this embodiment, the receiving end device obtains, according to the number of invalid data included in the LLR data, the code block size K of the LLR data, and the parallelism P possessed by the Turbo decoder, 3 × K valid data in the de-interleaving buffer and addresses corresponding to the 3 × K valid data after de-interleaving.
And step 204, inputting the 3 × K effective data and the address of the 3 × K effective data after de-interleaving into a Turbo decoder for decoding.
In this embodiment, the receiving end device may input the acquired 3 × K valid data and the addresses corresponding to the 3 × K valid data after deinterleaving to a Turbo decoder for decoding, and the Turbo decoder decodes the 3 × K valid data.
To sum up, in the embodiment of the present application, the receiving end device stores the valid data in the obtained LLR data into the deinterleaver buffer, and performs deinterleaving processing on the valid data in the deinterleaver buffer and obtains the address of the deinterleaved valid data according to the number of invalid data in the obtained LLR data, the code block size of the LLR data, and the parallelism of the Turbo decoder, so that the Turbo decoder can directly decode the LLR data according to the obtained valid data and the address of the valid data after deinterleaving. The condition that the Turbo decoder needs to rearrange the sequence to adapt to the operation requirement of the Turbo decoder after receiving the data is avoided, and the decoding efficiency of the Turbo decoder is improved.
In addition, according to the scheme shown in the embodiment of the application, the data stored in the de-interleaver buffer is the valid data in the LLR data and does not include invalid data, which improves the utilization of the de-interleaver buffer and saves the storage resources of the system.
Fig. 3 shows a flowchart of a decoding method provided by an exemplary embodiment of the present application. The decoding method may be performed by a receiving end device, for example, the receiving end device may be the terminal device 14 or the access network device 120 in the communication system shown in fig. 1. The decoding method comprises the following steps:
step 301, log-likelihood ratio LLR data of the received wireless signal is acquired.
In this embodiment, the receiving end device may obtain LLR data corresponding to the received wireless signal through the Demapper.
Step 302, select each valid data from the LLR data.
In this embodiment, the receiving end device may obtain valid data from LLR data including valid data and invalid data.
In a possible implementation manner, for a PDSCH (Physical Downlink Shared Channel), the receiving end device receives E pieces of encoded valid data sent by the transmitting end device and descrambles them, thereby obtaining the E valid LLR data.
The receiving end device may filter the received valid data of the coded LLR data through the demapping module and the descrambling module to obtain each valid data of the LLR data.
Step 303, storing each valid data in the circular buffer area in sequence.
In this embodiment, the receiving end device may sequentially store each acquired valid data in the circular buffer.
In a possible implementation manner, when the number E of each valid data is greater than the upper limit Ncb of the amount of cache data in the circular buffer, circularly writing each valid data into each cache address in the circular buffer according to the sequence of each valid data; the cache address supports the storage of a specified number of valid data; for the repeated writing address in each cache address, carrying out soft combination on effective data in the repeated writing address; the repeated write address is a cache address to which valid data is written again after a specified number of valid data have been written.
The circular buffer may be implemented as a Code Block Memory (CBM). The upper limit of the amount of buffered data in the circular buffer may be Ncb, and the number of buffer addresses in the circular buffer is N = ⌈Ncb/32⌉, where each of the N buffer addresses can store a specified number (32) of LLR data in one cycle. Therefore, when the number E of valid data is greater than the buffer capacity Ncb, the first Ncb LLR data are written into the circular buffer in the current cycle; for the remaining E − Ncb LLR data, the data in the circular buffer must be read out again starting from address 0 (a wrap-around operation), soft-combined with the incoming data, and the combined result, i.e., the 32 LLR data obtained after soft combining, rewritten to the corresponding buffer address, until the E LLR data have been combined into Ncb data.
The receiving end device can use a single whole Random Access Memory (RAM) to complete the wrap-around operation without inserting Dummy Bits, so that only a small number of registers are needed. This avoids the excessive area caused by too many RAM blocks, reduces the area, and allows the wrap-around operation to be completed quickly.
For example, 32 LLR data may be stored in each buffer address in one cycle; in this case, if the number of buffer addresses in the circular buffer is N, 32N LLR data may be stored in the circular buffer in one cycle. If the last buffer address used for data storage in a cycle holds fewer than 32 LLR data, the remaining part can be zero-padded.
In another possible implementation manner, when the number E of each valid data is not greater than the upper limit Ncb of the buffer data amount in the circular buffer, each valid data is sequentially written into each buffer address in the circular buffer in the order of each valid data.
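The two write paths above can be sketched as follows (a minimal hypothetical helper, not from the patent): LLR values are written sequentially, and any value that lands on an already-written address after wrap-around is soft-combined (added) with the stored value.

```python
def write_to_circular_buffer(llr, ncb):
    """Write E LLR values into a circular buffer of capacity ncb.

    Values that wrap past the end of the buffer are soft-combined
    (added) with the values already stored at that address.
    """
    buf = [0.0] * ncb
    for i, v in enumerate(llr):
        buf[i % ncb] += v  # wrap-around write = soft combining
    return buf
```

When E ≤ Ncb each address is written at most once, so the loop reduces to a plain sequential write.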
In one possible implementation, the transmitting device encodes the transmitted data through a Turbo encoder before transmitting the wireless signal.
The Turbo encoder may adopt a parallel concatenated convolutional coding structure and may be composed of two 8-state constituent encoders and one Turbo code internal interleaver.
In one possible implementation, the data is processed by the Turbo encoder to output three information bit streams: a systematic bit stream, a first parity stream, and a second parity stream. The systematic bit stream, the first parity stream, and the second parity stream are input into their respective sub-block interleavers, which output the corresponding interleaved bit streams. A Bit Collection module then outputs a total bit stream whose length is the total length of the three information bit streams, and the total bit stream is buffered in an interleaver buffer, which supports the storage of invalid data. The data stored in the interleaver buffer is input into a Bit Selection and Clipping module, which selects data of a specified length from the interleaver buffer with the invalid data removed and outputs a data sequence of length E.
The interleaver buffer may be a circular buffer whose upper limit of buffered data is Ncb. Valid data is selected starting from a specified address of the circular buffer by the bit selection and clipping module; if the amount of data to be selected is greater than Ncb, the data in the circular buffer is read circularly. Finally, the bit selection and clipping module outputs a data sequence of the required length.
Illustratively, the Turbo encoder outputs three information bit streams, comprising a systematic bit stream d_0^(0), d_1^(0), …, d_{D-1}^(0), a first parity stream d_0^(1), d_1^(1), …, d_{D-1}^(1), and a second parity stream d_0^(2), d_1^(2), …, d_{D-1}^(2).
The length of each of the three information bit streams is D = K + 4, where D is the number of bits input to the interleaver and K is the code block size (CB Size).
The systematic bit stream d_k^(0) is passed through the systematic bit interleaver, which outputs the interleaved systematic bit stream v_k^(0); the first parity stream d_k^(1) is passed through the first parity bit interleaver, which outputs the interleaved first parity stream v_k^(1); and the second parity stream d_k^(2) is passed through the second parity bit interleaver, which outputs the interleaved second parity stream v_k^(2).
Wherein the Sub-Block Interleaver (Sub-Block Interleaver) includes a systematic bit Interleaver, a first parity bit Interleaver, and a second parity bit Interleaver, and the systematic bit Interleaver and the first parity bit Interleaver may be the same, and the systematic bit Interleaver and the second parity bit Interleaver may be different.
The interleaved bit stream is output by the sub-block interleaver corresponding to each bit stream. If the length of each interleaved bit stream is K_Π, the bit collection module outputs a total bit stream of length K_w = 3K_Π, which may be denoted w_0, w_1, w_2, …, w_{K_w-1}. The total bit stream is then buffered in a circular buffer.
The rule for merging the interleaved systematic bit stream v_k^(0), first parity stream v_k^(1), and second parity stream v_k^(2) into the total bit stream may satisfy the following:
w_k = v_k^(0), for k = 0, 1, …, K_Π − 1
w_{K_Π+2k} = v_k^(1), for k = 0, 1, …, K_Π − 1
w_{K_Π+2k+1} = v_k^(2), for k = 0, 1, …, K_Π − 1
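The merging rule for bit collection can be sketched as follows (a hypothetical helper illustrating the index pattern: systematic bits first, then the two parity streams alternating):

```python
def bit_collect(v0, v1, v2):
    """Merge the interleaved systematic stream v0 and parity streams
    v1, v2 into the total bit stream w of length Kw = 3 * K_pi."""
    kpi = len(v0)
    w = [None] * (3 * kpi)
    for k in range(kpi):
        w[k] = v0[k]                 # w_k = v_k^(0)
        w[kpi + 2 * k] = v1[k]       # w_{K_pi + 2k} = v_k^(1)
        w[kpi + 2 * k + 1] = v2[k]   # w_{K_pi + 2k + 1} = v_k^(2)
    return w
```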
If the upper limit of the amount of data held in the circular buffer is Ncb, the Ncb entries of the circular buffer in the transmitting device may contain Null bit data, that is, invalid data, while the Ncb entries of the circular buffer in the receiving device contain no invalid data. Bit selection and clipping starts from a given address k_0 in the circular buffer and selects E data consisting of non-Null bits; in this case, if E ≤ Ncb, the data in the circular buffer is read directly, and if E > Ncb, the data in the circular buffer is read circularly. Finally a data sequence e_0, e_1, e_2, …, e_{E−1} of length E is output.
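The selection loop just described can be sketched as follows (a hypothetical helper; `null_mask` marks the Null-bit positions, which exist only in the transmitter's buffer):

```python
def bit_select(buf, null_mask, k0, e):
    """Select e non-Null values from a circular buffer, starting at
    address k0 and wrapping around as often as needed (E > Ncb case)."""
    ncb = len(buf)
    out, j = [], 0
    while len(out) < e:
        idx = (k0 + j) % ncb
        if not null_mask[idx]:   # skip Null (invalid) bits
            out.append(buf[idx])
        j += 1
    return out
```

When e exceeds the number of non-Null entries, the same entries are emitted again on the next pass, which is the circular-reading (repetition) behaviour described above.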
In a possible implementation, during sub-block interleaving, an R × 32 matrix may be constructed for the K + 4 data corresponding to the systematic bit stream, where the length of each information bit stream is D = K + 4 and D ≤ R × 32; that is, if R × 32 > D, Nd = 32R − D blank bits (Dummy Bits) need to be filled at the beginning of the first row of the R × 32 matrix.
Since the code block size K is a multiple of 8 and the stream length becomes K + 4 after Turbo encoding, combining this with D ≤ 32R shows that the number of blank bits Nd can take the values 4, 12, 20, or 28.
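The claim that Nd takes only the values 4, 12, 20, or 28 can be checked numerically (a small verification sketch, not part of the patent):

```python
def dummy_bits(k):
    """Number of Dummy Bits Nd for code block size k (a multiple of 8)."""
    d = k + 4        # stream length D after Turbo encoding
    r = -(-d // 32)  # R = ceil(D / 32), number of matrix rows
    return 32 * r - d  # Nd = 32R - D
```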
Fig. 4 is a schematic diagram of a matrix before interleaving according to an embodiment of the present application. Taking K = 304 as an example, from R × 32 > D and D = K + 4 as described above, R = 10 and Nd = 12 can be obtained, and the matrices corresponding to the systematic bit stream, the first parity stream, and the second parity stream before interleaving can be as shown in fig. 4(a) and fig. 4(b). The matrix with row number R = 10 shown in fig. 4(a) is the matrix corresponding to the systematic bit stream, in which the 0th to 11th columns of the 0th row are Dummy Bits. The matrix with row number R = 20 shown in fig. 4(b) is the matrix corresponding to the two parity bit streams interleaved with each other: the even-numbered rows hold the first parity bits, the odd-numbered rows hold the second parity bits, and the 0th to 11th columns of the 0th and 1st rows are Dummy Bits. The inter-column permutation pattern of the sub-block interleaver may be as shown in Table 1 below; the three bit streams may each be interleaved by their respective sub-block interleavers according to the rule shown in Table 1, outputting interleaved bit-stream matrices.
Number of columns: 32. Inter-column permutation pattern <P(0), P(1), …, P(31)>:
<0, 16, 8, 24, 4, 20, 12, 28, 2, 18, 10, 26, 6, 22, 14, 30, 1, 17, 9, 25, 5, 21, 13, 29, 3, 19, 11, 27, 7, 23, 15, 31>
TABLE 1
For example, fig. 5 is a schematic diagram of an interleaved matrix according to an embodiment of the present application. The matrices corresponding to the interleaved systematic bit stream, first parity stream, and second parity stream may be as shown in fig. 5(a) and fig. 5(b). The matrix with row number R = 10 shown in fig. 5(a) is the matrix corresponding to the interleaved systematic bit stream, in which the 0th, 2nd, 4th, 8th, 10th, 12th, 16th, 18th, 20th, 24th, 26th, and 28th columns of the 0th row are Dummy Bits; the columns of the matrix are rearranged according to the correspondence shown in Table 1. For example, the 1st column of the pre-interleaving systematic bit stream matrix shown in fig. 4(a) is 154-162, and according to Table 1 this column corresponds to the 16th column of the interleaved systematic bit stream matrix; that is, the 16th column of the matrix shown in fig. 5(a) is 154-162. The matrix with row number R = 20 shown in fig. 5(b) is the matrix corresponding to the interleaved parity bit streams: the even-numbered rows hold the interleaved first parity bits and the odd-numbered rows hold the interleaved second parity bits. The 0th, 2nd, 4th, 8th, 10th, 12th, 16th, 18th, 20th, 24th, 26th, and 28th columns of the 0th row, and the 31st, 2nd, 4th, 8th, 10th, 12th, 16th, 18th, 20th, 24th, 28th, and 19th columns of the 1st row, are Dummy Bits; the columns of the matrix are adjusted according to the correspondence shown in Table 1.
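The dummy-bit padding and inter-column permutation described above can be sketched as follows, using the permutation pattern of Table 1 (a hypothetical illustration of the systematic/first-parity sub-block interleaver; the second parity stream uses a different rule):

```python
# Inter-column permutation pattern (Table 1)
PERM = [0, 16, 8, 24, 4, 20, 12, 28, 2, 18, 10, 26, 6, 22, 14, 30,
        1, 17, 9, 25, 5, 21, 13, 29, 3, 19, 11, 27, 7, 23, 15, 31]

def subblock_interleave(d):
    """Fill an R x 32 matrix row by row (Dummy Bits = None at the start
    of row 0), permute the columns per PERM, and read out column-wise."""
    c = 32
    r = -(-len(d) // c)                  # R = ceil(D / 32)
    padded = [None] * (r * c - len(d)) + list(d)
    out = []
    for col in PERM:                     # output column j = input column PERM[j]
        for row in range(r):
            out.append(padded[row * c + col])
    return out
```

With K = 304 (D = 308, R = 10, Nd = 12), input column 1 lands at output column 16, matching the example in the text.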
In one possible implementation, the address k_0 at which data reading begins in the circular buffer is calculated by the following formula:
k_0 = R_subblock × (2 × ⌈Ncb / (8 × R_subblock)⌉ × rv_idx + 2)
Illustratively, taking E > Ncb as an example, fig. 6 is a schematic diagram of a circular buffer according to an embodiment of the present application. When the matrix data shown in fig. 5 is stored in the circular buffer column by column, from the 0th column to the 31st column, E Dummy-Bit-free data can be selected starting from k_0 in the circular buffer; that is, when rv_idx = 0, the E data may be selected starting from the 2nd column of the matrix, and the selected E encoded valid data are obtained.
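The starting-address computation for a given redundancy version can be sketched as follows (assuming the standard LTE rate-matching form k0 = R·(2·⌈Ncb/(8R)⌉·rv_idx + 2); this exact formula is an assumption, since the patent's own formula is referenced only indirectly here):

```python
import math

def start_address(ncb, r_subblock, rv_idx):
    """Starting address k0 in the circular buffer for redundancy
    version rv_idx (standard LTE rate-matching formula, assumed)."""
    return r_subblock * (2 * math.ceil(ncb / (8 * r_subblock)) * rv_idx + 2)
```

With R = 10 and column-wise storage (10 entries per column), k0 = 20 for rv_idx = 0 points to the start of the 2nd column, consistent with the fig. 6 example.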
For example, fig. 7 is a schematic matrix diagram of encoded valid data according to an embodiment of the present application. As shown in fig. 7, E data excluding the Dummy Bits are selected starting from the 2nd column of the matrix, and the selected E valid data are obtained. Fig. 7(a) shows the valid data matrix corresponding to the systematic bit stream, and fig. 7(b) shows the valid data matrix corresponding to the parity stream together with the valid data 701 corresponding to the repeated systematic bit stream data.
Illustratively, fig. 8 is a schematic diagram of CBM storage when E > Ncb according to an embodiment of the present application. As shown in fig. 8, if the obtained E LLR data cannot be stored in the CBM at one time, the first Ncb LLR data are written into the CBM sequentially, and then the remaining E − Ncb LLR data are obtained. For the remaining data, a wrap-around operation is performed: the stored LLR data are read out of the CBM starting from address 0, soft-combined with the remaining LLR data, and the combined results are written back to the corresponding cache addresses until the E LLR data have been combined into Ncb data.
Illustratively, fig. 9 is a schematic diagram of CBM storage when E ≤ Ncb according to an embodiment of the present application. As shown in fig. 9, if the acquired E LLR data can be stored in the CBM at one time, the E LLR data are written into the CBM sequentially at one time.
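The two storage cases of figs. 8 and 9 can be sketched together: writes that wrap around the end of the buffer soft-combine (add) with the LLRs already stored at the same address. This is a simplified model with one LLR per address; the hardware version works per 32-LLR cache address:

```python
def write_to_cbm(llr: list[int], ncb: int) -> list[int]:
    """Write E LLR values into a circular buffer of capacity Ncb.

    If E <= Ncb, this is a plain sequential write; if E > Ncb, the
    wrapped values are soft-combined with those already stored.
    """
    cbm = [0] * ncb
    for i, v in enumerate(llr):
        cbm[i % ncb] += v  # first pass writes, wrapped passes soft-combine
    return cbm
```

For example, writing 10 LLRs into an 8-entry buffer soft-combines the last two values into addresses 0 and 1, while 5 LLRs into the same buffer is an ordinary sequential write.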
In a possible implementation manner, when the matrix of the encoded valid data is stored in the CBM, the matrix corresponding to the obtained valid data is converted into a matrix with 32 columns: each column of data is written into the 32-column matrix in the order of the valid data matrix corresponding to the systematic bit stream, the valid data matrix corresponding to the parity check stream, and the valid data matrix corresponding to the repeated systematic bit stream, and the resulting matrix is stored in the CBM.
And step 304, sequentially storing the data in the circular buffer to a deinterleaver buffer from the initial address of the circular buffer.
In this embodiment, the receiving end device sequentially stores N data stored in the circular buffer into the deinterleaver buffer from the start address in the circular buffer.
The de-interleaver buffer is used for temporarily storing the data to be de-interleaved and reorganizing the order of the temporarily stored data, so that after the reordered data are input into the Turbo decoder, the data read out can be used directly.
Because the data required by the Turbo decoder are not in the row-order read-out sequence produced after interleaving, the data are reorganized into a certain arrangement order before being input into the buffer of the Turbo decoder, ensuring that the data read out by the Turbo decoder can be used directly.
In addition, the start address of the circular buffer may be k0; starting from address k0 of the circular buffer, the data in the circular buffer are sequentially stored into the de-interleaver buffer.
Step 305, according to the number of invalid data in the LLR data, the code block size K of the LLR data and the parallelism P of the Turbo decoder, obtaining the bit positions of the 3 × K valid data after deinterleaving.
In this embodiment of the present application, after N data are sequentially stored in the deinterleaver buffer, the bit positions of the 3 × K valid data after deinterleaving may be obtained according to the number of invalid data corresponding to the data, the code block size K of the LLR data, and the parallelism P of the Turbo decoder.
In a possible implementation manner, the number Nd of invalid data is the number of blank bits existing before the blank bit screening is performed on the data, and the parallelism P of the Turbo decoder is the number of data that each phase of the Turbo decoder supports simultaneous processing.
The receiving end equipment can obtain the data sequence required by the Turbo decoder according to the code block size K value and the parallelism value of the Turbo decoder, read the LLR data from the de-interleaver buffer area and complete de-interleaving.
In a possible implementation manner, the P positions whose LLR data each phase of the Turbo decoder processes are determined according to the parallelism P of the Turbo decoder, and the column numbers and row numbers of the LLR data at these P positions can be determined according to the number Nd of invalid data and the code block size K of the LLR data, so as to obtain the bit positions of the 3 × K valid data after deinterleaving.
Determining the LLR data at the P positions processed by each phase of the Turbo decoder according to the parallelism P may include the following cases:
when P is 16, the Turbo decoder needs LLR data of 16 positions including 0, K/16, K/8, 3K/16, …, 7K/8 and 15K/16 at phase 0;
when P is 8, the Turbo decoder needs LLR data of 8 positions including 0, K/8, K/4, 3K/8, K/2, 5K/8, 3K/4 and 7K/8 at phase 0;
when P is 4, the Turbo decoder needs LLR data of 4 positions including 0, K/4, K/2 and 3K/4 at phase 0;
when P is 2, the Turbo decoder needs LLR data of 2 positions including 0 and K/2 at phase 0;
when P is 1, the Turbo decoder needs LLR data of 1 position, namely 0, at phase 0.
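The position lists above all follow the pattern i·K/P for i = 0 … P−1, which can be sketched as:

```python
def phase0_positions(k: int, p: int) -> list[int]:
    """Bit positions whose LLRs the Turbo decoder consumes at phase 0
    for parallelism P: i*K/P for i = 0..P-1 (sketch of the cases above)."""
    return [i * k // p for i in range(p)]
```

For example, with K = 320 and P = 4 this yields 0, K/4 = 80, K/2 = 160 and 3K/4 = 240, matching the P = 4 case.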
In addition, the column numbers and row numbers of the LLR data at the P positions processed by each phase may be determined according to the number Nd of invalid data and the code block size K of the LLR data, in the following cases:
when P is 16, the columns in which the 16 data are located are Nd, K/16 % 32 + Nd, K/8 % 32 + Nd, …, 15K/16 % 32 + Nd;
when P is 8, the columns in which the 8 data are located are Nd, K/8 % 32 + Nd, …, 7K/8 % 32 + Nd;
when P is 4, the columns in which the 4 data are located are Nd, K/4 % 32 + Nd, K/2 % 32 + Nd and 3K/4 % 32 + Nd, respectively;
when P is 2, the columns in which the 2 data are located are Nd and K/2 % 32 + Nd;
when P is 1, the column in which the 1 datum is located is Nd.
For example, if P is 16, the Turbo decoder needs the 16 LLR data at positions 0, K/16, K/8, 3K/16, …, 15K/16 at phase 0, and since both the systematic bit stream and the parity bit stream are 32 × R matrices, the column numbers and row numbers of the 16 LLR data at these 16 positions after deinterleaving can be calculated from the Nd value and the K value, the column numbers being Nd, K/16 % 32 + Nd, K/8 % 32 + Nd, …, 15K/16 % 32 + Nd, respectively.
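The column computation in the cases above can be condensed into one expression; a minimal sketch, assuming the `% 32 + Nd` rule exactly as listed:

```python
def phase0_columns(k: int, p: int, nd: int) -> list[int]:
    """Column of the 32-column interleaved matrix holding each of the P
    phase-0 LLRs: (i*K/P) % 32 + Nd, per the case list above."""
    return [(i * k // p) % 32 + nd for i in range(p)]
```

With K = 320 and Nd = 4, the phase-0 position spacing K/16 = 20, so consecutive columns step by 20 modulo 32.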
And step 306, acquiring addresses of the 3 × K effective data in the deinterleaver buffer according to the bit positions of the 3 × K effective data after deinterleaving and the interleaving rule of the LLR data.
In this embodiment, the receiving end device may calculate, according to the number of columns where the obtained 3 × K valid data are located after de-interleaving and the interleaving rule of the LLR data, bit positions corresponding to the 3 × K valid data before de-interleaving, so as to obtain addresses of the 3 × K valid data in the de-interleaving buffer.
In a possible implementation manner, the bit positions of the 3 × K effective data before de-interleaving are acquired according to the bit positions of the 3 × K effective data after de-interleaving and the interleaving rule; the input indexes of the 3 × K effective data are acquired according to the bit positions of the 3 × K effective data before de-interleaving; the bit positions of the invalid data in the LLR data after interleaving are acquired according to the interleaving rule; the address offset correction values of the 3 × K effective data are acquired according to the bit positions of the invalid data after interleaving; and the addresses of the 3 × K effective data in the de-interleaver buffer are acquired according to the input indexes of the 3 × K effective data and the address offset correction values of the 3 × K effective data.
The receiving end device may obtain addresses of the 3 × K valid data in the deinterleaver buffer according to the input index of the 3 × K valid data, the address offset correction value of the 3 × K valid data, the size of the starting point, and the bit width of the deinterleaver buffer.
The interleaving rule may be the correspondence between the column numbers before interleaving and the column numbers after interleaving, and may be, for example, the rule shown in table 1.
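Table 1 itself is not reproduced in this text, but the column correspondence it describes is consistent with the standard LTE sub-block inter-column permutation (3GPP TS 36.212, Table 5.1.4-1); the following sketch assumes that pattern:

```python
# Inter-column permutation of the 32-column sub-block interleaver
# (assumed to match the "table 1" rule of the patent): output column j
# carries input column PERM[j].
PERM = [0, 16, 8, 24, 4, 20, 12, 28, 2, 18, 10, 26, 6, 22, 14, 30,
        1, 17, 9, 25, 5, 21, 13, 29, 3, 19, 11, 27, 7, 23, 15, 31]

def post_column(pre_col: int) -> int:
    """Column index after interleaving of a given pre-interleave column."""
    return PERM.index(pre_col)
```

Under this pattern, column 1 before interleaving lands in column 16 after interleaving (matching the example around fig. 5(a)), and with Nd = 4 the dummy columns 0-3 land in columns 0, 16, 8 and 24 (matching the Nd = 4 example around fig. 10).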
In a possible implementation manner, a data format output after data in the de-interleaving buffer area is de-interleaved is determined according to a K value corresponding to the data and a parallelism P value corresponding to the Turbo decoder.
The data format output by the deinterleaver is shown in table 2 below.
TABLE 2
For example, the Turbo decoder has four phases in total. When 40 ≤ K ≤ 384, P is 1 and each phase of the Turbo decoder processes one datum at a time, so the four phases can process 4 data simultaneously; in this case the four phases process the 0th/1st/2nd/3rd LLR data respectively. Since the bit width for inputting data into the Turbo decoder can be 16 LLRs, the data format output to the Turbo decoder can be as follows:
{3,2,1,0,null,null,null,null,null,null,null,null,null,null,null,null}
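The splicing into a 16-LLR-wide word with null-filled low lanes can be sketched as follows, assuming (as in the format above) that the used lanes sit in the high positions; the function name is illustrative:

```python
NULL = None  # placeholder for the zero-filled low LLR lanes

def splice_output(llrs: list[int], width: int = 16) -> list:
    """Pack the phase LLRs into one 16-LLR-wide word for the Turbo
    decoder, newest datum in the highest lane and unused low lanes
    null-filled, mirroring the {3,2,1,0,null,...} format above."""
    return list(reversed(llrs)) + [NULL] * (width - len(llrs))
```

For P = 1 with four phases, splicing data 0-3 reproduces exactly the {3,2,1,0,null,…,null} word shown above.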
For example, if P is 16, the Turbo decoder needs the 16 LLR data at positions 0, K/16, K/8, 3K/16, …, 15K/16 at phase 0. The systematic bit stream is a 32 × R matrix, and the first parity bit stream and the second parity bit stream are interlaced to obtain a parity bit stream that is a 32 × 2R matrix; the column numbers and row numbers of the 16 data at these 16 positions after deinterleaving can be calculated from the Nd and K values, the column numbers being Nd, K/16 % 32 + Nd, K/8 % 32 + Nd, …, 15K/16 % 32 + Nd, respectively. Then, the column numbers of the 16 data before deinterleaving are obtained according to the interleaving rule shown in table 1, and the row numbers R of the 16 data before deinterleaving are calculated; the calculated column and row numbers are the bit positions corresponding to the 16 data. According to these bit positions before deinterleaving, the input index input_idx of a column-by-column write starting from column 0 and including the Dummy Bits before deinterleaving can be calculated. Fig. 10 is a schematic diagram of bit positions before and after data interleaving when Nd is 4/12/20/28 according to an embodiment of the present application. Since the value of Nd can only be 4/12/20/28, the positions of the interleaved Dummy Bits can be obtained according to the interleaving rule; as shown in fig. 10, the data marked in black boxes correspond to the Dummy Bit positions. For example, when Nd is 4, the Dummy Bits in the systematic bit stream before interleaving are located in the 0th to 3rd columns, and according to the interleaving rule the interleaved Dummy Bits are located in the 0th, 8th, 16th and 24th columns.
In a possible implementation manner, the address offset correction value of each column can be obtained by acquiring the Bit position of Dummy Bit before interleaving and the Bit position of Dummy Bit after interleaving.
For example, fig. 11 is a schematic diagram of address offset correction values of a three-way bitstream according to an embodiment of the present application. The address correction is performed on the three bit streams according to the respective address offset correction values shown in fig. 11, and 3 × K addresses of valid data in the deinterleaver buffer are obtained.
In a possible implementation manner, the addresses of the 16 data in the deinterleaver buffer are calculated according to the calculated input index input_idx, the corresponding address offset correction value, the size of k0, and the bit width of the deinterleaver buffer; if an address does not exist, the data at that address are zero-padded. Then 32 LLR data are sequentially taken out of each of the 16 addresses and buffered in a register; the data at the 16 positions 0, K/16, K/8, 3K/16, …, 15K/16 among the 32 LLR data are taken out in turn and spliced into valid data with a width of 16 LLRs; the data and addresses of the 16 LLR data are then sequentially input into the Turbo decoder, and finally 12 tail bits are input into the Turbo decoder through the register.
The bit width of the de-interleaver buffer can be set according to the requirements of Turbo coding and decoding. Because the de-interleaver buffer is a hardware interface, the maximum required width of 16 LLRs is used as the bit width; if the K value and the P value are small, the high bits are used for decoding and the low bits can be filled with zeros.
And step 307, extracting the 3 × K valid data from the deinterleaver buffer according to the addresses of the 3 × K valid data in the deinterleaver buffer.
In the embodiment of the present application, the receiving end device extracts 3 × K valid data from the deinterleaver buffer according to addresses of the acquired 3 × K valid data in the deinterleaver buffer.
Fig. 12 is a timing diagram of reading data from the deinterleaver buffer and writing data to the Turbo decoder according to an embodiment of the present disclosure.
Illustratively, taking the systematic bit stream as an example, with K = 304, k0 = 19 and E = 108 > 3(K + 4): if P is 1, Nd is 12 and R is 10, the data at the 0th position needs to be output. The 0th datum after deinterleaving is thus at column Nd = 12, and according to the interleaving rule the 0th datum before deinterleaving is at the 6th column and the 0th row; that is, the bit position of the 0th datum before deinterleaving can be obtained from its bit position after deinterleaving and the interleaving rule, and the input index input_idx of the 0th datum can then be calculated by the following formula:
6 × R − k0 − address offset value + R(x)
where 6 × R is the offset of the 6th column of the systematic bit stream (each column holding R data), k0 is the address starting point, and R(x) is the row number in which the x-th datum is located.
By the above formula, the input index input_idx of the 0th datum is calculated as 6 × R (the offset of the 6th column) − k0 (the starting point) − (3 − 1) (the address offset value) + R(0) (the row number of the 0th datum) = 60 − 19 − 2 + 0 = 39. Then, according to the write order, the address in the buffer can be calculated from 39 as 1_6, where 1_6 denotes the 6th LLR datum of the 1st address. By analogy, the positions of the 1st/2nd/3rd data can be calculated.
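As a worked check, the formula above reproduces the input_idx of 39 for the example values (a minimal sketch; the function name is illustrative):

```python
def input_idx(col_pre: int, r: int, k0: int, addr_offset: int, row: int) -> int:
    """Write-order index of a datum in the deinterleaver buffer:
    col_pre * R - k0 - address offset value + R(x), per the formula above."""
    return col_pre * r - k0 - addr_offset + row
```

With column 6, R = 10, k0 = 19, address offset 3 − 1 = 2 and row 0, this yields 60 − 19 − 2 + 0 = 39.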
And step 308, inputting the 3 × K effective data and the address of the 3 × K effective data after de-interleaving into a Turbo decoder for decoding.
In this embodiment, the receiving end device sequentially inputs the acquired 3 × K valid data and the deinterleaved addresses of the 3 × K valid data to the Turbo decoder for decoding.
For example, when P is 1, the 4 data to be transmitted to the Turbo decoder for decoding are read from the deinterleaver buffer, and after 4 cycles the data begin to be spliced according to the output format, which may be determined as shown in table 2 above. The deinterleaver buffer transmits the acquired data, together with the corresponding addresses, to the Turbo decoder; the data may be read from the deinterleaver buffer and written to the Turbo decoder according to the timing chart shown in fig. 12.
Fig. 13 is a structural diagram of LLR data generation and processing according to an embodiment of the present application. As shown in fig. 13, the LLR data output by the receiving demapping module are received and then subjected to code block segmentation and descrambling to obtain LLR data with a length of E, where the size of the circular buffer is Ncb. If E > Ncb, a wrap-around operation needs to be performed by the CBM; if E ≤ Ncb, no wrap-around operation is needed, and the LLR data with Qm parallelism output from the descrambling module are converted into LLR data with 32 parallelism through a Register Matrix and written directly into the CBM. If E > Ncb, the first Ncb LLR data are written into the CBM sequentially, and the data after the Ncb LLRs are soft-combined with the LLR data at the same positions read from the CBM and then written back into the CBM. In this case, the data in the CBM do not contain Null Bits (blank bits). If the transmitted data are newly transmitted, no HARQ (Hybrid Automatic Repeat reQuest) soft-combining operation is needed, and the current data in the CBM can be written directly into the deinterleaver buffer; if the transmitted data are retransmitted, the current data are written into the deinterleaver buffer after the HARQ soft-combining operation. The sub-block deinterleaving function then needs to be completed on the data in the deinterleaver buffer: deinterleaving is performed according to the K value (code block size) and the P value (parallelism), and LLR data that can be used directly by the Turbo decoder are output.
The de-interleaving module can complete the data rearrangement that adapts to the operation requirements of the Turbo decoder while de-interleaving, so the output LLR data can be used directly by the Turbo decoder without reordering, thereby significantly improving the decoding efficiency of the Turbo decoder.
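The write path into the deinterleaver buffer described above (direct write for new transmissions, HARQ soft combining for retransmissions) can be sketched as follows; the function and argument names are illustrative assumptions, not the patent's own interface:

```python
def to_deinterleaver(cbm_data: list[int], harq_buffer: list[int],
                     is_retransmission: bool) -> list[int]:
    """Route CBM data into the deinterleaver buffer: new transmissions
    are written directly, retransmissions are HARQ soft-combined
    (element-wise LLR addition) with the buffered copy first."""
    if not is_retransmission:
        return list(cbm_data)  # direct write, no HARQ combining
    return [a + b for a, b in zip(cbm_data, harq_buffer)]  # soft combine
```

Soft combining is modeled here as element-wise LLR addition, the usual form of HARQ chase combining.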
To sum up, in the embodiment of the present application, the receiving end device stores the valid data in the obtained LLR data into the deinterleaver buffer, and performs deinterleaving processing on the valid data in the deinterleaver buffer and obtains the address of the deinterleaved valid data according to the number of invalid data in the obtained LLR data, the code block size of the LLR data, and the parallelism of the Turbo decoder, so that the Turbo decoder can directly decode the LLR data according to the obtained valid data and the address of the valid data after deinterleaving. The condition that the Turbo decoder needs to rearrange the sequence to adapt to the operation requirement of the Turbo decoder after receiving the data is avoided, and the decoding efficiency of the Turbo decoder is improved.
Fig. 14 shows a block diagram of a decoding apparatus according to an exemplary embodiment of the present application. The decoding device is used in a receiving end device, and comprises:
a data obtaining module 1410, configured to obtain log-likelihood ratio LLR data of a received wireless signal;
a data storage module 1420, configured to store valid data in the LLR data in a deinterleaver buffer sequentially;
an address obtaining module 1430, configured to obtain, according to the number of invalid data in the LLR data, the code block size K of the LLR data, and the parallelism P of the Turbo decoder, 3 × K valid data in the deinterleaver buffer, and addresses of the 3 × K valid data after deinterleaving;
a decoding module 1440, configured to input the 3 × K valid data and the deinterleaved addresses of the 3 × K valid data into the Turbo decoder for decoding.
In one possible implementation manner, the data storage module 1420 includes:
the screening submodule is used for screening each effective data from the LLR data;
the first storage submodule is used for sequentially storing each effective data to a circular buffer area;
and the second storage submodule is used for sequentially storing the data in the circular buffer area to the de-interleaver buffer area from the initial address of the circular buffer area.
In one possible implementation manner, the first storage submodule includes:
a data writing unit, configured to, when the number E of each piece of valid data is greater than the upper limit Ncb of the amount of cached data in the circular buffer, circularly write each piece of valid data into each cache address in the circular buffer in the order of each piece of valid data; the cache address supports storing a specified number of the valid data;
a merging unit, configured to perform soft merging on the valid data in the repeated write address for the repeated write address in each cache address; the repeated writing address is the cache address where the valid data is written again after a specified number of the valid data have been written.
In one possible implementation manner, the first storage submodule includes:
and a storage unit, configured to, when the number E of each piece of valid data is not greater than the upper limit Ncb of the amount of cached data in the circular buffer, sequentially write each piece of valid data into each cache address in the circular buffer in the order of each piece of valid data.
In a possible implementation manner, the address obtaining module 1430 includes:
a position obtaining submodule, configured to obtain bit positions of the 3 × K valid data after de-interleaving according to the number of the invalid data, a code block size K of the LLR data, and a parallelism P of a Turbo decoder;
an address obtaining submodule, configured to obtain addresses of the 3 × K valid data in the deinterleaver buffer according to bit positions of the 3 × K valid data after deinterleaving and an interleaving rule of the LLR data;
and the data extraction submodule is used for extracting the 3 x K effective data from the de-interleaver buffer according to the addresses of the 3 x K effective data in the de-interleaver buffer.
In a possible implementation manner, the address obtaining sub-module includes:
a first position obtaining unit, configured to obtain, according to the bit positions of the 3 × K valid data after deinterleaving and the interleaving rule, the bit positions of the 3 × K valid data before deinterleaving;
an index obtaining unit, configured to obtain input indexes of the 3 × K valid data according to bit positions of the 3 × K valid data before deinterleaving;
a second position obtaining unit, configured to obtain, according to the interleaving rule, a bit position of the invalid data in the LLR data after interleaving;
a correction value obtaining unit, configured to obtain address offset correction values of the 3 × K pieces of valid data according to bit positions of the invalid data after interleaving;
and an address obtaining unit, configured to obtain addresses of the 3 × K valid data in the deinterleaver buffer according to the input index of the 3 × K valid data and the address offset correction value of the 3 × K valid data.
In a possible implementation manner, the address obtaining unit is configured to,
and acquiring the addresses of the 3 x K effective data in the deinterleaver buffer according to the input indexes of the 3 x K effective data, the address offset correction values of the 3 x K effective data, the size of a starting point and the bit width of the deinterleaver buffer.
To sum up, in the embodiment of the present application, the receiving end device stores the valid data in the obtained LLR data into the deinterleaver buffer, and performs deinterleaving processing on the valid data in the deinterleaver buffer and obtains the address of the deinterleaved valid data according to the number of invalid data in the obtained LLR data, the code block size of the LLR data, and the parallelism of the Turbo decoder, so that the Turbo decoder can directly decode the LLR data according to the obtained valid data and the address of the valid data after deinterleaving. The condition that the Turbo decoder needs to rearrange the sequence to adapt to the operation requirement of the Turbo decoder after receiving the data is avoided, and the decoding efficiency of the Turbo decoder is improved.
Fig. 15 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment of the present application. The terminal may be an electronic device installed and running with an application, such as a smart phone, a tablet computer, an electronic book, a portable personal computer, and the like. A terminal in the present application may include one or more of the following components: a processor 1510, a memory 1520, and a screen 1530.
Processor 1510 may include one or more processing cores. The processor 1510 is connected to various parts within the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1520 and calling the data stored in the memory 1520. Alternatively, the processor 1510 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 1510 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content that the screen 1530 needs to display; and the modem is used to handle wireless communications. It is to be appreciated that the modem can also be implemented as a separate communication chip without being integrated into the processor 1510.
The Memory 1520 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1520 includes a non-transitory computer-readable medium. The memory 1520 may be used to store an instruction, a program, code, a set of codes, or a set of instructions. The memory 1520 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above-described method embodiments, and the like, and the operating system may be an Android (Android) system (including a system based on Android system depth development), an IOS system developed by apple inc (including a system based on IOS system depth development), or other systems. The storage data area may also store data created by the terminal in use, such as a phonebook, audio-video data, chat log data, and the like.
The screen 1530 may be a capacitive touch display screen for receiving a touch operation of a user thereon or nearby using any suitable object such as a finger, a stylus, or the like, and displaying a user interface of each application. The touch display screen is generally provided at a front panel of the terminal. The touch display screen may be designed as a full-face screen, a curved screen, or a profiled screen. The touch display screen can also be designed to be a combination of a full-face screen and a curved-face screen, and a combination of a special-shaped screen and a curved-face screen, which is not limited in the embodiment of the present application.
In addition, those skilled in the art will appreciate that the terminal configurations illustrated in the above figures do not constitute limitations on the terminal, as the terminal may include more or fewer components than those illustrated, some components may be combined, or a different arrangement of components may be used. For example, the terminal further includes a radio frequency circuit, a shooting component, a sensor, an audio circuit, a Wireless Fidelity (WiFi) component, a power supply, a Bluetooth component, and other components, which are not described herein again.
The embodiment of the present application further provides a computer-readable storage medium, in which at least one computer instruction is stored, and the at least one computer instruction is loaded and executed by a processor to implement the decoding method according to the above embodiments.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the terminal executes the decoding method provided in the various alternative implementations of the above-mentioned aspects.
The embodiment of the present application further provides a communication chip, where the communication chip is used in a receiving end device, and the communication chip is configured to implement the decoding method according to the above embodiments.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable storage medium. Computer-readable storage media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A decoding method, wherein the method is performed by a receiving end device, and wherein the method comprises:
obtaining log-likelihood ratio (LLR) data of a received wireless signal;
sequentially storing effective data in the LLR data to a buffer area of a de-interleaver;
obtaining 3 × K effective data in a buffer of a de-interleaver and the address of the 3 × K effective data after de-interleaving according to the number of invalid data in the LLR data, the code block size K of the LLR data and the parallelism P of a Turbo decoder;
and inputting the 3 × K effective data and the address of the 3 × K effective data after de-interleaving into the Turbo decoder for decoding.
2. The method of claim 1, wherein the sequentially storing valid ones of the LLR data into a deinterleaver buffer comprises:
screening each effective data from the LLR data;
sequentially storing each effective data into a circular buffer area;
and sequentially storing the data in the circular buffer to the de-interleaver buffer from the initial address of the circular buffer.
3. The method of claim 2, wherein the sequentially storing the valid data into a circular buffer comprises:
when the number E of valid data is greater than the upper limit Ncb of the amount of data buffered in the circular buffer, cyclically writing the valid data into the buffer addresses of the circular buffer in the order of the valid data, each buffer address supporting storage of a specified number of the valid data;
and, for a repeatedly written address among the buffer addresses, soft-combining the valid data at the repeatedly written address, the repeatedly written address being a buffer address into which valid data is written again after the specified number of valid data have already been written.
4. The method of claim 2, wherein the sequentially storing the valid data into a circular buffer comprises:
when the number E of valid data is not greater than the upper limit Ncb of the amount of data buffered in the circular buffer, sequentially writing the valid data into the buffer addresses of the circular buffer in the order of the valid data.
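Claims 3 and 4 describe the two circular-buffer write cases: plain sequential write when E ≤ Ncb, and wrap-around with soft combining when E > Ncb. A minimal sketch, assuming one LLR per buffer address and soft combining implemented as LLR addition (both assumptions, not taken from the patent text):

```python
def write_circular_buffer(valid_llrs, ncb):
    """Write E valid LLRs into a circular buffer of capacity Ncb.

    If E <= Ncb, values are written sequentially (claim 4).
    If E > Ncb, the write wraps around, and any address written more
    than once soft-combines (sums) the incoming LLR with the value
    already stored there (claim 3).
    """
    buf = [0] * ncb
    written = [False] * ncb
    for i, llr in enumerate(valid_llrs):
        addr = i % ncb              # circular addressing
        if written[addr]:
            buf[addr] += llr        # repeated address: soft combine
        else:
            buf[addr] = llr
            written[addr] = True
    return buf
```

For example, with Ncb = 4 and inputs [1, 2, 3, 4, 10], the fifth value wraps back to address 0 and soft-combines with the first, giving [11, 2, 3, 4].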
5. The method of any of claims 1 to 4, wherein the obtaining 3 × K valid data in the deinterleaver buffer and the addresses of the 3 × K valid data after deinterleaving, according to the number of invalid data in the LLR data, the code block size K of the LLR data, and the parallelism P of a Turbo decoder, comprises:
obtaining the bit positions of the 3 × K valid data after deinterleaving according to the number of invalid data, the code block size K of the LLR data, and the parallelism P of the Turbo decoder;
obtaining the addresses of the 3 × K valid data in the deinterleaver buffer according to the bit positions of the 3 × K valid data after deinterleaving and the interleaving rule of the LLR data;
and extracting the 3 × K valid data from the deinterleaver buffer according to the addresses of the 3 × K valid data in the deinterleaver buffer.
6. The method of claim 5, wherein the obtaining the addresses of the 3 × K valid data in the deinterleaver buffer according to the bit positions of the 3 × K valid data after deinterleaving and the interleaving rule of the LLR data comprises:
obtaining the bit positions of the 3 × K valid data before deinterleaving according to the bit positions of the 3 × K valid data after deinterleaving and the interleaving rule;
obtaining the input indexes of the 3 × K valid data according to the bit positions of the 3 × K valid data before deinterleaving;
obtaining the bit positions of the invalid data in the LLR data after interleaving according to the interleaving rule;
obtaining address offset correction values for the 3 × K valid data according to the bit positions of the invalid data after interleaving;
and obtaining the addresses of the 3 × K valid data in the deinterleaver buffer according to the input indexes of the 3 × K valid data and the address offset correction values of the 3 × K valid data.
7. The method of claim 6, wherein the obtaining the addresses of the 3 × K valid data in the deinterleaver buffer according to the input indexes of the 3 × K valid data and the address offset correction values of the 3 × K valid data comprises:
obtaining the addresses of the 3 × K valid data in the deinterleaver buffer according to the input indexes of the 3 × K valid data, the address offset correction values of the 3 × K valid data, the size of a starting point, and the bit width of the deinterleaver buffer.
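Claims 5 to 7 compute, for each bit position after deinterleaving, where its LLR actually sits in the deinterleaver buffer: map the position back through the interleaver to get the input index, then apply an offset correction equal to the number of invalid positions preceding that index, since invalid data was never stored in the buffer. A hypothetical sketch with a generic permutation standing in for the interleaving rule (the function and argument names are illustrative, and the starting-point and bit-width terms of claim 7 are omitted):

```python
import bisect

def buffer_addresses(deint_positions, interleave_map, invalid_interleaved):
    """Hypothetical address computation in the spirit of claims 5-7.

    deint_positions    : desired bit positions after deinterleaving
    interleave_map     : permutation; interleave_map[j] is the
                         interleaved position of pre-interleaving bit j
    invalid_interleaved: sorted interleaved positions of invalid data
                         (these were skipped when filling the buffer)
    """
    addrs = []
    for pos in deint_positions:
        # Input index: where this bit sat in the received (interleaved) stream.
        idx = interleave_map[pos]
        # Offset correction: invalid positions before idx were never
        # stored, so the buffer address is shifted down by their count.
        offset = bisect.bisect_left(invalid_interleaved, idx)
        addrs.append(idx - offset)
    return addrs
```

For example, with a 5-position stream, `interleave_map = [2, 0, 4, 3]` and one invalid entry at interleaved position 1, the valid LLRs occupy buffer addresses 0 through 3, and the deinterleaved positions 0 through 3 resolve to addresses [1, 0, 3, 2].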
8. A decoding apparatus, wherein the apparatus is used in a receiving end device, the apparatus comprising:
a data acquisition module, configured to obtain log-likelihood ratio (LLR) data of a received wireless signal;
a data storage module, configured to sequentially store valid data in the LLR data into a deinterleaver buffer;
an address acquisition module, configured to obtain, according to the number of invalid data in the LLR data, the code block size K of the LLR data, and the parallelism P of a Turbo decoder, 3 × K valid data in the deinterleaver buffer and the addresses of the 3 × K valid data after deinterleaving;
and a decoding module, configured to input the 3 × K valid data and the addresses of the 3 × K valid data after deinterleaving into the Turbo decoder for decoding.
9. A computer device, wherein the computer device comprises a processor and a memory; the memory has stored therein at least one computer instruction that is loaded and executed by the processor to implement the decoding method of any of claims 1 to 7.
10. A computer-readable storage medium having stored therein at least one computer instruction, which is loaded and executed by a processor to implement the decoding method according to any one of claims 1 to 7.
11. A computer program product, wherein the computer program product comprises computer instructions which, when executed by a processor of a terminal, cause the terminal to perform the decoding method according to any one of claims 1 to 7.
12. A communication chip, wherein the communication chip is used in a receiving end device, and the communication chip is used for executing the decoding method according to any one of claims 1 to 7.
CN202111668318.3A 2021-12-31 2021-12-31 Decoding method, device, apparatus, storage medium, program product, and communication chip Pending CN114337927A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111668318.3A CN114337927A (en) 2021-12-31 2021-12-31 Decoding method, device, apparatus, storage medium, program product, and communication chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111668318.3A CN114337927A (en) 2021-12-31 2021-12-31 Decoding method, device, apparatus, storage medium, program product, and communication chip

Publications (1)

Publication Number Publication Date
CN114337927A true CN114337927A (en) 2022-04-12

Family

ID=81020699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111668318.3A Pending CN114337927A (en) 2021-12-31 2021-12-31 Decoding method, device, apparatus, storage medium, program product, and communication chip

Country Status (1)

Country Link
CN (1) CN114337927A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060005880A (en) * 2004-07-14 2006-01-18 삼성전자주식회사 Method and apparatus for de-interleaving of high speed downlink packet in a mobile communication system
CN101124731A (en) * 2004-12-22 2008-02-13 高通股份有限公司 Pruned bit-reversal interleaver
CN102111162A (en) * 2009-12-28 2011-06-29 重庆重邮信科通信技术有限公司 Turbo component decoding method, component decoder, branch calculator and Turbo decoder
US20110280133A1 (en) * 2010-05-11 2011-11-17 Qualcomm Incorporated Scalable scheduler architecture for channel decoding
CN102412850A (en) * 2010-09-25 2012-04-11 中兴通讯股份有限公司 Turbo code parallel interleaver and parallel interleaving method thereof
CN102594507A (en) * 2012-02-24 2012-07-18 缪蔚 High-speed parallel Turbo decoding method and system in software radio system
CN102611460A (en) * 2011-02-15 2012-07-25 香港应用科技研究院有限公司 Memory efficient implementation of LDPC decoder
CN102792624A (en) * 2009-12-10 2012-11-21 德克萨斯仪器股份有限公司 Method for high-efficient implementation of de-rate matching including HARQ combining for LTE
US20170187434A1 (en) * 2014-09-24 2017-06-29 Hitachi Kokusai Electric Inc. Wireless transmission system and reception device
CN110601792A (en) * 2019-07-31 2019-12-20 苏州门海微电子科技有限公司 Front-end coding and decoding system and method for broadband power carrier communication

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈绪斌; 曹嘉麟; 陈; 曾晓洋: "VLSI Design of a High-Performance Parallel Turbo Decoder" (高性能并行Turbo译码器的VLSI设计), Computer Engineering (计算机工程), no. 23 *

Similar Documents

Publication Publication Date Title
JP7017627B2 (en) Redundant version design solution in communication systems
US8433987B2 (en) Method for high-efficient implementation of de-rate matching including HARQ combining for LTE
JP7471357B2 (en) Encoding method, decoding method, and device
US8614977B2 (en) Method and apparatus for parallel de-interleaving of LTE interleaved data
US20130051354A1 (en) De-rate matching method and device for downlink traffic channel in long term evolution
AU2018282443B2 (en) Device
CN102405599B (en) Extension TURBO interleaver for parallel turbo decoding
US20080014871A1 (en) System and method for interleaving data in a wireless transmitter
US20100197302A1 (en) Techniques for extracting a control channel from a received signal in a wireless communication system
US11323209B2 (en) Modem chips and receivers for performing hybrid automatic repeat request processing
JP2023508449A (en) Decoding method, device, network device and recording medium
WO2019129014A1 (en) Communication method, device and system
WO2011095115A1 (en) Method and device for de-interleaving
CN112202530B (en) Channel blind detection method and device, communication device and storage medium
CN109391347B (en) Coding and decoding method and device
JP3920220B2 (en) Communication device
CN114337927A (en) Decoding method, device, apparatus, storage medium, program product, and communication chip
CN112202531B (en) Channel blind detection method and device, communication device and storage medium
US7352723B2 (en) Method of forming a coded composite transport channel for downlink transmissions
US8977913B2 (en) Method, device and baseband chip for receiving service data in a communication system
WO2012149741A1 (en) De-interleaving method and device for rate de-matching
CN114208375A (en) PDCCH detection method and device
CN104935399A (en) Interleaving mapping method of LDPC codeword and de-interleave de-mapping method
CN112740582B (en) Storage method and polar code receiving equipment
US8792375B2 (en) Data rate matching method and apparatus for use in mobile communication systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination