WO2016076677A1 - Method and apparatus for entropy encoding or entropy decoding a video signal for massively parallel processing - Google Patents
Method and apparatus for entropy encoding or entropy decoding a video signal for massively parallel processing
- Publication number
- WO2016076677A1 (PCT/KR2015/012297)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- segment
- decoding
- present
- symbols
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/436—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
Definitions
- the present invention relates to a method and apparatus for entropy encoding or entropy decoding a video signal for massively parallel processing. Specifically, the present invention relates to a technique for adapting a segment entropy coding method to a hardware dataflow architecture.
- Entropy coding is a process for representing data with the smallest average number of bits and is a basic technique in all media compression methods.
- Arithmetic coding is an entropy coding method that has higher computational complexity than other coding techniques such as prefix coding (e.g., Huffman, Rice, and exp-Golomb codes), but yields better compression results; it is used in the latest video coding standards H.265/HEVC and VP9 and is expected to be used in future standards.
- the definition of a segmented bit format is required, where each data segment corresponds to an independent entropy coding FSM, so that the segments can be processed in parallel at the end of the encoding process or at the beginning of decoding.
- This approach is suitable for software execution on a typical processor, because it avoids thread management and communication overheads while the remaining overhead is not critical and processing elements are only temporarily idle. However, it may not be optimal for dedicated hardware, which must keep its processing elements busy.
- the present invention seeks to provide a method of applying a segment entropy coding approach to a hardware dataflow architecture.
- the present invention also provides an entropy coding method for optimally compressing a video signal with high data throughput.
- the present invention seeks to show how the machine states can be untangled by changing the way compressed data is organized using a form of data segmentation.
- the present invention provides greater flexibility of data processing and enables larger scale parallel processing through segment entropy coding in which compressed data is stored in divided data segments.
- the present invention seeks to design a system that takes into account cache operations to reduce the memory cost of buffers.
- the present invention provides a method of applying a segment entropy coding approach to a hardware dataflow architecture, thereby enabling optimal compression of a video signal with high data throughput.
- the present invention seeks to change how compressed data is organized using the form of data segmentation.
- the present invention provides a parallel processing method that eliminates non-essential data dependencies.
- the present invention provides a segment entropy coding method in which compressed data is stored in divided data segments.
- the present invention also provides a method of designing a system that takes into account cache operation to reduce the memory cost of buffers.
- the present invention also provides a way to solve the excessive data buffer problem by replacing large buffers in the encoder and decoder with much smaller data queues.
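The buffer-to-queue replacement described above can be illustrated with a small bounded queue between two processing stages. This is a sketch under stated assumptions, not the patented design; the stage names are hypothetical:

```python
from queue import Queue
from threading import Thread

def pipeline(symbols, qsize=8):
    """Pass symbols from a producer stage (e.g., an entropy decoder) to a
    consumer stage (e.g., the main decoder) through a small bounded queue
    instead of a whole-segment buffer."""
    q, out = Queue(maxsize=qsize), []

    def producer():
        for s in symbols:
            q.put(s)       # blocks when the small queue is full
        q.put(None)        # sentinel: end of segment

    t = Thread(target=producer)
    t.start()
    while (s := q.get()) is not None:  # consumer drains the queue
        out.append(s)
    t.join()
    return out
```

The bounded `maxsize` is what keeps memory cost low: the producer stalls rather than filling a large buffer.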
- the present invention also provides a more general system for segment entropy coding by changing the blocks representing data models (MOD) to blocks representing FSMs.
- MOD data models
- the present invention provides a method of applying a segment entropy coding approach to a hardware dataflow architecture, thereby enabling optimal compression of a video signal with high data throughput.
- the encoder can run the main encoding process in parallel.
- the decoder does not have to complete all entropy decoding before the main decoding process begins.
- the memory cost of the buffers can be reduced by designing a system that takes into account cache behavior.
- FIG. 1 is a schematic block diagram of an encoder in which encoding of a video signal is performed as an embodiment to which the present invention is applied.
- FIG. 2 is a schematic block diagram of a decoder in which decoding of a video signal is performed as an embodiment to which the present invention is applied.
- FIGS. 3 and 4 are schematic block diagrams of an entropy encoding unit and an entropy decoding unit that process video signals according to embodiments of the present invention.
- FIG. 5 is a diagram for describing a sequential operation of an FSM according to an embodiment to which the present invention is applied.
- FIG. 6 is a schematic block diagram of an adaptive coding system for independent and identically distributed source symbols (hereinafter, referred to as 'i.i.d.') as an embodiment to which the present invention is applied.
- FIG. 7 is a schematic block diagram of a coding system that performs compression by applying a different data model to each data type according to an embodiment to which the present invention is applied.
- FIGS. 8 and 9 illustrate schematic block diagrams of a coding system for describing a data flow for binary arithmetic coding, as an embodiment to which the present invention is applied.
- FIGS. 10 and 11 are exemplary embodiments to which the present invention is applied and show schematic block diagrams of a multiplexed coding system performing the same compression as in FIGS. 8 and 9.
- FIGS. 12 and 13 are schematic block diagrams of an entropy coding system for performing parallel processing according to embodiments of the present invention.
- FIG. 14 illustrates a structure of a compressed data array divided into independent segments according to an embodiment to which the present invention is applied.
- FIGS. 15 and 16 are schematic block diagrams of coding systems for performing parallel processing using data queues as an embodiment to which the present invention is applied.
- FIG. 17 is a flowchart illustrating a method of encoding a video signal using parallel processing as an embodiment to which the present invention is applied.
- FIG. 18 is a flowchart illustrating a method of decoding a video signal based on parallel processing as an embodiment to which the present invention is applied.
- a method of encoding a video signal using parallel processing, comprising: generating data symbols to be encoded; encoding a first data symbol within the base segment; copying a second data symbol in another segment into a buffer; and encoding the second data symbol in the other segment in parallel.
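The encoding steps above can be sketched as a serial base-segment pass plus a thread pool for the other segments. This is an illustrative sketch, not the patented implementation; `encode_segment` is a hypothetical stand-in for a real entropy-coding FSM:

```python
from concurrent.futures import ThreadPoolExecutor

def encode_segment(symbols):
    # Hypothetical stand-in for an entropy-coding FSM; a real segment
    # encoder would produce compressed bits, not raw bytes.
    return bytes(symbols)

def encode_parallel(base_symbols, other_segments, workers=4):
    """Encode the base segment serially; encode the remaining segments
    (whose symbols were copied into per-segment buffers) in parallel."""
    base_bits = encode_segment(base_symbols)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # A load-balancing scheduler could assign segments to threads here.
        other_bits = list(pool.map(encode_segment, other_segments))
    return base_bits, other_bits
```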
- the load-balancing algorithm is used to assign segments to different threads.
- the method further includes generating a compressed data header when all data is compressed, wherein the compressed data header includes an index of each segment.
- the method further comprises concatenating all the segments to create one data array.
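A possible layout for a compressed data header followed by the concatenated segments can be sketched as follows. The 4-byte little-endian fields are an assumption for illustration, not part of the specification:

```python
import struct

def pack_segments(segments):
    """Concatenate data segments into one array, preceded by a header
    that records the byte offset (index) of every segment."""
    header = struct.pack("<I", len(segments))   # segment count
    offset = 0
    for seg in segments:
        header += struct.pack("<I", offset)     # index of this segment
        offset += len(seg)
    return header + b"".join(segments)          # one data array
```

A decoder can read the offsets from the header and hand each segment to a different thread.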
- a non-blocking process is started when it is allocated to a data segment.
- the present invention provides a method of decoding a video signal based on parallel processing, comprising: reading each segment indicated in a compressed data header; decoding all segments except the base segment using a plurality of threads; storing the data symbols in a buffer; and performing media signal processing based on the data symbols.
- the data symbols in the base segment are decoded.
- the data symbols are read from the corresponding buffer.
- the present invention provides an apparatus for encoding a video signal using parallel processing, comprising: a data symbol generator which generates data symbols to be encoded; a serial processor which encodes a first data symbol in the base segment; and a parallel processor which copies a second data symbol in another segment into a buffer and encodes the second data symbol in the other segment in parallel.
- the present invention further includes a data bit array generator for generating a compressed data header when all data is compressed, wherein the compressed data header includes an index of each segment.
- the data bit array generator concatenates all the segments to generate one data array.
- the present invention provides an apparatus for decoding a video signal based on parallel processing, comprising: a parallel processor which reads each segment indicated in the compressed data header, decodes all segments except the base segment using a plurality of threads, and stores the data symbols in a buffer; and a decoding unit which decodes the video signal based on the data symbols.
- terms used in the present specification are general terms selected to describe the invention, and may be replaced with other terms having similar meanings for more appropriate interpretation.
- signals, data, samples, pictures, frames, blocks, etc. may be appropriately replaced and interpreted in each coding process.
- partitioning, decomposition, splitting, and division may be appropriately replaced and interpreted in each coding process.
- FIG. 1 is a schematic block diagram of an encoder in which encoding of a video signal is performed as an embodiment to which the present invention is applied.
- the encoder 100 may include an image splitter 110, a transformer 120, a quantizer 130, an inverse quantizer 140, an inverse transformer 150, a filter 160, a decoded picture buffer (DPB) 170, an inter predictor 180, an intra predictor 185, and an entropy encoder 190.
- DPB decoded picture buffer
- the image divider 110 may divide an input image (or a picture or a frame) input to the encoder 100 into one or more processing units.
- the processing unit may be a Coding Tree Unit (CTU), a Coding Unit (CU), a Prediction Unit (PU), or a Transform Unit (TU).
- CTU Coding Tree Unit
- CU Coding Unit
- PU Prediction Unit
- TU Transform Unit
- the terms are only used for the convenience of description of the present invention, the present invention is not limited to the definition of the terms.
- the term coding unit is used as a unit used in encoding or decoding a video signal, but the present invention is not limited thereto and may be appropriately interpreted according to the present invention.
- the encoder 100 may generate a residual signal by subtracting the prediction signal output from the inter predictor 180 or the intra predictor 185 from the input image signal, and the generated residual signal is transmitted to the transformer 120.
- the transformer 120 may generate a transform coefficient by applying a transform technique to the residual signal.
- the transform process may be applied to square pixel blocks of equal size, or to blocks of variable size that are not square.
- the quantization unit 130 may quantize the transform coefficients and transmit them to the entropy encoding unit 190, and the entropy encoding unit 190 may entropy-code the quantized signal and output a bitstream. Details of the entropy encoding unit 190 will be described together with the entropy decoding unit of FIG. 2.
- the quantized signal output from the quantization unit 130 may be used to generate a prediction signal.
- the residual signal may be reconstructed from the quantized signal by applying inverse quantization and inverse transformation through the inverse quantization unit 140 and the inverse transform unit 150 in the loop.
- a reconstructed signal may be generated by adding the reconstructed residual signal to a prediction signal output from the inter predictor 180 or the intra predictor 185.
- the filtering unit 160 applies filtering to the reconstructed signal, and outputs it to a playback device or transmits it to the decoded picture buffer 170.
- the filtered signal transmitted to the decoded picture buffer 170 may be used as the reference picture in the inter predictor 180. As such, by using the filtered picture as a reference picture in the inter prediction mode, not only image quality but also encoding efficiency may be improved.
- the decoded picture buffer 170 may store the filtered picture for use as a reference picture in the inter prediction unit 180.
- the inter prediction unit 180 performs temporal prediction and / or spatial prediction to remove temporal redundancy and / or spatial redundancy with reference to a reconstructed picture.
- Since the reference picture used to perform the prediction is a transformed signal that was quantized and dequantized in block units during previous encoding/decoding, blocking artifacts or ringing artifacts may exist.
- the inter prediction unit 180 may interpolate the signals between pixels in sub-pixel units by applying a lowpass filter in order to solve performance degradation due to discontinuity or quantization of such signals.
- the subpixel refers to a virtual pixel generated by applying an interpolation filter
- the integer pixel refers to an actual pixel existing in the reconstructed picture.
- As the interpolation method, linear interpolation, bilinear interpolation, a Wiener filter, or the like may be applied.
- the interpolation filter may be applied to a reconstructed picture to improve the precision of prediction.
- the inter prediction unit 180 may generate interpolated pixels by applying an interpolation filter to integer pixels, and perform prediction using an interpolated block composed of the interpolated pixels as a prediction block.
- the intra predictor 185 may predict the current block by referring to samples around the block to which current encoding is to be performed.
- the intra prediction unit 185 may perform the following process to perform intra prediction. First, reference samples necessary for generating a prediction signal may be prepared. The prediction signal may be generated using the prepared reference sample. Then, the prediction mode is encoded. In this case, the reference sample may be prepared through reference sample padding and / or reference sample filtering. Since the reference sample has been predicted and reconstructed, there may be a quantization error. Accordingly, the reference sample filtering process may be performed for each prediction mode used for intra prediction to reduce such an error.
- a prediction signal generated through the inter predictor 180 or the intra predictor 185 may be used to generate a reconstruction signal or to generate a residual signal.
- FIG. 2 is a schematic block diagram of a decoder in which decoding of a video signal is performed as an embodiment to which the present invention is applied.
- the decoder 200 may include an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit 230, a filtering unit 240, a decoded picture buffer (DPB) unit 250, an inter predictor 260, and an intra predictor 265.
- the reconstructed video signal output through the decoder 200 may be reproduced through the reproducing apparatus.
- the decoder 200 may receive a signal output from the encoder 100 of FIG. 1, and the received signal may be entropy decoded through the entropy decoding unit 210.
- the entropy decoding unit 210 parses and decodes the received signal and outputs syntax elements such as prediction modes, image parameters, and residual data.
- the entropy decoding unit 210 may include Context-based Adaptive Binary Arithmetic Coding (CABAC). CABAC is a context-adaptive binary arithmetic decoding method that calculates the occurrence probability of syntax element information and its decoding parameters based on the context, and sequentially decodes the input signal to output the value of each syntax element.
- CABAC Context-based Adaptive Binary Arithmetic Coding
- the decoder 200 may generate a prediction signal based on the prediction vector information. This may be performed by the inter prediction unit 260, but the present invention is not limited thereto.
- the inverse quantization unit 220 obtains a transform coefficient from the entropy decoded signal using the quantization step size information.
- the various embodiments described for the transform unit 120 of FIG. 1 may be applied to the obtained transform coefficients.
- the inverse transform unit 230 inversely transforms the transform coefficient to obtain a residual signal.
- a reconstructed signal is generated by adding the obtained residual signal to a prediction signal output from the inter predictor 260 or the intra predictor 265.
- the filtering unit 240 applies filtering to the reconstructed signal, and outputs it to a playback device or transmits it to the decoded picture buffer unit 250.
- the filtered signal transmitted to the decoded picture buffer unit 250 may be used as the reference picture in the inter predictor 260.
- the embodiments described for the filtering unit 160, the inter prediction unit 180, and the intra prediction unit 185 of the encoder 100 may equally be applied to the filtering unit 240, the inter prediction unit 260, and the intra prediction unit 265 of the decoder, respectively.
- FIGS. 3 and 4 are schematic block diagrams of an entropy encoding unit and an entropy decoding unit that process video signals according to embodiments of the present invention.
- the entropy encoding unit 190 includes a binarization unit 310, a context modeling unit 320, and a binary arithmetic encoding unit 330.
- the binarization unit 310 may output a binary symbol string composed of a binary value of 0 or 1 by receiving a sequence of data symbols and performing binarization.
- the binarization unit 310 may map syntax elements into binary symbols.
- Several different binarization processes, such as unary (U), truncated unary (TU), k-th order Exp-Golomb (EGk), and fixed-length binarization, can be used. The binarization process may be selected based on the type of syntax element.
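The binarization schemes named above can be illustrated with a simplified sketch; real CABAC binarization tables differ in detail, and only the 0-th order Exp-Golomb case is shown:

```python
def unary(v):
    """Unary (U): v ones followed by a terminating zero."""
    return "1" * v + "0"

def truncated_unary(v, cmax):
    """Truncated unary (TU): the terminating zero is dropped at v == cmax."""
    return "1" * v if v == cmax else "1" * v + "0"

def exp_golomb0(v):
    """0-th order Exp-Golomb (EG0): leading zeros, then v + 1 in binary."""
    m = bin(v + 1)[2:]
    return "0" * (len(m) - 1) + m
```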
- the output binary symbol string is transmitted to the context modeling unit 320.
- the context modeling unit 320 performs probability estimation on entropy-encoding. That is, the context modeling unit 320 may evaluate the probability of the bins.
- the context modeler 320 may provide accurate probability estimation necessary to achieve high coding efficiency. Accordingly, different context models may be used for different binary symbols and the probability of such context model may be updated based on the values of previously coded binary symbols.
- Binary symbols with similar distributions can share the same context model.
- the context model for each binary symbol may be selected based on at least one of the type of syntax element, the position of the binary symbol within the syntax element (binIdx), luma / chroma, and adjacent information.
- the binary arithmetic encoding unit 330 performs entropy encoding on the output string and outputs compressed data bits.
- the binary arithmetic encoding unit 330 performs arithmetic coding on the basis of recursive interval division.
- An interval (or range) having an initial value of 0 to 1 is divided into two lower intervals based on the probability of a binary symbol.
- the encoded bits, when interpreted as a binary fraction, provide an offset that selects the one of the two subintervals representing the value of the decoded binary symbol.
- the interval is then updated to equal the selected subinterval, and the interval division process is repeated.
- the intervals and offsets have limited bit precision, so renormalization may be needed to prevent overflow each time the interval falls below a certain value. The renormalization may occur after each binary symbol is decoded.
- Arithmetic coding can be performed using the estimated probability, or assuming an equal probability of 0.5. For bypass-coded binary symbols, dividing the range into subintervals can be performed with a shift operation, whereas a lookup table may be needed for context-coded binary symbols.
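The recursive interval division described above can be sketched with an infinite-precision float model. This omits the renormalization and finite-precision handling a real coder needs, and assumes a fixed probability p0 for simplicity:

```python
def interval_divide(bins, p0):
    """Recursive interval division on [0, 1): each binary symbol narrows
    the interval to the subinterval corresponding to its value, where p0
    is the probability of symbol 0."""
    low, high = 0.0, 1.0
    for b in bins:
        split = low + (high - low) * p0  # boundary between the subintervals
        if b == 0:
            high = split                  # keep the lower subinterval
        else:
            low = split                   # keep the upper subinterval
    return low, high                      # any offset inside identifies the bins
```

The final interval width is the product of the symbol probabilities, so roughly -log2(width) bits suffice to pick an offset inside it.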
- the entropy decoding unit 210 includes a binary arithmetic decoding unit 410, a context modeling unit 420, and an inverse binarization unit 430.
- the binary arithmetic decoding unit 410 decodes the bitstream based on the binary occurrence probability information and outputs a binary bin.
- the context modeling unit 420 selects probability information necessary to decode the current bitstream from the context memory and provides it to the binary arithmetic decoding unit 410.
- the context memory includes a context model defined in bin units of syntax elements.
- the context model represents occurrence probability information of binary MPS and LPS according to the context.
- the context modeling unit 420 may select the context memory through the syntax element information to be decoded, and then select and output probability information necessary for decoding the current syntax element through bin index information which is a decoding order of bins therein. have.
- the inverse binarization unit 430 receives a binary coded bin decoded by the binary arithmetic decoding unit 410, collects the bins, and converts the bins into integer syntax elements.
- FIG. 5 is a diagram for describing a sequential operation of a finite state machine (FSM) according to an embodiment to which the present invention is applied.
- FSM finite state machine
- a finite state machine refers to a device that sequentially shows changes in restricted states.
- For example, the models known as Mealy machines or Mealy circuits in circuit design may be applied.
- the FSM may generally be defined as a 6-tuple (S, S0, Σ, Γ, T, P), where a tuple means a finite ordered list of elements.
- S denotes a finite set of states
- S0 ∈ S denotes the initial state
- Σ denotes the finite input alphabet
- Γ denotes the finite output alphabet
- T denotes the state transition function, T: S × Σ → S
- P denotes the output function, P: S × Σ → Γ
- the FSM may be one of the basic elements used for concurrent computation. Since an FSM represents an inherently sequential process, the computations it defines are difficult to parallelize, and speculative execution is required to work around this.
- functional units representing a data model may be changed to FSM functional units and applied to an entropy coding system for segment entropy coding.
- FIG. 6 is a schematic block diagram of an adaptive coding system for independent and identically distributed source symbols (hereinafter, referred to as 'i.i.d.') as an embodiment to which the present invention is applied.
- Entropy coding is the process of representing information without loss in a way that minimizes the average number of bits used, and is a fundamental part of any form of compression. This can be done by various methods, but basically its computational complexity tends to increase as compression efficiency reaches theoretical limits.
- the present invention considers a sequence of random data symbols {d1, d2, d3, ...} drawn from the source alphabet dk ∈ {0, 1, ..., M-1}. Assuming these symbols are i.i.d., they are coded into a sequence of binary symbols (bits) {b1, b2, b3, ...}.
- This model is useful as an optimal way to code i.i.d. sources. As discussed later, real data sources are much more complex, but the present invention can use this model as a fundamental element of more complex coding systems.
- the optimal code is defined by the set of symbol probabilities {p0, p1, ..., p(M-1)}.
- the average number of bits used for coding symbol dk is given by the following equation (Equation 1): B(dk) = -log2(p(dk)). In general, this number of bits is not an integer.
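As a quick numeric check of the optimal average bit count (the source entropy), assuming the standard -p·log2(p) formula per symbol:

```python
import math

def avg_code_length(probs):
    """Optimal average bits per symbol: sum over k of -p_k * log2(p_k)."""
    return sum(-p * math.log2(p) for p in probs if p > 0)
```

For probabilities {0.5, 0.25, 0.25} the optimal average is 1.5 bits per symbol, even though each individual codeword length is an integer only in this special dyadic case.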
- prefix codes (Huffman, Golomb-Rice, Elias-Teuhola, etc.) are gradually being replaced with arithmetic codes, which are computationally more complex but can achieve the number of bits of Equation 1 almost exactly on average.
- an adaptive coding system to which the present invention is applied may include an encoder 610 and a decoder 620.
- the encoder 610 may include a data modeling unit 611 and an entropy encoding unit 613
- the decoder 620 may include an entropy decoding unit 621 and a data modeling unit 623.
- each functional unit of the adaptive coding system may be an FSM for parallelization of entropy coding.
- the data modeling unit 611 may manage the data model by estimating the symbol probability and dynamically updating the code by receiving the source symbols d 1 , d 2 ,...
- the entropy encoding unit 613 performs entropy encoding based on the data and code information input from the data modeling unit 611 and calculates the compressed data bit arrays b 1 , b 2 ,...
- the entropy decoding unit 621 receives the compressed data bit arrays b 1 , b 2 ,... And performs entropy decoding, and transmits the entropy decoded data to the data modeling unit 623.
- the data modeling unit 623 outputs a source symbol based on the received data, generates a code, and transmits the code to the entropy decoding unit 621.
- the coding process may be divided into code selection and actual coding even when codes are fixed.
- a state machine for data modeling is basically designed to count the occurrence of each symbol to estimate its probability. There may be transformations based on the source alphabet size (M), or larger weights may be needed for more recent events, but the state space must always be large enough for efficient compression.
- M source alphabet size
- the entropy coding FSM has a state that tracks its position in the compressed data bit array.
- Arithmetic coders contain FSMs with a larger state space because, in a sense, the state is used to represent a fractional number of pending bits. For example, to code two symbols using 1.732 and 0.647 bits, the coder can emit 1 bit after the first symbol and change its state to represent 0.732 pending bits. Then, after the second symbol, it can emit another bit and update its internal state to represent the 0.379 remaining pending bits.
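The pending-bits bookkeeping in this example can be sketched as follows (illustrative only; a real arithmetic coder tracks this implicitly through its interval state):

```python
def pending_bits(costs):
    """Emit whole bits as they accumulate; keep the fractional remainder
    as the coder's internal 'pending bits' state."""
    emitted, pending = 0, 0.0
    for c in costs:            # per-symbol cost in fractional bits
        pending += c
        whole = int(pending)   # whole bits can be written out
        emitted += whole
        pending -= whole
    return emitted, pending
```

Running it on the costs from the text (1.732 then 0.647 bits) reproduces the states described: 0.732 pending after the first symbol, then 0.379 after the second.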
- entropy coding may multiplex and demultiplex all separated data elements in addition to efficiently representing data.
- the signal is transformed and decomposed into many sub-elements as it is processed and has its own coding requirements, data alphabets, and statistical properties.
- the improved encoding method can change the decomposition and transformation forms during the coding process and also adjust the parameters according to the signal characteristics in each segment.
- FIG. 7 is a schematic block diagram of a coding system that performs compression by applying a different data model to each data type according to an embodiment to which the present invention is applied.
- the encoder 710 of FIG. 7 includes a sequence compressor 720 and an entropy encoding unit 716.
- the sequence compressor 720 includes a first context model selector 711, a data modeling unit 713, and a second context model selector 715.
- the decoder 750 includes a sequence decompressor 760 and an entropy decoding unit 751, and the sequence decompressor 760 may include a third context model selector 752, a data modeling unit 754, and a fourth context model selector 756.
- the sequence compressor 720 organizes the interleaved data symbols into m different sequences {st1, st2, st3, ...} according to their type t ∈ {1, 2, ..., m}. The data samples of each sequence are assumed to be i.i.d. with that type's particular probability distribution. Each element of these sequences enters the entropy encoding unit 716 in the original order, but the data type may vary randomly.
- the entropy encoding unit 716 may perform entropy encoding on input elements and output a compressed data bit array.
- the first context model selector 711 may select a context model corresponding to the input interleaved symbols based on the coding context.
- the data modeling unit 713 may include a data modeling unit corresponding to each of the m sequences, and each data modeling unit may perform data modeling corresponding to each sequence based on the selected context model.
- the second context model selector 715 may select a suitable context model for data corresponding to each sequence based on the coding context.
- the entropy decoding unit 751 may receive the compressed data bit array to perform entropy decoding.
- the sequence decompressor 760 may decompress the entropy decoded data bit array.
- Since the third context model selector 752, the data modeling unit 754, and the fourth context model selector 756 in the sequence decompressor 760 perform the reverse of the process performed by the sequence compressor 720 of the encoder 710, a detailed description is omitted.
- Both encoders and decoders share algorithms that define in which sequence different types of data are encoded, according to the compression standard syntax.
- The algorithm may be modeled as a finite state machine (FSM). Since the order depends on the data being encoded, strict causality is required; for example, the encoder may only select execution paths based on data that is also available at the decoder when that step is reached.
- the present invention assumes that there is a 1: 1 relationship between the data model and the coding context.
- the coding context may refer to the context of how data is generated and how it is coded. This is important because it means that data models are completely self-contained and their coding characteristics are defined only by their FSM.
- the coding characteristic should not be confused with compression efficiency, which may depend on which data model is used.
- different models may be used for the same type of coded data if different models are created under different contexts. This is because the symbols may have different probability sets. Under this assumption, the context model selectors shown in FIG. 7 represent context switching.
- FIGS. 8 and 9 illustrate schematic block diagrams of a coding system for describing a data flow for binary arithmetic coding, as an embodiment to which the present invention is applied.
- FIG. 8 shows a diagram with the most important entropy coding elements.
- The encoder 800 may include a data source block 810, a first context model selector 820, a BIN 830, a sequence compressor 840, a second context model selector 850, and a BAC (binary arithmetic coding) encoding unit 860.
- the contents of FIG. 7 may be applied to the sequence compressor 840.
- The decoder 900 may include a BAC (binary arithmetic coding) decoding unit 910, a third context model selector 920, a sequence decompressor 930, a REC 940, a fourth context model selector 950, and a data sink block 960.
- the data source block 810 may correspond to signal decomposition and signal conversion functional units. That is, the data source block 810 is a part for generating control and signal data entering an entropy encoder.
- the first context model selector 820 may control the multiplexing operation by selecting different entropy coding data models based on the coding context.
- The data extracted from non-binary alphabets is binarized through the BIN 830 to form m different sequences {s_t1, s_t2, s_t3, ...}.
- the second context model selector 850 may select an appropriate context model for data corresponding to each sequence based on the coding context.
- the binary arithmetic coding (BAC) encoding unit 860 may output a compressed data bit array by performing binary arithmetic coding based on the selected context model.
- the binary arithmetic coding (BAC) decoding unit 910 in the decoder 900 may receive the compressed data bit array and perform binary arithmetic decoding.
- Since the third context model selector 920, the sequence decompressor 930, and the fourth context model selector 950 perform the inverse of the processes performed by the first context model selector 820, the sequence compressor 840, and the second context model selector 850 of the encoder 800, a detailed description thereof will be omitted.
- The REC 940 recovers the data that was binarized through the BIN 830 of the encoder.
- the data sink block 960 may correspond to inverse transform and signal recovery functional units.
- the data sink block 960 shares with the encoder the rules and data needed to demultiplex and select the appropriate data model.
- the data sink block 960 may provide the entropy decoder with information necessary to extract data from the compressed data bit array.
- Data extracted from non-binary alphabets must go through a process called binarization. In FIGS. 8 and 9, this is performed by the BIN blocks BIN 1, BIN 2, ..., BIN n and reversed by the REC blocks.
- Each of the BIN blocks may be different, because the BIN blocks BIN 1, BIN 2, ..., BIN n are used for alphabets of different sizes and may use different binarization strategies.
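To illustrate why the BIN blocks may differ, two generic binarization strategies can be sketched (a hedged sketch: these are textbook unary and fixed-length codes, not necessarily the specific codes the invention or any standard uses):

```python
def unary_binarize(v):
    """Unary code: v ones followed by a terminating zero (small alphabets)."""
    return [1] * v + [0]

def fixed_length_binarize(v, bits):
    """Fixed-length code for alphabets of size 2**bits (larger alphabets)."""
    return [(v >> i) & 1 for i in reversed(range(bits))]

def unary_recover(bins):
    """A REC block inverts the chosen binarization; here, for the unary code."""
    return bins.index(0)
```

A value like 3 thus becomes four bins under the unary code but only two under a 2-bit fixed-length code, which is why different alphabets warrant different BIN blocks.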
- For each binary symbol (bin), the same process as shown in FIG. 6 may be used.
- each binary symbol may be sent to a data model for probability estimation and code generation.
- FIGS. 8 and 9 show logical relationships rather than the temporal data flow. For example, each data symbol produces a variable number of binary symbols.
- Blocks that appear to be parallel data paths may in fact be multiplexed and always executed serially.
- Entropy coding can thus be seen to be defined by four layers of inter-related finite state machines: the context model selectors, the BIN/REC units, the data modeling units, and the BAC encoding/decoding units.
- the context model selectors may perform high level multiplexing of data types and coding contexts.
- the BIN 830 and the REC 940 may each perform binarization and recovery processes applied to all non-binary symbols.
- the data modeling unit may perform data modeling for estimating the probability of binary symbols.
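The probability estimation performed by a data modeling unit can be sketched with a simple count-based adaptive model (an illustrative assumption: practical CABAC-style coders use table-driven or shift-based estimators, not raw counts, and the class name here is hypothetical):

```python
class BinaryModel:
    """Count-based estimate of P(bin = 1), updated after each coded symbol."""

    def __init__(self):
        self.counts = [1, 1]  # Laplace smoothing: start with one of each

    def p_one(self):
        """Current probability estimate for the symbol 1."""
        return self.counts[1] / sum(self.counts)

    def update(self, bin_value):
        """Adapt the model after observing a bin (0 or 1)."""
        self.counts[bin_value] += 1

model = BinaryModel()
for b in [1, 1, 0, 1]:
    model.update(b)
# counts are now [2, 4], so the estimate of P(1) is 4/6
```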
- the BAC encoding unit and the BAC decoding unit may perform entropy coding for generation or interpretation of compressed data bits.
- the present invention may also consider the following facts.
- The current data throughput is limited not by the high complexity of coding each binary symbol, but by the total number of clock cycles needed to classify or divide the information into its smallest elements, which are then multiplexed and coded.
- FIGS. 10 and 11 are exemplary embodiments to which the present invention is applied and show schematic block diagrams of a multiplexed coding system performing the same compression as in FIGS. 8 and 9.
- One way to perform parallelization to which the present invention is applied is to remove data dependencies when designing a new compression algorithm. However, this can adversely affect compression, so the present invention proposes a method for finding and removing only the non-essential dependencies.
- Associativity and commutativity are used to change the order of operations and create new forms of parallel execution.
- In entropy coding, a feature that can be exploited is the fact that optimal compression is achieved as long as each data symbol is represented using the optimal number of bits; except for some semantic constraints, it does not matter exactly where those bits are placed.
- The encoder 1000 of FIG. 10 may largely include a context model selector 1010, a MOD 1020, an ENC 1030, and a BUF 1040, and the decoder 1100 of FIG. 11 may include a DEC 1110, a MOD 1120, a BUF 1130, and a context model selector 1140.
- the context model selector 1010 may select a context model corresponding to the input interleaved symbols based on the coding context.
- data modeling may be performed through the MOD 1020, sequence compression may be performed through the ENC 1030, and then stored in the BUF 1040.
- The compressed data produced through this process may be divided into separate blocks DAT 1, DAT 2, ..., DAT m and stored.
- The encoder 1000 of FIG. 10 completely removes the dependency between the updates of the data models and the sequence in which data is added to a single compressed data array.
- The present invention allows compressed data to be stored in divided data segments, so this approach, called segmented entropy coding, provides much more flexibility in defining the order of execution and allows larger-scale parallelism.
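The segmented storage idea can be sketched as follows (a minimal sketch: `bytes()` is a trivial stand-in for the real per-segment entropy coder, and the function name is hypothetical); each segment's symbols are compressed into their own byte array DAT 1, ..., DAT m rather than one interleaved stream:

```python
def encode_segments(per_segment_symbols):
    """Produce one independent compressed array per segment.

    Stand-in codec: each symbol list becomes its own bytes object,
    so segments can be written (and later decoded) independently.
    """
    return [bytes(symbols) for symbols in per_segment_symbols]

# Three segments' worth of buffered symbols -> DAT 1, DAT 2, DAT 3
segments = encode_segments([[10, 20], [30], [40, 50, 60]])
```

Because no segment's output depends on another segment's coder state, the per-segment calls can later be reordered or run in parallel.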
- The decoder 1100 of FIG. 11 reads the compressed data from the separate blocks DAT 1, DAT 2, ..., DAT m.
- All entropy decoding may be performed using a parallel processor through the DEC 1110, and all data symbols may be stored in the BUF 1130 via the MOD 1120.
- Since the context model selector 1140 performs the inverse of the process performed by the context model selector 1010 of the encoder 1000, a detailed description thereof will be omitted.
- When the decoder 1100 later performs full media decompression and accesses symbols in the appropriate sequence, the symbols no longer need to be entropy decoded from their binary representation but can be read efficiently from the buffer.
- the invention shows that the encoder can utilize parallelism by buffering data symbols.
- The same kind of trade-off applies to this approach: there is no compression loss, but at the expense of using more memory a way to parallelize entropy coding can be found.
- The present invention therefore seeks to design a system that takes cache behavior into account.
- FIGS. 12 and 13 are schematic block diagrams of an entropy coding system for performing parallel processing according to embodiments of the present invention.
- Adaptive parallel processing requires an FSM design with inputs that determine the parallel state and changes in the parallel state.
- The input that determines the parallel-processing state can be defined as the syntax element to be decoded while generating a branch: a syntax element consisting of one bin is set to 1, and a syntax element that does not generate a branch is set to 0.
- the output of the FSM may mean the number of syntax elements to be decoded. For example, it may mean the number of context models to be parallelized.
- the encoding / decoding sequences of FIGS. 12 and 13 to which the present invention is applied may be rearranged to generate segmented compressed data arrays.
- the encoder 1200 of FIG. 12 may largely include a context model selector 1210, a serial processor 1220, and a parallel processor 1230.
- The serial processor 1220 may include a context model selector 1221, an FSM 1222, a context model selector 1223, an ENC 1224, and a BUF 1225; the parallel processor 1230 may include corresponding parallel blocks.
- the decoder 1300 of FIG. 13 may include a serial processor 1310, a parallel processor 1320, and a context model selector 1330.
- The serial processor 1310 may include a DEC 1311, a context model selector 1312, an FSM 1313, and a context model selector 1314, and the parallel processor 1320 may include a DEC 1321, an FSM 1322, a BUF 1323, and a context model selector 1324.
- FIGS. 12 and 13 show more general systems for segmented entropy coding. Although similar to the basic system of FIGS. 10 and 11, they include several elements that are more important in practical applications. Thus, the description of FIGS. 10 and 11 applies to the functional units illustrated in FIGS. 12 and 13, and only the differences are described below.
- One important change in FIGS. 12 and 13 is that the blocks representing the data models (MOD) have been changed to FSM blocks 1313 and 1322, because the technique works well with any type of self-contained FSM.
- the blocks may represent the corresponding binary data model as well as the binarization process.
- The present invention includes a base segment (serial processors 1220 and 1310) whose data is processed by serial execution in the usual way.
- Since the decoding elements must use data from other segments, the decoder must know where the data segments start. For example, this can be done using a data format with a header that includes a pointer to the beginning of each data segment, as in FIG. 14.
- FIG. 14 illustrates a structure of a compressed data array divided into independent segments according to an embodiment to which the present invention is applied.
- the header of the segment data to which the present invention is applied may include a pointer (index) assigned to each segment.
- the compressed data byte array may include an index INDEX at the beginning and segment data after the index byte.
- The segment data includes DAT 1, DAT 2, ..., DAT P.
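The header-with-pointers layout of FIG. 14 can be sketched as follows (assumptions: 4-byte big-endian offsets and the function names are illustrative choices; the patent does not fix a field width or byte order):

```python
import struct

def pack_with_index(segments):
    """Prepend an INDEX of 4-byte offsets to the start of each segment."""
    header_size = 4 * len(segments)
    offsets, pos = [], header_size
    for seg in segments:
        offsets.append(pos)          # pointer to where this segment begins
        pos += len(seg)
    index = b''.join(struct.pack('>I', o) for o in offsets)
    return index + b''.join(segments)

def segment_start(packed, i):
    """Read the i-th pointer from the INDEX, as a decoder would."""
    return struct.unpack('>I', packed[4 * i:4 * i + 4])[0]

packed = pack_with_index([b'\x01\x02', b'\x03'])
```

With the pointers in hand, a decoder can hand each segment to a different thread without first parsing the preceding ones.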
- FIGS. 15 and 16 are schematic block diagrams of coding systems for performing parallel processing using data queues, as embodiments to which the present invention is applied.
- the encoder 1500 of FIG. 15 may largely include a context model selector 1510, a serial processor 1520, and a parallel processor 1530.
- The serial processor 1520 may include a context model selector 1521, an FSM 1522, a context model selector 1523, an ENC 1524, and a BUF 1525; the parallel processor 1530 may include corresponding parallel blocks.
- the decoder 1600 of FIG. 16 may include a serial processor 1610, a parallel processor 1620, and a context model selector 1630.
- The serial processor 1610 may include a DEC 1611, a context model selector 1612, an FSM 1613, and a context model selector 1614, and the parallel processor 1620 may include a DEC 1621, an FSM 1622, a QUE 1623, and a context model selector 1624.
- For the functional units shown in FIGS. 15 and 16, the description of FIGS. 12 and 13 applies, and only the differences are described below.
- One important change in FIGS. 15 and 16 is that the BUF blocks 1232 and 1323 have been changed to QUE blocks 1632 and 1623.
- Thread-based parallelization can be used on a general-purpose processor that includes several cores.
- The important point here is that thread creation is relatively expensive, so it is assumed that there may be fewer threads than data segments. This can be handled through the use of thread pools.
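The thread-pool idea (fewer worker threads than segments, with threads reused across segments) can be sketched with Python's standard pool; the stand-in `encode_segment` codec and the data are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def encode_segment(symbols):
    return bytes(symbols)  # stand-in for per-segment entropy coding

# Five buffered segments, but only a two-thread pool:
buffers = [[1, 2], [3], [4, 5], [6], [7, 8, 9]]
with ThreadPoolExecutor(max_workers=2) as pool:
    # The pool reuses its 2 threads across all 5 segments,
    # avoiding the cost of creating one thread per segment.
    encoded = list(pool.map(encode_segment, buffers))
```

`pool.map` also preserves segment order in its results, which matters when the outputs are later concatenated into one data array.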
- Encoding by the encoder may be performed as follows.
- As the encoder 1500 processes the media and generates data symbols to be encoded, those in the base segment are encoded immediately, while those in the other segments are only copied into buffers.
- The encoder can then use multiple threads to encode the buffered data in parallel.
- a load-balancing algorithm can be used to allocate segments to other threads.
- the encoder can generate a compressed data header and concatenate all segments to produce one data array.
- Thread-level parallelization of decoding is performed similarly, but in a different order.
- The decoder reads the compressed data header and begins to decode all segments except the base segment using a number of threads. The decoded symbols are then stored in buffers.
- the present invention does not require the same high cost to start or synchronize threads.
- A data-flow approach can be adopted, in which the encoder can start non-blocking processing of data as soon as it is assigned to a data segment, or add it to a short queue for later processing. If the queue is not full, the remaining operations for data generation can continue in parallel.
- The hardware simply begins to decode the data segments and, instead of decoding an entire segment, writes the decoded symbols to the queues whenever they are not full. If a queue is not empty, execution of the other operations continues.
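The short-queue coupling between a segment decoder and its consumer can be sketched with a bounded FIFO (a software analogy for the hardware queues above; names and sizes are illustrative): the producer blocks only when the queue is full, and the consumer drains symbols as they arrive.

```python
import queue
import threading

q = queue.Queue(maxsize=4)   # short queue between decoder and consumer
decoded = []

def consumer():
    """Drain symbols from the queue until the end-of-stream sentinel."""
    while True:
        item = q.get()
        if item is None:
            break
        decoded.append(item)

t = threading.Thread(target=consumer)
t.start()
for symbol in range(6):
    q.put(symbol)            # blocks only while the queue is full
q.put(None)                  # sentinel: no more symbols
t.join()
```

Because the queue holds several elements, producer and consumer rarely stall on each other, which mirrors the claim that execution almost never becomes serial.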
- The main algorithm (the data sources and sinks of FIGS. 12 and 13) runs in parallel with entropy coding; the queues can be expected to hold several elements most of the time, so execution will almost never become serial because of a full queue at the encoder or an empty queue at the decoder.
- FIG. 17 is a flowchart illustrating a method of encoding a video signal using parallel processing, as an embodiment to which the present invention is applied.
- a data symbol generator (not shown) in an encoder may generate data symbols to be encoded (S1710).
- the encoder may encode the first data symbol in the base segment (S1720).
- step S1720 may be performed by the serial processor.
- the encoder may copy a second data symbol in another segment into a buffer (S1730).
- a load-balancing algorithm can be used to assign segments to different threads.
- the non-blocking process may begin when assigned to a data segment.
- the encoder may encode second data symbols in the other segment in parallel (S1740).
- steps S1730 and S1740 may be performed by the parallel processor.
- the data bit array generator (not shown) may generate a compressed data header when all data is compressed.
- the compressed data header may include an index of each segment.
- the data bit array generator may connect all segments to generate one data array.
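The encoder flow of FIG. 17 (S1710 to S1740, plus header generation and concatenation) can be sketched end to end. This is a hedged sketch: `enc` is a stand-in for entropy coding, and the header here stores a segment count and per-segment lengths, an illustrative variant from which the per-segment start indices of the patent's header follow directly.

```python
from concurrent.futures import ThreadPoolExecutor
import struct

def enc(symbols):
    return bytes(symbols)  # stand-in entropy coder

def encode(base_symbols, other_segments):
    base = enc(base_symbols)                    # S1720: base segment, serial
    with ThreadPoolExecutor() as pool:          # S1730/S1740: buffered symbols
        others = list(pool.map(enc, other_segments))   # encoded in parallel
    segments = [base] + others
    # Compressed data header: segment count, then one length per segment
    header = struct.pack('>I', len(segments)) + b''.join(
        struct.pack('>I', len(s)) for s in segments)
    return header + b''.join(segments)          # concatenate into one array

arr = encode([1, 2], [[3], [4, 5]])
```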
- FIG. 18 is a flowchart illustrating a method of decoding a video signal based on parallel processing as an embodiment to which the present invention is applied.
- the decoder may read each segment in the compressed data header (S1810). Then, all segments except the base segment are decoded using a plurality of threads (S1820), and data symbols may be stored in a buffer (S1830). The steps may be performed by a parallel processor.
- the decoder may decode the video signal based on the data symbols (S1840).
- When the data symbols are present in the base segment, the data symbols in the base segment may be decoded.
- When the data symbols are not present in the base segment, they may be read from the corresponding buffer.
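The decoder flow of FIG. 18 (S1810 to S1840) can be sketched under matching assumptions (a toy format whose header holds a segment count and per-segment lengths; `dec` is a stand-in for entropy decoding, and all names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import struct

def dec(data):
    return list(data)  # stand-in entropy decoder

def decode(arr):
    n = struct.unpack('>I', arr[:4])[0]          # S1810: read the header
    lengths = [struct.unpack('>I', arr[4 + 4 * i:8 + 4 * i])[0]
               for i in range(n)]
    pos, segs = 4 + 4 * n, []
    for ln in lengths:                            # slice out each segment
        segs.append(arr[pos:pos + ln])
        pos += ln
    with ThreadPoolExecutor() as pool:            # S1820: all but the base
        buffers = list(pool.map(dec, segs[1:]))   # S1830: symbols into buffers
    base_symbols = dec(segs[0])                   # base segment: serial path
    return base_symbols, buffers

base, bufs = decode(struct.pack('>I', 2) + struct.pack('>I', 2) +
                    struct.pack('>I', 1) + bytes([9, 8, 7]))
```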
- the embodiments described herein may be implemented and performed on a processor, microprocessor, controller, or chip.
- the functional units illustrated in FIGS. 1 to 4, 6 to 13, and 15 to 16 may be implemented and performed on a computer, a processor, a microprocessor, a controller, or a chip.
- The decoder and encoder to which the present invention is applied can be used in multimedia broadcasting transmitting and receiving devices, mobile communication terminals, home cinema video devices, digital cinema video devices, surveillance cameras, video chat devices, real-time communication devices such as video communication, mobile streaming devices, storage media, camcorders, video on demand (VoD) service providing devices, internet streaming service providing devices, three-dimensional (3D) video devices, video telephony devices, and medical video devices.
- the processing method to which the present invention is applied can be produced in the form of a program executed by a computer, and can be stored in a computer-readable recording medium.
- Multimedia data having a data structure according to the present invention can also be stored in a computer-readable recording medium.
- the computer readable recording medium includes all kinds of storage devices for storing computer readable data.
- The computer-readable recording medium may include, for example, a Blu-ray disc (BD), a universal serial bus (USB) storage device, a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device.
- the computer-readable recording medium also includes media embodied in the form of a carrier wave (eg, transmission over the Internet).
- the bit stream generated by the encoding method may be stored in a computer-readable recording medium or transmitted through a wired or wireless communication network.
Claims (16)
- A method of encoding a video signal using parallel processing, the method comprising: generating data symbols to be encoded; encoding a first data symbol in a base segment; copying a second data symbol in another segment into a buffer; and encoding the second data symbol in the other segment in parallel.
- The method of claim 1, wherein a load-balancing algorithm is used to allocate segments to different threads.
- The method of claim 1, further comprising generating a compressed data header when all data has been compressed, wherein the compressed data header includes an index of each segment.
- The method of claim 3, further comprising concatenating all segments to generate one data array.
- The method of claim 1, wherein a non-blocking process starts when data is assigned to a data segment.
- A method of decoding a video signal based on parallel processing, the method comprising: reading each segment in a compressed data header; decoding all segments except a base segment using a plurality of threads; storing data symbols in a buffer; and decoding the video signal based on the data symbols.
- The method of claim 6, wherein, when the data symbols are present in the base segment, the data symbols in the base segment are decoded.
- The method of claim 6, wherein, when the data symbols are not present in the base segment, the data symbols are read from a corresponding buffer.
- An apparatus for encoding a video signal using parallel processing, the apparatus comprising: a data symbol generator configured to generate data symbols to be encoded; a serial processor configured to encode a first data symbol in a base segment; and a parallel processor configured to copy a second data symbol in another segment into a buffer and to encode the second data symbol in the other segment in parallel.
- The apparatus of claim 9, wherein a load-balancing algorithm is used to allocate segments to different threads.
- The apparatus of claim 9, further comprising a data bit array generator configured to generate a compressed data header when all data has been compressed, wherein the compressed data header includes an index of each segment.
- The apparatus of claim 11, wherein the data bit array generator concatenates all segments to generate one data array.
- The apparatus of claim 9, wherein a non-blocking process starts when data is assigned to a data segment.
- An apparatus for decoding a video signal based on parallel processing, the apparatus comprising: a parallel processor configured to read each segment in a compressed data header, decode all segments except a base segment using a plurality of threads, and store data symbols in a buffer; and a decoding unit configured to decode the video signal based on the data symbols.
- The apparatus of claim 14, wherein, when the data symbols are present in the base segment, the data symbols in the base segment are decoded.
- The apparatus of claim 14, wherein, when the data symbols are not present in the base segment, the data symbols are read from a corresponding buffer.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/526,847 US10455244B2 (en) | 2014-11-14 | 2015-11-16 | Method and device for entropy encoding or entropy decoding video signal for high-capacity parallel processing |
KR1020177013845A KR102123620B1 (ko) | 2014-11-14 | 2015-11-16 | 대용량 병렬 처리를 위해 비디오 신호를 엔트로피 인코딩 또는 엔트로피 디코딩하는 방법 및 장치 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462079564P | 2014-11-14 | 2014-11-14 | |
US62/079,564 | 2014-11-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016076677A1 true WO2016076677A1 (ko) | 2016-05-19 |
Family
ID=55954678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2015/012297 WO2016076677A1 (ko) | 2014-11-14 | 2015-11-16 | 대용량 병렬 처리를 위해 비디오 신호를 엔트로피 인코딩 또는 엔트로피 디코딩하는 방법 및 장치 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10455244B2 (ko) |
KR (1) | KR102123620B1 (ko) |
WO (1) | WO2016076677A1 (ko) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10097833B2 (en) * | 2014-12-26 | 2018-10-09 | Intel Corporation | Method and system of entropy coding using look-up table based probability updating for video coding |
JP7067095B2 (ja) * | 2018-02-07 | 2022-05-16 | 京セラドキュメントソリューションズ株式会社 | 画像形成装置及び画像形成プログラム |
US10587284B2 (en) * | 2018-04-09 | 2020-03-10 | International Business Machines Corporation | Multi-mode compression acceleration |
WO2023138687A1 (en) * | 2022-01-21 | 2023-07-27 | Beijing Bytedance Network Technology Co., Ltd. | Method, apparatus, and medium for data processing |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006525731A (ja) * | 2003-05-02 | 2006-11-09 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 新標準への移行をサポートする多階層符号化 |
JP2007214998A (ja) * | 2006-02-10 | 2007-08-23 | Fuji Xerox Co Ltd | 符号化装置、復号化装置、符号化方法、復号化方法、及びプログラム |
KR20110004770A (ko) * | 2009-07-08 | 2011-01-14 | 삼성에스디아이 주식회사 | 이차전지 및 그 이차전지의 제조방법 |
KR20130085389A (ko) * | 2012-01-19 | 2013-07-29 | 삼성전자주식회사 | 서브영역별로 엔트로피 부호화의 병렬 처리가 가능한 비디오 부호화 방법 및 장치, 서브영역별로 엔트로피 복호화의 병렬 처리가 가능한 비디오 복호화 방법 및 장치 |
KR20130086004A (ko) * | 2012-01-20 | 2013-07-30 | 삼성전자주식회사 | 병렬 처리가 가능한 엔트로피 부호화 방법 및 장치, 병렬 처리가 가능한 엔트로피 복호화 방법 및 장치 |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6141446A (en) * | 1994-09-21 | 2000-10-31 | Ricoh Company, Ltd. | Compression and decompression system with reversible wavelets and lossy reconstruction |
JP3807342B2 (ja) * | 2002-04-25 | 2006-08-09 | 三菱電機株式会社 | デジタル信号符号化装置、デジタル信号復号装置、デジタル信号算術符号化方法、およびデジタル信号算術復号方法 |
KR100746007B1 (ko) * | 2005-04-19 | 2007-08-06 | 삼성전자주식회사 | 엔트로피 코딩의 컨텍스트 모델을 적응적으로 선택하는방법 및 비디오 디코더 |
US7245241B2 (en) * | 2005-11-25 | 2007-07-17 | Microsoft Corporation | Image coding with scalable context quantization |
US8542748B2 (en) * | 2008-03-28 | 2013-09-24 | Sharp Laboratories Of America, Inc. | Methods and systems for parallel video encoding and decoding |
US8494059B1 (en) | 2008-07-29 | 2013-07-23 | Marvell International Ltd. | Buffer controller |
US7932843B2 (en) * | 2008-10-17 | 2011-04-26 | Texas Instruments Incorporated | Parallel CABAC decoding for video decompression |
KR101038531B1 (ko) * | 2009-06-25 | 2011-06-02 | 한양대학교 산학협력단 | 복호화시 병렬처리가 가능한 영상 부호화 장치 및 방법, 그리고 병렬처리가 가능한 영상 복호화 장치 및 방법 |
US8379718B2 (en) * | 2009-09-02 | 2013-02-19 | Sony Computer Entertainment Inc. | Parallel digital picture encoding |
KR101631944B1 (ko) | 2009-10-30 | 2016-06-20 | 삼성전자주식회사 | 복호화 가속화를 위한 엔트로피 부호화 방법과 그 장치 및 엔트로피 복호화 방법과 그 장치 |
KR20110066523A (ko) | 2009-12-11 | 2011-06-17 | 한국전자통신연구원 | 스케일러블 비디오 디코딩 장치 및 방법 |
KR20120004319A (ko) | 2010-07-06 | 2012-01-12 | 한국전자통신연구원 | 병렬적으로 엔트로피 부호화 및 복호화를 수행하는 장치 및 방법 |
US8761240B2 (en) * | 2010-07-13 | 2014-06-24 | Blackberry Limited | Methods and devices for data compression using context-based coding order |
US8344917B2 (en) * | 2010-09-30 | 2013-01-01 | Sharp Laboratories Of America, Inc. | Methods and systems for context initialization in video coding and decoding |
US9215473B2 (en) * | 2011-01-26 | 2015-12-15 | Qualcomm Incorporated | Sub-slices in video coding |
CN105871766B (zh) * | 2015-01-23 | 2021-02-23 | 北京三星通信技术研究有限公司 | 干扰删除方法、干扰删除辅助方法、以及干扰删除装置 |
- 2015-11-16 WO PCT/KR2015/012297 patent/WO2016076677A1/ko active Application Filing
- 2015-11-16 KR KR1020177013845A patent/KR102123620B1/ko active IP Right Grant
- 2015-11-16 US US15/526,847 patent/US10455244B2/en active Active
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018182183A1 (ko) * | 2017-03-31 | 2018-10-04 | 주식회사 칩스앤미디어 | 향상된 산술부호화를 제공하는 영상 처리 방법, 그를 이용한 영상 복호화, 부호화 방법 및 그 장치 |
US11109032B2 (en) | 2017-03-31 | 2021-08-31 | Electronics And Telecommunications Research Institute | Method for processing image providing improved arithmetic encoding, method for decoding and encoding image using same, and apparatus for same |
CN114363616A (zh) * | 2017-03-31 | 2022-04-15 | 明达半导体股份有限公司 | 图像解码/编码方法和存储介质 |
CN114422786A (zh) * | 2017-03-31 | 2022-04-29 | 明达半导体股份有限公司 | 图像解码/编码方法和存储介质 |
CN114422785A (zh) * | 2017-03-31 | 2022-04-29 | 明达半导体股份有限公司 | 图像解码/编码方法和存储介质 |
US11706418B2 (en) | 2017-03-31 | 2023-07-18 | Electronics And Telecommunications Research Institute | Method for processing image providing improved arithmetic encoding, method for decoding and encoding image using same, and apparatus for same |
CN113767641A (zh) * | 2019-09-28 | 2021-12-07 | 腾讯美国有限责任公司 | 分段数据流处理的先进先出功能 |
CN114928747A (zh) * | 2022-07-20 | 2022-08-19 | 阿里巴巴(中国)有限公司 | 基于av1熵编码的上下文概率处理电路、方法及相关装置 |
CN114928747B (zh) * | 2022-07-20 | 2022-12-16 | 阿里巴巴(中国)有限公司 | 基于av1熵编码的上下文概率处理电路、方法及相关装置 |
Also Published As
Publication number | Publication date |
---|---|
US10455244B2 (en) | 2019-10-22 |
KR102123620B1 (ko) | 2020-06-26 |
US20170359591A1 (en) | 2017-12-14 |
KR20170075759A (ko) | 2017-07-03 |
Legal Events

Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15858957; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
WWE | Wipo information: entry into national phase | Ref document number: 15526847; Country of ref document: US
ENP | Entry into the national phase | Ref document number: 20177013845; Country of ref document: KR; Kind code of ref document: A
122 | Ep: pct application non-entry in european phase | Ref document number: 15858957; Country of ref document: EP; Kind code of ref document: A1