CN102801974A - Image compression and entropy coder based on CABAC (Context-Based Adaptive Binary Arithmetic Coding)

Publication number: CN102801974A (granted as CN102801974B)
Authority: CN (China)
Application number: CN201210251107.4A
Applicant and current assignee: Xidian University
Inventors: 李甫, 樊春晓, 石光明, 张犁, 周蕾蕾, 董伟生, 齐飞, 赵光辉, 林杰
Original language: Chinese (zh)
Legal status: Granted; Expired - Fee Related

Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses an image compression entropy coder based on CABAC (context-based adaptive binary arithmetic coding), mainly intended to solve two problems of the prior art: low coding efficiency, and data overflow when the bitstream is packed rapidly. The coder comprises an image control unit (1), a symbol generation module (2), a symbol FIFO (first in, first out buffer) (3), a binary arithmetic coding module (4), a context memory (5), a bitstream FIFO (6) and a bitstream byte-packing module (7). The image control unit controls the reset of each module and the generation of enable signals; the symbol generation module reads data from outside and performs binarization and context modeling; the symbol FIFO stores the binarized symbols and their context models; the context memory stores the context information; the binary arithmetic coding module produces an intermediate bitstream from the binarized symbols; the bitstream FIFO stores the intermediate bitstream data; and the byte-packing module generates the final bitstream from the intermediate bitstream. The coder offers high throughput, small circuit scale and a short critical path, prevents pipeline stalls and avoids data overflow, and is used to realize high-throughput arithmetic entropy coding.

Description

Image compression entropy coder based on CABAC
Technical field
The invention belongs to the field of very-large-scale integration (VLSI) technology and relates to a circuit implementation of context-based adaptive binary arithmetic coding (CABAC), which can be applied to image compression encoding.
Background technology
With the rapid development of information technology, digital video technology is applied ever more widely. However, because the data volume of digital video is very large, its transmission and storage are difficult. Effective application therefore first requires solving the problem of video compression coding.
Entropy coding is the most important coding stage in hybrid video coding. It is mainly used to compress quantized transform coefficients, adaptive block transforms, motion vectors and other coded information, reducing the redundancy in the data. By constructing context models, performing adaptive probability modeling and applying binary arithmetic coding, context-based adaptive binary arithmetic coding (CABAC) greatly improves compression performance. However, because the context management and arithmetic coding in the CABAC algorithm require a large number of memory accesses and arithmetic-logic operations, its complexity makes an efficient implementation difficult. It is therefore necessary to design, from a VLSI standpoint, an efficient full-hardware implementation of a CABAC device.
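The principle behind CABAC can be sketched in a few lines of software. The sketch below is illustrative only, not the patent's circuit: real CABAC uses 64 table-driven probability states per context, whereas this model adapts by simple symbol counting; the interval widths (9-bit Range, 10-bit Low) follow the H.264-style convention.

```python
class AdaptiveBinaryCoder:
    """Toy adaptive binary arithmetic coder (illustrative, not the patent's RTL)."""

    def __init__(self, num_contexts):
        # per-context symbol counts (Laplace-smoothed) stand in for the
        # table-driven probability states of real CABAC
        self.counts = [[1, 1] for _ in range(num_contexts)]
        self.low, self.range_ = 0, 0x1FF  # 10-bit Low, 9-bit Range
        self.outstanding = 0              # bits deferred until a carry resolves
        self.bits = []

    def _put(self, b):
        self.bits.append(b)
        self.bits.extend([1 - b] * self.outstanding)  # flush deferred bits
        self.outstanding = 0

    def encode(self, bit, ctx):
        c0, c1 = self.counts[ctx]
        # split the interval in proportion to the estimated P(bit = 0)
        r0 = max(1, self.range_ * c0 // (c0 + c1))
        if bit == 0:
            self.range_ = r0
        else:
            self.low += r0
            self.range_ -= r0
        self.counts[ctx][bit] += 1        # adapt the model to the symbol
        while self.range_ < 0x100:        # renormalize to keep precision
            if self.low >= 0x200:
                self._put(1); self.low -= 0x200
            elif self.low < 0x100:
                self._put(0)
            else:                         # interval straddles the midpoint:
                self.outstanding += 1     # the output bit is not yet decidable
                self.low -= 0x100
            self.low <<= 1
            self.range_ <<= 1
```

A skewed symbol stream drives the per-context counts apart, so the more probable symbol is assigned the larger subinterval and costs fewer output bits, which is exactly the adaptivity the paragraph above describes.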
A patent application filed by Tsinghua University, "Parallel renormalization coding implementation circuit and coding method based on CABAC" (application no. 201010103440.9, publication no. CN101771879A), discloses a parallel renormalization coding circuit and method based on CABAC. The circuit comprises a two-stage pipeline: the first stage completes the renormalization operation and the second produces the output bitstream; the two stages are connected by a first-in-first-out buffer (FIFO) with a depth of 5. This method uses the two-stage pipeline to complete renormalization and encoding, but realizes only part of the CABAC structure: it performs renormalization alone, without context modeling or binarization, and so cannot complete CABAC encoding independently.
A patent application filed by Chengdu Shijia Electronic Industrial Co., Ltd., "High-performance CABAC encoder design method" (application no. 201010106086.8, publication no. CN102148997A), discloses a high-performance CABAC encoder design method implemented in three steps: (1) complete the arithmetic coding part; (2) complete the renormalization part; (3) complete the bit-generation part. However, this design method is only a general scheme without a physical circuit; its implementation efficiency is not high, and high-throughput operation is difficult to achieve.
A patent application filed by Tsinghua University, "Parallel encoding implementation circuit and coding method based on CABAC in H.264/AVC" (application no. 201010291264.9, publication no. CN101951516), discloses a parallel encoding circuit and method based on the CABAC of H.264/AVC. It comprises a binarization engine for the parallel renormalization computation, a context model engine that reads and updates two bits of context per cycle, a parallel renormalization engine that normalizes two bits per cycle, and an RBSP bitstream generation engine that produces the RBSP output bitstream; a 3-write, 2-read FIFO queue connects the parallel renormalization engine to the RBSP bitstream generation engine to match the speeds of the two engines. Because of its design limits, however, it cannot solve the data overflow problem that arises when the bitstream is packed rapidly during large bursts of output.
Summary of the invention
The object of the invention is to propose a CABAC-based image compression entropy coder that remedies the defects and deficiencies of the above background art: it improves coding efficiency, increases circuit throughput, and avoids data overflow when the bitstream is packed rapidly.
To achieve the above object, the CABAC-based image compression entropy coder of the present invention comprises: an image control unit, a symbol generation module, a symbol FIFO, a binary arithmetic coding module, a context memory, and a bitstream byte-packing module. The image control unit connects to the reset and enable terminals of each module and coordinates the modules by controlling the generation of their reset and enable signals. The symbol FIFO connects the symbol generation module to the binary arithmetic coding module and stores the binarized symbols and corresponding context models produced by the symbol generation module. The context memory connects to the binary arithmetic coding module, stores the context information of 512 context models, and supports the read and update operations on context models during binary coding. The coder is characterized in that:
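The module chain above can be modeled in software as three stages decoupled by FIFOs. This is a conceptual data-flow sketch only (the function names and helper callbacks are illustrative, not from the patent); `deque`s stand in for the hardware symbol FIFO and bitstream FIFO that absorb the rate mismatch between stages.

```python
from collections import deque

def encode_image(syntax_elements, binarize, model_context, arith_encode, pack_bytes):
    """Conceptual data flow of the coder: symbol generation -> FIFO ->
    arithmetic coding -> FIFO -> byte packing. All callbacks are supplied
    by the caller; this sketches only the pipeline structure."""
    symbol_fifo = deque()   # (binary symbol, context index) pairs
    stream_fifo = deque()   # intermediate bitstream data

    # stage 1: symbol generation = binarization + context modeling
    for elem in syntax_elements:
        for pos, bit in enumerate(binarize(elem)):
            symbol_fifo.append((bit, model_context(elem, pos)))

    # stage 2: binary arithmetic coding reads symbols, emits a partial stream
    while symbol_fifo:
        bit, ctx = symbol_fifo.popleft()
        stream_fifo.extend(arith_encode(bit, ctx))

    # stage 3: byte packing forms the final bitstream
    return pack_bytes(stream_fifo)
```

In hardware the three stages run concurrently and the FIFOs let a fast producer run ahead of a slow consumer; the sequential loops here only show what data crosses each FIFO.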
A bitstream FIFO is connected between the binary arithmetic coding module and the bitstream byte-packing module, and stores the intermediate bitstream output by the binary arithmetic coding module;
Said symbol generation module consists of a first memory, a second memory, a third memory, a first multiplexer (MUX), a state machine control unit, a header state machine, a coefficient information state machine, a second MUX, a binarization submodule and a context modeling submodule. The inputs of the first, second and third memories all connect to the external input, and their outputs all connect to the first MUX, so that syntax elements read from the external input are stored in the memory corresponding to their prediction mode. The state machine control unit connects to the header state machine and the coefficient information state machine and controls the normal transitions of the two state machines. The input of the binarization submodule connects to the header state machine and the coefficient information state machine, and its output connects to the second MUX; it performs the binarization operation and writes the binarized symbols into the symbol FIFO through the second MUX. The input of the context modeling submodule likewise connects to the header state machine and the coefficient information state machine, and its output connects to the second MUX; it performs context modeling and writes the context information into the symbol FIFO through the second MUX;
Said binary arithmetic coding module consists of a four-stage pipeline connected in sequence. The first stage consists of a context reading submodule that reads the character to be encoded and its context model from the symbol FIFO. The second stage consists of a character probability state update submodule, a probability state calculation submodule and a coding-mode decision submodule; it updates the character probability state, calculates the intermediate variables for the most probable / least probable symbol (MPS/LPS), decides the coding mode, and passes the calculated probability state and intermediate variables to the next stage. The third stage consists of a context update submodule, a coding-interval normalization submodule and an output-bit-count calculation submodule; it updates the context state, normalizes the coding interval Range, and selects the shift count to send to the next stage. The fourth stage consists of an interval lower-bound normalization submodule and a partial bitstream generation submodule; it normalizes the interval lower bound Low, calculates the new Low value, generates the partial bitstream and writes it into the bitstream FIFO;
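The split of renormalization work between stages three and four can be sketched in software. This is an illustrative model using H.264-style widths (9-bit Range, 10-bit Low), not the patent's RTL: the point is that the shift count is computed once from Range (stage three) and then applied to Low while the output bits are produced (stage four), instead of iterating a while-loop.

```python
def shift_count(rng):
    """Stage 3: number of left shifts that returns Range to [0x100, 0x1FF]."""
    n = 0
    while rng < 0x100:
        rng <<= 1
        n += 1
    return n

def renormalize(low, rng, outstanding=0):
    """Stages 3+4 combined: apply the shift count to Low, emitting bits.
    `outstanding` counts bits deferred because of a possible carry; in a
    real coder it persists across calls."""
    bits = []
    n = shift_count(rng)
    for _ in range(n):
        if low >= 0x200:
            bits.append(1)
            bits.extend([0] * outstanding)  # resolve deferred bits as 1,0...0
            outstanding = 0
            low -= 0x200
        elif low < 0x100:
            bits.append(0)
            bits.extend([1] * outstanding)  # resolve deferred bits as 0,1...1
            outstanding = 0
        else:
            outstanding += 1                # midpoint straddle: defer the bit
            low -= 0x100
        low <<= 1
    return low, rng << n, bits, outstanding
```

In hardware the `for` loop collapses into a barrel shift by `n` plus combinational bit selection, which is what makes a one-cycle pipeline stage possible.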
Said bitstream byte-packing module consists of a barrel shifter, a state machine controller, a waiting-character register, a coding buffer register and a bitstream output submodule. The state machine controller connects to the barrel shifter, the waiting-character register, the coding buffer register and the bitstream output submodule, and controls correct packing and output of the bitstream through its state transitions. The coding buffer register connects to the bitstream FIFO and reads the intermediate bitstream data from it. The output of the waiting-character register connects to the barrel shifter and stores the barrel shifter's shift count. The data input of the barrel shifter connects to the output of the coding buffer register and its output connects to the input of the bitstream output submodule; it shifts the data of the coding buffer register and passes it to the bitstream output submodule, which processes the shifted data to obtain the final bitstream.
Preferably, in the above CABAC-based image compression entropy coder, the binarization submodule in the symbol generation module consists of a binary-sequence ROM, a prefix ROM and a suffix ROM, which respectively store the binary sequence, prefix and suffix corresponding to each binarized coefficient. The address ends of the three memories all connect to the output of the first MUX; the coefficient output by the first MUX is used as the address to look up the three ROMs, yielding the binary sequence, prefix and suffix of the binarized coefficient, which completes the binarization operation; the result is written into the symbol FIFO through the second MUX.
Preferably, in the above CABAC-based image compression entropy coder, the context modeling submodule in the symbol generation module consists of a header modeling submodule, a coefficient information modeling submodule and a third MUX. The input of the header modeling submodule connects to the header state machine and the first MUX, and its output connects to the third MUX; it performs the context modeling of the header information. The input of the coefficient information modeling submodule connects to the coefficient information state machine and the first MUX, and its output connects to the third MUX; it performs the context modeling of the coefficient information.
Preferably, in the above CABAC-based image compression entropy coder, the header modeling submodule 2101 consists of logic circuitry that derives the context model of the current state directly from the current state of the header state machine 26 and outputs it to the third MUX 2103.
Preferably, in the above CABAC-based image compression entropy coder, the coefficient information modeling submodule comprises:
A first non-zero coefficient context ROM, storing the context models corresponding to the coefficient positions of the intra 4×4 prediction mode; its data output connects to a fourth MUX;
A second non-zero coefficient context ROM, storing the context models corresponding to the coefficient positions of the intra 8×8 prediction mode; its data output connects to the fourth MUX;
A third non-zero coefficient context ROM, storing the context models corresponding to the coefficient positions of the intra 16×16 prediction mode; its data output connects to the fourth MUX;
A first last-non-zero coefficient context ROM, storing the context models corresponding to the last non-zero coefficient positions of the intra 4×4 prediction mode; its data output connects to a fifth MUX;
A second last-non-zero coefficient context ROM, storing the context models corresponding to the last non-zero coefficient positions of the intra 8×8 prediction mode; its data output connects to the fifth MUX;
A third last-non-zero coefficient context ROM, storing the context models corresponding to the last non-zero coefficient positions of the intra 16×16 prediction mode; its data output connects to the fifth MUX;
An address counter, connected to the address ends of the six context ROMs above; when context modeling of coefficient information starts, the address counter is reset, and it increments by one for each input coefficient; its value addresses the six context ROMs, so that the six context numbers corresponding to the coefficient position are read out;
The fourth MUX, which receives the data output by the first, second and third non-zero coefficient context ROMs and selects one for output according to the prediction type;
The fifth MUX, which receives the data output by the first, second and third last-non-zero coefficient context ROMs and selects one for output according to the prediction type;
A zero-coefficient decision unit, which judges whether the current coefficient is non-zero; it connects to the data select end of a sixth MUX;
A last-coefficient decision unit, which judges whether the current coefficient is the last non-zero coefficient; it connects to the data select end of the sixth MUX;
The sixth MUX, whose data inputs connect to the fourth MUX and the fifth MUX; according to the judgments of the zero-coefficient decision unit and the last-coefficient decision unit, it selects among the constants 0, 16, 64 and 254, the output of the fourth MUX and the output of the fifth MUX, and passes the selection to the next stage.
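The table-lookup structure in the claim above can be modeled in software. The ROM contents below are placeholders (the patent does not list the actual context numbers); only the structure matches the description: per-position tables selected by prediction type, walked by an address counter that increments once per input coefficient.

```python
# Placeholder ROM contents -- illustrative only, not the patent's tables.
ROMS = {
    ("4x4", "nonzero"): list(range(16)),
    ("8x8", "nonzero"): list(range(64)),
    ("16x16", "nonzero"): list(range(256)),
    ("4x4", "last"): list(range(16)),
    ("8x8", "last"): list(range(64)),
    ("16x16", "last"): list(range(256)),
}

class CoefficientContextModeler:
    """Software model of the six-ROM lookup: the address counter walks the
    coefficient positions, and the prediction type plays the role of the
    fourth and fifth MUX select signals."""

    def __init__(self, pred_type):
        self.pred_type = pred_type
        self.addr = 0                      # address counter, reset at start

    def next_contexts(self):
        """Return (non-zero-flag context, last-flag context) for the next
        coefficient, then advance the address counter."""
        nz = ROMS[(self.pred_type, "nonzero")][self.addr]
        last = ROMS[(self.pred_type, "last")][self.addr]
        self.addr += 1                     # one increment per input coefficient
        return nz, last
```

The sixth MUX of the claim, which further chooses among these two lookups and the fixed context numbers 0, 16, 64 and 254 according to the zero/last judgments, is omitted here because the patent does not explain the role of those constants.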
Preferably, in the above CABAC-based image compression entropy coder, the context memory stores the context information of 512 context models and supports the read and update operations on context models during binary coding. Each context entry is represented with 7 bits: the highest bit stores the most probable symbol of the context, and the lower six bits store the state corresponding to the context number.
Compared with the prior art, the present invention has the following advantages:
First, the invention adopts a fully pipelined design overall, with a control module that prevents pipeline stalls; it efficiently reuses circuitry across multiple coding modes, which increases circuit throughput, reduces circuit scale and complexity, shortens the critical path and speeds up processing.
Second, the binary arithmetic coding module adopts a four-stage pipeline with a well-balanced distribution of work, each stage completing its function in one clock cycle; this improves coding efficiency, and a data forwarding circuit resolves the data hazards of the single-cycle pipeline.
Third, the invention adds a bitstream FIFO between the binary arithmetic coding module and the byte-packing module, which absorbs the unstable rate at which the renormalization operation produces partial bitstreams and prevents pipeline stalls or data overflow.
Fourth, the invention combines a barrel shifter with a state machine controller and designs a packing path that outputs 16 bits at a time; it can cope with large bursts of up to 127 output bits, effectively solving the overflow problem of bitstream packing in such special cases.
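The wide-packing idea behind the fourth advantage can be sketched as a bit accumulator that accepts variable-length bursts and drains fixed 16-bit words. This is a software analogue, not the barrel-shifter RTL; the 127-bit figure is read here as the maximum burst length after a long run of outstanding bits.

```python
class StreamPacker:
    """Pack variable-length bit bursts into 16-bit output words
    (illustrative software model of the byte-packing data path)."""

    WORD = 16

    def __init__(self):
        self.acc = 0        # bit accumulator (plays the barrel shifter's role)
        self.nbits = 0      # number of bits currently held
        self.words = []     # packed 16-bit output words

    def push(self, value, nbits):
        """Append `nbits` bits of `value` (MSB first); nbits may be large,
        e.g. a 127-bit burst, without overflowing the packer."""
        self.acc = (self.acc << nbits) | (value & ((1 << nbits) - 1))
        self.nbits += nbits
        while self.nbits >= self.WORD:      # drain every full 16-bit word
            self.nbits -= self.WORD
            self.words.append((self.acc >> self.nbits) & 0xFFFF)
        self.acc &= (1 << self.nbits) - 1   # keep only the residue
```

Because the drain loop runs inside `push`, the packer's internal state never holds more than 15 residual bits regardless of burst size, which is the property the hardware design needs to avoid overflow.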
Description of drawings
Fig. 1 is a structural block diagram of the present invention;
Fig. 2 is a structural block diagram of the symbol generation module of the present invention;
Fig. 3 is a structural block diagram of the binary arithmetic coding module of the present invention;
Fig. 4 is a structural block diagram of the binarization submodule of the present invention;
Fig. 5 is the state transition diagram of the header state machine of the present invention;
Fig. 6 is a schematic diagram of the coefficient information processing order of the present invention;
Fig. 7 is a structural block diagram of the coefficient information modeling submodule of the present invention;
Fig. 8 is a schematic diagram of the context update structure of the present invention;
Fig. 9 is a schematic diagram of data forwarding in the probability state update stage of the present invention;
Fig. 10 is a structural block diagram of the bitstream byte-packing module of the present invention;
Fig. 11 is the state transition diagram of the byte-packing module of the present invention.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings and embodiments.
With reference to Fig. 1, the CABAC-based image compression entropy coder of the present invention comprises an image control unit 1, a symbol generation module 2, a symbol FIFO 3, a binary arithmetic coding module 4, a context memory 5, a bitstream FIFO 6 and a bitstream byte-packing module 7. The image control unit 1 connects to the reset and enable terminals of the symbol generation module 2, the symbol FIFO 3, the binary arithmetic coding module 4, the context memory 5, the bitstream FIFO 6 and the bitstream byte-packing module 7, and coordinates the circuit by controlling the generation of their reset and enable signals. The input of the symbol generation module 2 connects to the external input; it reads data from outside and performs binarization and context modeling on the data. The input of the symbol FIFO 3 connects to the symbol generation module 2 and its output connects to the binary arithmetic coding module 4; it stores the binarized symbols and context models produced by the symbol generation module 2 until the binary arithmetic coding module 4 reads them, which resolves the rate mismatch between the two modules and prevents pipeline stalls or data overflow. The context memory 5 connects to the binary arithmetic coding module 4, stores the context information of 512 context models, and together with the binary arithmetic coding module 4 performs the context model updates during binary coding. The binary arithmetic coding module 4 arithmetically encodes the binarized symbols it reads, using the context information stored in the context memory 5, to obtain the intermediate bitstream. The input of the bitstream FIFO 6 connects to the binary arithmetic coding module 4 and its output connects to the bitstream byte-packing module 7; it stores the intermediate bitstream produced by the binary arithmetic coding module 4 until the byte-packing module 7 reads it, again resolving the rate mismatch and preventing pipeline stalls or data overflow. The bitstream byte-packing module 7 reads the intermediate bitstream data from the bitstream FIFO 6, removes the overflow bits from the intermediate bitstream, produces the final encoded bitstream and packs it for output, completing the encoding operation.
Said context memory 5 stores the context information of 512 context models and supports the context model read and update operations during binary coding. Each context is represented with 7 bits: the highest bit stores the most probable symbol of the context, and the lower six bits store the state corresponding to the context number. When a compression request starts, the image control unit initializes the context state memory, which takes 512 clock cycles. After the context state memory initialization completes, the image control unit 1 directs the binary arithmetic coding module 4 to start arithmetic coding.
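The 7-bit context entries described above can be modeled with a pair of pack/unpack helpers (a software sketch; the initial MPS/state values below are placeholders, since the patent does not list the initialization table):

```python
def pack_context(mps, state):
    """Pack a context entry: bit 6 = most probable symbol (MPS),
    bits 5..0 = probability state index (0..63)."""
    assert mps in (0, 1) and 0 <= state < 64
    return (mps << 6) | state

def unpack_context(entry):
    """Recover (MPS, state) from a 7-bit context entry."""
    return (entry >> 6) & 1, entry & 0x3F

# Initialization: the image control unit writes all 512 entries, one per
# clock cycle in hardware -- hence the 512-cycle initialization time.
# The all-zero initial value here is a placeholder.
context_memory = [pack_context(0, 0) for _ in range(512)]
```

Packing MPS and state into one word means each context costs a single 7-bit memory access per read or update, which keeps the context memory small (512 x 7 bits).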
The symbol generation module 2, the binary arithmetic coding module 4 and the bitstream byte-packing module 7 are shown in Fig. 2, Fig. 3 and Fig. 10 respectively; their structures and functions are described below.
With reference to Fig. 2, the input of the symbol generation module 2 connects to the external input and its output connects to the symbol FIFO 3; it reads syntax elements from the external input and performs binarization and context modeling on them. It consists of a first memory 21, a second memory 22, a third memory 23, a first MUX 24, a state machine control unit 25, a header state machine 26, a coefficient information state machine 27, a second MUX 28, a binarization submodule 29 and a context modeling submodule 210, wherein:
The inputs of the first memory 21, the second memory 22 and the third memory 23 all connect to the external input, and their outputs all connect to the first MUX 24, so that syntax elements read from the external input are stored in the memory corresponding to their prediction mode. The first memory 21 is 9 bits wide with a depth of 319 and stores the data of the intra 4×4 prediction mode; the second memory 22 is 9 bits wide with a depth of 269 and stores the data of the intra 8×8 prediction mode; the third memory 23 is 9 bits wide with a depth of 259 and stores the data of the intra 16×16 prediction mode;
The first MUX 24 selects and reads the data of the three memories according to the prediction type and sends them to the binarization submodule 29 and the context modeling submodule 210 to complete binarization and context modeling;
The state machine control unit 25 connects to the header state machine 26 and the coefficient information state machine 27; by controlling the normal transitions of the two state machines, it alternately completes the binarization and context modeling of the header information and the coefficient information;
The header state machine 26 connects to the binarization submodule 29 and the context modeling submodule 210 and controls these two modules to complete the binarization and context modeling of the header information. Its state transitions are shown in Fig. 5. Header processing is started by the macroblock reset signal together with the global enable signal. The machine first enters the invalid state INVALID and jumps to the pre-finish state TERMINAL when the enable signal goes high. The TERMINAL state takes one clock cycle; in this state, if the macroblock being processed is not the first macroblock of the slice, an end flag 0 is written to the symbol FIFO 3 through the second MUX 28; otherwise, no end flag is written. After the TERMINAL state completes, the machine jumps to the next state according to the prediction type. When the prediction type is the intra 16×16 prediction mode, it jumps to the dedicated 16×16-mode state, which takes one clock cycle; in this state the macroblock information, the prediction mode and the CBP information are written to the symbol FIFO 3 through the second MUX 28. The machine then enters the finish state FINISH and loops there until the compression request of the next macroblock arrives, when the reset signal returns it to the INVALID state to start processing a new header. When the prediction type is the intra 4×4 or intra 8×8 prediction mode, the machine jumps to the TSF state, which takes one clock cycle and writes the 8×8-mode flag to the symbol FIFO 3 through the second MUX 28. It then enters the mode-coding state MODE, which takes 16 clock cycles for intra 4×4 prediction and 4 clock cycles for intra 8×8 prediction; in this state the prediction-mode flags are written to the symbol FIFO 3 through the second MUX 28. After the MODE state completes, the machine enters the CBP state, which takes 4 clock cycles and writes the all-zero flags of the 8×8 block transform coefficients to the symbol FIFO 3 through the second MUX 28. It then jumps to the FINISH state and loops there until the next macroblock compression request arrives, when the reset signal returns it to the INVALID state to start processing a new header.
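The header state machine's transitions can be sketched as a small dictionary-driven model (behavioural only; the per-state cycle counts, FIFO writes and reset/enable handshake are simplified away):

```python
def next_header_state(state, pred_type):
    """One transition of the header state machine of Fig. 5.
    MODE takes 16 cycles for 4x4 prediction and 4 for 8x8, and CBP takes
    4 cycles; those dwell times are not modeled here."""
    transitions = {
        "INVALID": "TERMINAL",   # enable signal goes high
        "16x16": "FINISH",       # macroblock info, mode and CBP written
        "TSF": "MODE",           # 8x8-mode flag written
        "MODE": "CBP",           # prediction-mode flags written
        "CBP": "FINISH",         # all-zero transform-coefficient flags written
        "FINISH": "FINISH",      # loop until the next macroblock reset
    }
    if state == "TERMINAL":
        return "16x16" if pred_type == "16x16" else "TSF"
    return transitions[state]

# walk the intra 4x4 path from reset to completion
path, s = [], "INVALID"
while s != "FINISH":
    path.append(s)
    s = next_header_state(s, "4x4")
```

Separating the transition function from the dwell times mirrors the hardware split between next-state logic and the cycle counters that hold each state.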
The coefficient information state machine 27 connects to the binarization submodule 29 and the context modeling submodule 210 and controls these two modules to complete the binarization and context modeling of the coefficient information, as shown in Fig. 6. When processing coefficient information, the invention performs a different number of coding passes and a different coding order according to the prediction type. When the prediction type is 16×16, only one transform-coefficient pass is needed, processing 256 coefficients at a time; the non-zero coefficient flags, the last non-zero coefficient flags and the coefficient-minus-1 values are encoded in turn. When the prediction type is 8×8, four transform-coefficient passes are needed, each processing 64 coefficients. In each pass, one 8×8 prediction-mode flag = 1 is encoded first, then four 8×8 luminance-block prediction-mode flags and 8×8 luminance-block prediction modes, then four contains-non-zero-coefficient flags, followed by four rounds of non-zero coefficient flags, last non-zero coefficient flags and coefficient-minus-1 values. When the prediction type is 4×4, sixteen transform-coefficient passes are needed, each processing 16 coefficients. In each pass, one 8×8 prediction-mode flag = 0 is encoded first, then sixteen 4×4 luminance-block prediction-mode flags and 4×4 luminance-block prediction modes, then the contains-non-zero-coefficient flags of four 4×4 luminance blocks, followed by sixteen rounds of contains-non-zero-coefficient flags, non-zero coefficient flags, last non-zero coefficient flags and coefficient-minus-1 values.
The binarization submodule 29 consists of a binary-sequence ROM 291, a prefix ROM 292 and a suffix ROM 293, as shown in Fig. 4, which respectively store the binary sequence, prefix and suffix of each binarized coefficient. The address ends of the three memories all connect to the output of the first MUX 24; the coefficient output by the first MUX 24 is used as the address to look up the three ROMs, yielding the binary sequence, prefix and suffix of the binarized coefficient. This completes the binarization operation, and the result is written into the symbol FIFO 3 through the second MUX 28.
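The ROM-based binarization can be sketched by precomputing the codes into tables so that binarizing a coefficient is a single lookup. The unary code used below is illustrative only: the patent does not specify its binarization code, and CABAC in practice mixes truncated-unary and Exp-Golomb codes.

```python
MAX_COEFF = 32   # table size; a placeholder, not the patent's ROM depth

def build_binarization_roms(max_val):
    """Precompute binary-sequence, prefix and suffix tables indexed by
    coefficient value, mirroring the three-ROM structure of Fig. 4."""
    seq_rom, prefix_rom, suffix_rom = [], [], []
    for v in range(max_val):
        prefix = [1] * v + [0]   # unary prefix: v ones, terminating zero
        suffix = []              # a plain unary code has no suffix part
        seq_rom.append(prefix + suffix)
        prefix_rom.append(prefix)
        suffix_rom.append(suffix)
    return seq_rom, prefix_rom, suffix_rom

SEQ_ROM, PREFIX_ROM, SUFFIX_ROM = build_binarization_roms(MAX_COEFF)

def binarize(coeff):
    return SEQ_ROM[coeff]        # one ROM lookup replaces per-bit logic
```

Trading ROM storage for logic is what lets the hardware binarize a coefficient in a single cycle instead of iterating over its bits.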
The context modeling submodule 210 consists of a header modeling submodule 2101, a coefficient-information modeling submodule 2102 and a third MUX 2103. The input of the header modeling submodule 2101 is connected to the header state machine 26 and to the first MUX 24, and its output to the third MUX 2103; it performs the context modeling for the header. The input of the coefficient-information modeling submodule 2102 is connected to the coefficient-information state machine 27 and to the first MUX 24, and its output to the third MUX 2103; it performs the context modeling for the coefficient information. The header modeling submodule 2101 is built from combinational logic: it derives the context model of the current state directly from the current state of the header state machine 26 and outputs it to the third MUX 2103. As shown in Figure 7, the coefficient-information modeling submodule 2102 comprises: a first nonzero-coefficient context ROM 21022, storing the context models for the different positions of the intra-frame 4x4 prediction mode, its data output connected to the fourth MUX 21028; a second nonzero-coefficient context ROM 21023, storing the context models for the different positions of the intra-frame 8x8 prediction mode, its data output connected to the fourth MUX 21028; a third nonzero-coefficient context ROM 21024, storing the context models for the different positions of the intra-frame 16x16 prediction mode, its data output connected to the fourth MUX 21028; a first last-nonzero-coefficient context ROM 21025, storing the context models for the different last-nonzero-coefficient positions of the intra-frame 4x4 prediction mode, its data output connected to the fifth MUX 21029; a second last-nonzero-coefficient context ROM 21026, storing the
context models for the different last-nonzero-coefficient positions of the intra-frame 8x8 prediction mode, its data output connected to the fifth MUX 21029; a third last-nonzero-coefficient context ROM 21027, storing the context models for the different last-nonzero-coefficient positions of the intra-frame 16x16 prediction mode, its data output connected to the fifth MUX 21029; an address counter 21021, connected to the address ports of the six context ROMs above, which is reset when context modeling of the coefficient information starts and increments by one for every coefficient input, so that the six context ROMs are looked up at the address corresponding to the current coefficient, reading out the six context indices for that coefficient position; a fourth MUX 21028, which receives the data output by the first nonzero-coefficient context ROM 21022, the second nonzero-coefficient context ROM 21023 and the third nonzero-coefficient context ROM 21024 and selects one of them for output according to the prediction type; a fifth MUX 21029, which receives the data output by the first last-nonzero-coefficient context ROM 21025, the second last-nonzero-coefficient context ROM 21026 and the third last-nonzero-coefficient context ROM 21027 and selects one of them for output according to the prediction type; a zero-coefficient determiner 210211, which judges whether the current coefficient is nonzero, its output connected to the select port of the sixth MUX 210210; a last-coefficient determiner 210212, which judges whether the current coefficient is the last nonzero coefficient, its output connected to the select port of the sixth MUX 210210; and a sixth MUX 210210, whose data inputs are connected to the fourth MUX 21028 and the fifth MUX 21029 and which, according to
the judgments of the zero-coefficient determiner 210211 and the last-coefficient determiner 210212, selects among the constants 0, 16, 64 and 254, the output of the fourth MUX 21028 and the output of the fifth MUX 21029, and passes the selection to the next stage.
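The three-level multiplexer selection can be sketched in software as follows. This is an assumed illustration: the ROM contents are invented, and the mapping of the constants 0/16/64/254 to the zero/last-coefficient cases is a guess (only the constants themselves and the determiner inputs come from the text).

```python
# Software model of the 4th/5th/6th MUX selection in submodule 2102.
# Per-prediction-type context tables with hypothetical contents; the real
# ROMs hold one context index per coefficient position.
NZ_ROM   = {"4x4": [1, 2, 3], "8x8": [4, 5, 6], "16x16": [7, 8, 9]}
LAST_ROM = {"4x4": [10, 11, 12], "8x8": [13, 14, 15], "16x16": [16, 17, 18]}

def select_context(pred_type, addr, coeff, is_last):
    nz_ctx = NZ_ROM[pred_type][addr]      # 4th MUX: select by prediction type
    last_ctx = LAST_ROM[pred_type][addr]  # 5th MUX: select by prediction type
    # 6th MUX: driven by the zero-coefficient and last-coefficient determiners
    # (the constant chosen for the zero case is an assumption).
    if coeff == 0:
        return 0
    if is_last:
        return last_ctx
    return nz_ctx

print(select_context("8x8", 1, coeff=-3, is_last=False))  # 5
```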
Referring to Fig. 3, the binary arithmetic coding module 4 is made up of a four-stage pipeline connected in sequence.
The first pipeline stage consists of the context reading submodule 41, which reads the character to be encoded and its context model from the symbol FIFO 3 and, at the same time, reads the context memory 5 to obtain the most probable symbol (MPS) and the probability state corresponding to that context model.
The second pipeline stage consists of the character probability-state update submodule 42, the probability-state computation submodule 43 and the coding-mode decision submodule 44. It updates the character probability state, computes the intermediate variables for the MPS and the least probable symbol (LPS), decides the coding mode, and passes the computed probability state and intermediate variables to the next stage.
The third pipeline stage consists of the context update submodule 45, the interval-range normalization submodule 47 and the output-bit-count computation submodule 46. It updates the context state, normalizes the interval range Range, and selects the shift count, which it passes to the next stage. The normalization of Range works as follows: whenever Range falls outside the interval (2^8, 2^9], the encoder left-shifts Range by count_shift bits within one clock cycle, where the shift count count_shift is the number of leading zeros before the first nonzero bit in the binary value of Range. In the context-state read and update logic, forwarding is used to resolve data hazards. As shown in Figure 8, when the first pipeline stage starts, the read register obtains the context index; in the first stage, the context-state memory is read to obtain the MPS and probability state for that index. The second stage computes the new MPS and probability state and writes them to the pipeline register. The third stage writes the contents of the second-stage pipeline register back to the context-state memory. As shown in Figure 9, adjacent characters may enter the binary arithmetic coding module 4 in consecutive cycles with the same context index, so the context state is selected through a MUX to avoid the hazard: to prevent a pipeline stall, a data-hazard control unit forwards the data in the pipeline register that have not yet been written to the context memory 5 to the input register of the context memory 5 in the next clock cycle through a MUX.
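The Range renormalization rule can be modeled directly from the description above. A minimal sketch, assuming a 9-bit Range register (which the target interval (2^8, 2^9] implies); the single-cycle barrel shift is modeled by one arithmetic shift:

```python
RANGE_BITS = 9  # Range lives in a 9-bit register; target interval is (2^8, 2^9]

def renormalize(range_val):
    """Return (new Range, count_shift).

    count_shift is the number of leading zeros before the first 1 bit of
    Range, exactly as the patent defines it; shifting by that amount moves
    the leading 1 into the top bit position.
    """
    count_shift = RANGE_BITS - range_val.bit_length()
    return range_val << count_shift, count_shift

print(renormalize(0b000110000))   # (384, 3): 48 shifted back into (256, 512]
```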
The fourth pipeline stage consists of the interval-lower-bound normalization submodule 48 and the partial-bitstream generation submodule 49. It normalizes the interval lower bound Low, computes the new Low value, generates the partial code stream and writes it into the code stream FIFO 6. The normalization of Low works as follows: within one cycle the old Low value is left-shifted by count_shift bits, and the first (most significant) bit of the new Low is assigned according to the top (count_shift+1) bits of the old value: only when the top (count_shift+1) bits of the old Low are all 1 is the first bit of the new Low set to 1 after the count_shift-bit shift; otherwise the first bit is 0. The data written into the code stream FIFO 6 are 18 bits wide: the highest bit is the end-of-coding flag, a 1 indicating that the pre-termination decision coding must be completed; the next three bits give the number of valid bits among the low 14 bits of the current code stream; the low 14 bits are the intermediate code stream produced during Low normalization, from which the packing stage derives one bit of final code stream for every two bits of these 14-bit data.
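The Low normalization rule above can also be modeled bit-for-bit. A sketch under stated assumptions: the 10-bit width of the Low register is chosen only for illustration, since the patent does not give it.

```python
LOW_BITS = 10  # width of the Low register (assumed for illustration)

def normalize_low(low, count_shift):
    # Top (count_shift+1) bits of the old Low decide the new first bit:
    # all ones -> new MSB is 1, otherwise 0 (the rule stated in the text).
    top = low >> (LOW_BITS - (count_shift + 1))
    msb = 1 if top == (1 << (count_shift + 1)) - 1 else 0
    # Left-shift within the register width, then force the first bit.
    shifted = (low << count_shift) & ((1 << LOW_BITS) - 1)
    shifted &= (1 << (LOW_BITS - 1)) - 1
    return shifted | (msb << (LOW_BITS - 1))

# Top 3 bits of 0b1110000000 are all 1 -> MSB stays 1 after shifting by 2.
print(format(normalize_low(0b1110000000, 2), "010b"))   # 1000000000
```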
Referring to Figure 10, the code stream packing byte module 7 consists of a barrel shifter 71, a state machine controller 72, a pending-character register 73, a coding buffer register 74 and a code stream output submodule 75. State transitions in the state machine controller 72 control the reading of intermediate code stream data from the code stream FIFO 6; after the overflow bits are removed from the intermediate code stream, the data are stored in the coding buffer register 74, and the value in the pending-character register 73 then controls the barrel shifter 71 to shift the contents of the coding buffer register 74, producing the buffered output bits, which are sent to the code stream output submodule 75 to complete packed output.
The state machine controller 72 is connected to the barrel shifter 71, the pending-character register 73, the coding buffer register 74 and the code stream output submodule 75, and controls correct packed output of the code stream through its state transitions. The coding buffer register 74 is connected to the code stream FIFO 6 and to the data input of the barrel shifter 71; it reads the external intermediate code stream data, processes them and sends them to the barrel shifter 71 for shifted output. The output of the pending-character register 73 is connected to the control port of the barrel shifter 71 and governs its shift count. The data output of the barrel shifter 71 is connected to the input of the code stream output submodule 75; the barrel shifter shifts the data of the coding buffer register 74 and passes them to the code stream output submodule 75, which processes the output of the barrel shifter 71 to obtain the final code stream.
The state transitions of the state machine controller 72 are shown in Figure 11. When a compression request arrives, the code stream packing byte module 7 is reset. When the code stream FIFO 6 is not empty, the machine jumps to the FIFO-read state and performs one FIFO read, then enters the data-wait state until the FIFO data become valid. It then enters the data-distribution state, in which the end-of-coding flag, the valid-bit count and the intermediate code stream are extracted from the 18-bit word that was read. Next it enters the byte-packing state, where the translation is performed: each step converts two bits of the intermediate code stream into one output bit and stores it in the coding buffer register 74; the number of steps is governed by the valid-bit count, which is decremented after each step. This stage is carried out jointly by the coding buffer register 74, the pending-character register 73 and the barrel shifter 71; the current two bits of the intermediate code stream are handled in three cases. If the two bits are 00, a 0 is appended to the coding buffer register 74, and the barrel shifter shifts the data in the coding buffer register 74 with the low bits filled with 1s; the shift count is controlled by the pending-character register 73, which is cleared afterwards. If the two bits are 01, the count in the pending-character register 73 is incremented by one. If the two bits are 10 or 11, a 1 is appended to the coding buffer register 74, and the barrel shifter shifts the data in the coding buffer register 74 with the low bits filled with 0s; the shift count is controlled by the pending-character register 73, which is cleared afterwards.
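The three-case translation above is the familiar outstanding-bit (carry) resolution of arithmetic coding. A bit-level software model, with the shift-and-fill of the barrel shifter replaced by an append of repeated bits and the two registers modeled as a list and a counter:

```python
def pack_two_bit_stream(pairs):
    """Translate 2-bit intermediate codes into output bits.

    00 -> emit 0 followed by `pending` ones;  01 -> defer (pending += 1);
    10 or 11 -> emit 1 followed by `pending` zeros.
    Mirrors coding buffer 74, pending-character register 73 and shifter 71.
    """
    out, pending = [], 0
    for two_bits in pairs:
        if two_bits == 0b00:
            out.append(0)
            out.extend([1] * pending)   # low bits filled with 1
            pending = 0
        elif two_bits == 0b01:
            pending += 1                # outstanding bit: decision deferred
        else:                           # 0b10 or 0b11
            out.append(1)
            out.extend([0] * pending)   # low bits filled with 0
            pending = 0
    return out

print(pack_two_bit_stream([0b01, 0b01, 0b10, 0b00]))  # [1, 0, 0, 0]
```

Deferring the 01 case is what keeps a later carry from having to ripple through already-emitted bytes, which is why the hardware only needs one shifter and one counter.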
When the number of buffered bits in the coding buffer register 74 exceeds 8, the contents of the coding buffer register 74 are shifted out to the code stream output submodule 75; the shift amount is the integral multiple of 8 closest to the number of buffered bits. If the end-of-coding flag is 1, the machine enters the flush state, which performs special handling: when the terminating '1' of the coding appears, an extra flush of the buffer is carried out. The machine then enters the done state and coding is complete. When the valid-bit count reaches 1, the state machine enters the idle state, ready for the next cycle.
Each time the code stream output submodule 75 receives buffered bits, it performs a 16-bit output, emitting the buffered bits byte by byte; this completes the byte-packing operation.

Claims (6)

1. An image compression entropy coder based on CABAC, comprising an image control unit (1), a symbol generation module (2), a symbol FIFO (3), a binary arithmetic coding module (4), a context memory (5) and a code stream packing byte module (7); the image control unit (1) is connected to the reset and enable ports of each module and, by controlling the generation of each module's reset and enable signals, coordinates the operation of all modules; the symbol FIFO (3) is connected to the symbol generation module (2) and to the binary arithmetic coding module (4), and stores the binarized symbols and corresponding context models produced by the symbol generation module (2); the context memory (5) is connected to the binary arithmetic coding module (4) and stores the context information of 512 context models, implementing the read and update operations on the context models during binary coding; characterized in that:
a code stream FIFO (6) is connected between the binary arithmetic coding module (4) and the code stream packing byte module (7), the code stream FIFO (6) storing the intermediate code stream output by the binary arithmetic coding module (4);
said symbol generation module (2) consists of a first memory (21), a second memory (22), a third memory (23), a first MUX (24), a state machine control unit (25), a header state machine (26), a coefficient-information state machine (27), a second MUX (28), a binarization submodule (29) and a context modeling submodule (210); the inputs of the first memory (21), the second memory (22) and the third memory (23) are all connected to the external input and their outputs to the first MUX (24), so that syntax elements read from the external input are stored in the corresponding memory according to their prediction mode; the state machine control unit (25) is connected to the header state machine (26) and to the coefficient-information state machine (27) and controls the normal transitions of the two state machines; the input of the binarization submodule (29) is connected to the header state machine (26) and to the coefficient-information state machine (27), and its output to the second MUX (28), to perform the binarization operation and write the binarized symbols into the symbol FIFO (3) through the second MUX (28); the input of the context modeling submodule (210) is connected to the header state machine (26) and to the coefficient-information state machine (27), and its output to the second MUX (28), to perform the context modeling operation and write the context information into the symbol FIFO (3) through the second MUX (28);
said binary arithmetic coding module (4) consists of a four-stage pipeline connected in sequence: the first pipeline stage consists of a context reading submodule (41), which reads the character to be encoded and its context model from the symbol FIFO (3); the second pipeline stage consists of a character probability-state update submodule (42), a probability-state computation submodule (43) and a coding-mode decision submodule (44), which update the character probability state, compute the intermediate variables for the most/least probable symbols MPS/LPS, decide the coding mode, and pass the computed probability state and intermediate variables to the next stage; the third pipeline stage consists of a context update submodule (45), an interval-range normalization submodule (47) and an output-bit-count computation submodule (46), which update the context state, normalize the interval range Range, select the shift count and pass it to the next stage; the fourth pipeline stage consists of an interval-lower-bound normalization submodule (48) and a partial-bitstream generation submodule (49), which normalize the interval lower bound Low, compute the new Low value, generate the partial code stream and write it into the code stream FIFO (6);
said code stream packing byte module (7) consists of a barrel shifter (71), a state machine controller (72), a pending-character register (73), a coding buffer register (74) and a code stream output submodule (75); the state machine controller (72) is connected to the barrel shifter (71), the pending-character register (73), the coding buffer register (74) and the code stream output submodule (75), and controls correct packed output of the code stream through its state transitions; the coding buffer register (74) is connected to the code stream FIFO (6) and reads the intermediate code stream data from the code stream FIFO (6); the output of the pending-character register (73) is connected to the barrel shifter (71) and stores its shift count; the data input of the barrel shifter (71) is connected to the output of the coding buffer register (74) and its output to the input of the code stream output submodule (75), so that the data of the coding buffer register (74) are shifted and passed to the code stream output submodule (75), which processes the output of the barrel shifter (71) to obtain the final code stream.
2. The CABAC-based image compression entropy coder according to claim 1, characterized in that the binarization submodule (29) in the symbol generation module (2) consists of a binary-sequence ROM (291), a prefix ROM (292) and a suffix ROM (293), which store, respectively, the binary sequence, the prefix and the suffix corresponding to each binarized coefficient; the address ports of the three memories are all connected to the output of the first MUX (24), and the coefficient output by the first MUX (24) is used as the address to look up the three ROMs, yielding the binary sequence, prefix and suffix of the binarized coefficient and completing the binarization operation; the result is written into the symbol FIFO (3) through the second MUX (28).
3. The CABAC-based image compression entropy coder according to claim 1, characterized in that the context modeling submodule (210) in the symbol generation module (2) consists of a header modeling submodule (2101), a coefficient-information modeling submodule (2102) and a third MUX (2103); the input of the header modeling submodule (2101) is connected to the header state machine (26) and to the first MUX (24), and its output to the third MUX (2103), to perform context modeling for the header; the input of the coefficient-information modeling submodule (2102) is connected to the coefficient-information state machine (27) and to the first MUX (24), and its output to the third MUX (2103), to perform context modeling for the coefficient information.
4. The CABAC-based image compression entropy coder according to claim 3, characterized in that the header modeling submodule (2101) is built from combinational logic and derives the context model of the current state directly from the current state of the header state machine (26), outputting it to the third MUX (2103).
5. The CABAC-based image compression entropy coder according to claim 3, characterized in that the coefficient-information modeling submodule (2102) comprises:
a first nonzero-coefficient context ROM (21022), storing the context models for the different positions of the intra-frame 4x4 prediction mode, its data output connected to the fourth MUX (21028);
a second nonzero-coefficient context ROM (21023), storing the context models for the different positions of the intra-frame 8x8 prediction mode, its data output connected to the fourth MUX (21028);
a third nonzero-coefficient context ROM (21024), storing the context models for the different positions of the intra-frame 16x16 prediction mode, its data output connected to the fourth MUX (21028);
a first last-nonzero-coefficient context ROM (21025), storing the context models for the different last-nonzero-coefficient positions of the intra-frame 4x4 prediction mode, its data output connected to the fifth MUX (21029);
a second last-nonzero-coefficient context ROM (21026), storing the context models for the different last-nonzero-coefficient positions of the intra-frame 8x8 prediction mode, its data output connected to the fifth MUX (21029);
a third last-nonzero-coefficient context ROM (21027), storing the context models for the different last-nonzero-coefficient positions of the intra-frame 16x16 prediction mode, its data output connected to the fifth MUX (21029);
an address counter (21021), connected to the address ports of the six context ROMs above; when context modeling of the coefficient information starts, the address counter (21021) is reset, and it increments by one for every coefficient input, so that the six context ROMs are looked up at the address corresponding to the current coefficient, reading out the six context indices for that coefficient position;
a fourth MUX (21028), which receives the data output by the first nonzero-coefficient context ROM (21022), the second nonzero-coefficient context ROM (21023) and the third nonzero-coefficient context ROM (21024) and selects one of them for output according to the prediction type;
a fifth MUX (21029), which receives the data output by the first last-nonzero-coefficient context ROM (21025), the second last-nonzero-coefficient context ROM (21026) and the third last-nonzero-coefficient context ROM (21027) and selects one of them for output according to the prediction type;
a zero-coefficient determiner (210211), which judges whether the current coefficient is nonzero, its output connected to the select port of the sixth MUX (210210);
a last-coefficient determiner (210212), which judges whether the current coefficient is the last nonzero coefficient, its output connected to the select port of the sixth MUX (210210);
a sixth MUX (210210), whose data inputs are connected to the fourth MUX (21028) and the fifth MUX (21029) and which, according to the judgments of the zero-coefficient determiner (210211) and the last-coefficient determiner (210212), selects among the constants 0, 16, 64 and 254, the output of the fourth MUX (21028) and the output of the fifth MUX (21029), and passes the selection to the next stage.
6. The CABAC-based image compression entropy coder according to claim 1, characterized in that the context memory (5) stores the context information of 512 context models and implements the read and update operations on the context models during binary coding; each piece of context information is represented by 7 bits: the highest bit stores the most probable symbol for the context index, and the lower six bits store the state for the context index.
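The 7-bit context-information layout of claim 6 packs into a word as follows. A minimal sketch of the storage format only; the initial values and the update rule for the 6-bit state are not part of this document.

```python
STATE_BITS = 6  # lower six bits: probability state; top bit: MPS

def pack_context(mps, state):
    # 7-bit context word: [MPS | 6-bit state]
    assert mps in (0, 1) and 0 <= state < (1 << STATE_BITS)
    return (mps << STATE_BITS) | state

def unpack_context(word):
    return word >> STATE_BITS, word & ((1 << STATE_BITS) - 1)

memory = [pack_context(0, 0)] * 512   # 512 context models, as claimed
memory[37] = pack_context(1, 45)
print(unpack_context(memory[37]))     # (1, 45)
```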
CN201210251107.4A 2012-07-19 2012-07-19 Image compression and entropy coder based on CABAC (Context-Based Adaptive Binary Arithmetic Coding) Expired - Fee Related CN102801974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210251107.4A CN102801974B (en) 2012-07-19 2012-07-19 Image compression and entropy coder based on CABAC (Context-Based Adaptive Binary Arithmetic Coding)

Publications (2)

Publication Number Publication Date
CN102801974A true CN102801974A (en) 2012-11-28
CN102801974B CN102801974B (en) 2014-08-20

Family

ID=47200929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210251107.4A Expired - Fee Related CN102801974B (en) 2012-07-19 2012-07-19 Image compression and entropy coder based on CABAC (Context-Based Adaptive Binary Arithmetic Coding)

Country Status (1)

Country Link
CN (1) CN102801974B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1624579A2 (en) * 2004-08-02 2006-02-08 Samsung Electronics Co., Ltd. Apparatus and methods for binary arithmetic decoding using a pipelined structure
US20090225861A1 (en) * 2002-04-26 2009-09-10 Sony Corporation Coding device and method, decoding device and method, recording medium, and program
CN101771879A (en) * 2010-01-28 2010-07-07 清华大学 Parallel normalized coding realization circuit based on CABAC and coding method
CN101951516A (en) * 2010-09-25 2011-01-19 清华大学 Parallel encoding realization circuit and encoding method based on CABAC (Context-based Adaptive Binary Arithmetic Coding) in H.264/AVC (Advanced Video Coding)

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
THINH M. LE ET AL: "System-on-Chip Design Methodology for a", 《IEEE INTERNATIONAL WORKSHOP ON RAPID SYSTEM PROTOTYPING》 *
SUN Shuwei et al: "An Efficient CABAC Entropy Coder Architecture", 《Computer Engineering and Science》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109889835A (en) * 2013-04-08 2019-06-14 索尼公司 It is coded and decoded according to the significant coefficient of the parameter of significant coefficient
US11463698B2 (en) 2013-04-08 2022-10-04 Sony Corporation Selection of the maximum dynamic range of transformed data and the data precision of transform matrices according to the bit depth of input data
WO2017041271A1 (en) * 2015-09-10 2017-03-16 Mediatek Singapore Pte. Ltd. Efficient context modeling for coding a block of data
CN108243342A (en) * 2016-12-23 2018-07-03 晨星半导体股份有限公司 Binary coding arithmetic unit and method
CN109922341A (en) * 2017-12-13 2019-06-21 博雅视云(北京)科技有限公司 The advanced entropy coder implementation method of AVS2 and device
CN109889834A (en) * 2019-01-11 2019-06-14 珠海亿智电子科技有限公司 A kind of CABAC arithmetic decoding method and device
CN109889834B (en) * 2019-01-11 2021-07-13 珠海亿智电子科技有限公司 CABAC arithmetic decoding method and device
CN113382265A (en) * 2021-05-19 2021-09-10 北京大学深圳研究生院 Hardware implementation method, apparatus, medium, and program product for video data entropy coding


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140820

Termination date: 20190719

CF01 Termination of patent right due to non-payment of annual fee