CN104869404B - Method and apparatus for encoding video and method and apparatus for decoding video - Google Patents

Method and apparatus for encoding video and method and apparatus for decoding video

Info

Publication number
CN104869404B
CN104869404B CN201510203206.9A
Authority
CN
China
Prior art keywords
unit
coding
information
data
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510203206.9A
Other languages
Chinese (zh)
Other versions
CN104869404A (en)
Inventor
李泰美
韩宇镇
金壹求
李善一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Publication of CN104869404A
Application granted
Publication of CN104869404B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/103 Selection of coding mode or of prediction mode
    • H04N 19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/136 Incoming video signal characteristics or properties
    • H04N 19/137 Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N 19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N 19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N 19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N 19/174 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N 19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/189 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N 19/196 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding, being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N 19/46 Embedding additional information in the video signal during the compression process
    • H04N 19/463 Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 Motion estimation or motion compensation
    • H04N 19/513 Processing of motion vectors
    • H04N 19/517 Processing of motion vectors by encoding
    • H04N 19/52 Processing of motion vectors by encoding by predictive encoding
    • H04N 19/56 Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
    • H04N 19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N 19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N 19/10 to H04N 19/85, e.g. fractals
    • H04N 19/96 Tree coding, e.g. quad-tree coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

Provided are a method and apparatus for encoding video and a method and apparatus for decoding video. The encoding method includes: determining a coding mode indicating a data unit for encoding of a picture and an encoding method, including prediction encoding, performed on a current data unit; determining an occurrence of merging with at least one neighboring data unit based on at least one of the coding mode and a prediction mode; and determining prediction mode information, merging related information, and prediction related information, and determining encoding information of the data unit including the prediction mode information, the merging related information, and the prediction related information.

Description

Method and apparatus for encoding video and method and apparatus for decoding video
This application is a divisional application of application No. 201180043660.2, entitled "Method and apparatus for encoding video by using block merging and method and apparatus for decoding video by using block merging", filed with the China Intellectual Property Office on July 7, 2011.
Technical field
Apparatuses and methods consistent with exemplary embodiments relate to encoding and decoding video by using block merging for prediction encoding.
Background Art
In order to encode blocks of a current image, video compression technologies typically use a motion estimation/compensation method that uses prediction information of a most similar block from among neighboring blocks, and a compression method that reduces the size of the video data by removing redundant data through encoding of a differential signal between a previous image and the current image by using a discrete cosine transform (DCT).
As hardware for reproducing and storing high-resolution or high-quality video content is being developed and supplied, the need for a video codec that effectively encodes or decodes high-resolution or high-quality video content has increased. In a conventional video codec, video is encoded according to a limited encoding method based on a macroblock having a predetermined size. Furthermore, a conventional video codec encodes and decodes video data by performing transformation and inverse transformation on macroblocks by using blocks that all have the same size.
Summary of the Invention
Technical problem
Provided are a method and apparatus for encoding video by using block merging, and a method and apparatus for decoding video by using block merging.
Solution
According to an aspect of an exemplary embodiment, there is provided a method of encoding video by using data unit merging, the method including: determining a coding mode indicating a data unit for encoding of a picture and an encoding method, including prediction encoding, performed for each data unit; determining an occurrence of merging with at least one neighboring data unit based on at least one of a prediction mode and the coding mode, according to data units; and determining prediction mode information, merging related information, and prediction related information based on the merging with the at least one neighboring data unit, according to data units, and determining encoding information of the data unit including the prediction mode information, the merging related information, and the prediction related information.
Brief description of the drawings
Fig. 1 is a block diagram of an apparatus for encoding video by using data unit merging, according to an exemplary embodiment;
Fig. 2 is a block diagram of an apparatus for decoding video by using data unit merging, according to an exemplary embodiment;
Fig. 3 is a diagram illustrating neighboring macroblocks that may be merged with a current macroblock, according to the related art;
Figs. 4 and 5 are diagrams for explaining methods of selecting a data unit to be merged with a current data unit from among neighboring data units of the current data unit, according to the related art and an exemplary embodiment, respectively;
Figs. 6 and 7 are block diagrams for explaining orders of encoding and decoding prediction mode information, merging related information, and prediction related information, according to exemplary embodiments;
Figs. 8 and 9 are diagrams for explaining methods of selecting a data unit to be merged with a current data unit from among extended neighboring data units of the current data unit, according to the related art and an exemplary embodiment, respectively;
Figs. 10, 11, and 12 are block diagrams for explaining orders of encoding and decoding prediction mode information, merging related information, and prediction related information, according to various exemplary embodiments;
Fig. 13 is a diagram illustrating neighboring data units that are not merged with a current partition, according to an exemplary embodiment;
Fig. 14 is a diagram illustrating candidate data units that vary according to the shape and position of a current partition, according to an exemplary embodiment;
Fig. 15 is a diagram illustrating neighboring data units that may not be merged with a current partition having a geometric shape, according to an exemplary embodiment;
Fig. 16 is a diagram illustrating an example of a neighboring data unit determined to be merged with a current data unit, according to an exemplary embodiment;
Fig. 17 is a flowchart illustrating a method of encoding video by using data unit merging, according to an exemplary embodiment;
Fig. 18 is a flowchart illustrating a method of decoding video by using data unit merging, according to an exemplary embodiment;
Fig. 19 is a block diagram of an apparatus for encoding video by using data unit merging, based on coding units having a tree structure, according to an exemplary embodiment;
Fig. 20 is a block diagram of an apparatus for decoding video by using data unit merging, based on coding units having a tree structure, according to an exemplary embodiment;
Fig. 21 is a diagram for explaining a concept of coding units, according to an exemplary embodiment;
Fig. 22 is a block diagram of an image encoder based on coding units, according to an exemplary embodiment;
Fig. 23 is a block diagram of an image decoder based on coding units, according to an exemplary embodiment;
Fig. 24 is a diagram illustrating coding units according to depths, and partitions, according to an exemplary embodiment;
Fig. 25 is a diagram for explaining a relationship between a coding unit and transformation units, according to an exemplary embodiment;
Fig. 26 is a diagram for explaining encoding information of coding units corresponding to a coded depth, according to an exemplary embodiment;
Fig. 27 is a diagram illustrating coding units according to depths, according to an exemplary embodiment;
Figs. 28 through 30 are diagrams for explaining a relationship between coding units, prediction units, and transformation units, according to an exemplary embodiment;
Fig. 31 is a diagram for explaining a relationship between a coding unit, a prediction unit, and a transformation unit, according to the coding mode information of Table 2;
Fig. 32 is a flowchart illustrating a method of encoding video by using data unit merging, based on coding units having a tree structure, according to an exemplary embodiment;
Fig. 33 is a flowchart illustrating a method of decoding video by using data unit merging, based on coding units having a tree structure, according to an exemplary embodiment.
Best Mode for Carrying Out the Invention
According to an aspect of an exemplary embodiment, there is provided a method of encoding video by using data unit merging, the method including: determining a coding mode indicating a data unit for encoding of a picture and an encoding method, including prediction encoding, performed for each data unit; determining an occurrence of merging with at least one neighboring data unit based on at least one of a prediction mode and the coding mode, according to data units; and determining prediction mode information, merging related information, and prediction related information based on the occurrence of the merging with the at least one neighboring data unit, according to data units, and determining encoding information of the data unit, wherein the encoding information includes the prediction mode information, the merging related information, and the prediction related information.
The determining of the encoding information may include: determining skip mode information indicating whether the prediction mode of the data unit is a skip mode, and determining whether merging information is to be encoded based on the skip mode information, wherein the merging information indicates whether the data unit and the at least one neighboring data unit are merged with each other.
According to an aspect of another exemplary embodiment, there is provided a method of decoding video by using data unit merging, the method including: parsing a received bitstream to extract encoded video data and encoding information, and extracting prediction mode information, merging related information, and prediction related information from the encoding information; analyzing an occurrence of merging with at least one neighboring data unit based on at least one of a prediction mode and a coding mode, according to data units, based on the prediction mode information and the merging related information; and performing inter prediction and motion compensation on a data unit merged with the at least one neighboring data unit by using prediction related information of the at least one neighboring data unit, to decode the encoded video data according to data units determined based on the encoding information.
The extracting and reading may include: extracting and reading skip mode information indicating whether the prediction mode of the data unit is a skip mode, and determining whether merging information is to be extracted based on the skip mode information, wherein the merging information indicates whether the data unit and the at least one neighboring data unit are merged with each other.
According to an aspect of another exemplary embodiment, there is provided an apparatus for encoding video by using data unit merging, the apparatus including: a coding mode determiner that determines a coding mode indicating a data unit for encoding of a picture and an encoding method, including prediction encoding, for each data unit; a data unit merging determiner that determines an occurrence of merging with at least one neighboring data unit based on at least one of a prediction mode and the coding mode, according to data units; and an encoding information determiner that determines prediction mode information, merging related information, and prediction related information based on the occurrence of the merging with the neighboring data unit, according to data units, and determines encoding information of the data unit, wherein the encoding information includes the prediction mode information, the merging related information, and the prediction related information.
According to an aspect of another exemplary embodiment, there is provided an apparatus for decoding video by using data unit merging, the apparatus including: a parser and extractor that parses a received bitstream to extract encoded video data and encoding information, and extracts prediction mode information, merging related information, and prediction related information from the encoding information; and a data unit merger and decoder that analyzes an occurrence of merging with at least one neighboring data unit based on at least one of a prediction mode and a coding mode, according to data units, based on the prediction mode information and the merging related information, and performs inter prediction and motion compensation on a data unit merged with the neighboring data unit by using the prediction related information of the at least one neighboring data unit, to decode the encoded video data according to data units determined based on the encoding information.
According to an aspect of another exemplary embodiment, there is provided a computer readable recording medium having embodied thereon a program for executing the method of encoding video.
According to an aspect of another exemplary embodiment, there is provided a computer readable recording medium having embodied thereon a program for executing the method of decoding video.
Detailed Description of the Embodiments
Hereinafter, an "image" may refer not only to a still image but also to a moving image, such as a video. Furthermore, a "data unit" refers to a group of data in a predetermined range from among the data constituting a video. Also, hereinafter, when an expression such as "at least one of" follows a list of elements, the expression modifies the entire list of elements rather than the individual elements in the list.
Encoding and decoding of video by using data unit merging, according to one or more exemplary embodiments, will be explained below with reference to Figs. 1 through 18. Encoding and decoding of video by using data unit merging based on coding units having a tree structure, according to one or more exemplary embodiments, will be explained below with reference to Figs. 19 through 33.
An apparatus for encoding video, an apparatus for decoding video, a method of encoding video, and a method of decoding video, by using data unit merging, according to one or more exemplary embodiments, will now be explained with reference to Figs. 1 through 18.
Fig. 1 is a block diagram of an apparatus 10 for encoding video by using data unit merging, according to an exemplary embodiment.
The apparatus 10 includes a coding mode determiner 11, a data unit merging determiner 13, and an encoding information determiner 15. For convenience of explanation, the apparatus 10 for encoding video by using data unit merging is referred to as an apparatus 10 for encoding video.
The apparatus 10 receives video data, encodes the video data by performing inter prediction between pictures, intra prediction within a picture, transformation, quantization, and entropy encoding on the pictures of the video, and outputs encoding information including information about the encoded video data and the coding mode.
The coding mode determiner 11 may determine a data unit for encoding of a picture, and may determine an encoding method to be performed for each data unit. In a video compression encoding method, in order to reduce the size of data by removing redundant parts of the video data, a prediction encoding method using neighboring data is performed. The coding mode determiner 11 may determine a regular square block, or a partition in a regular square block, as a data unit for prediction encoding.
The coding mode determiner 11 may determine, for each data unit, a prediction mode indicating a prediction encoding method, such as an inter mode, an intra mode, a skip mode, or a direct mode. Also, the coding mode determiner 11 may determine additional items useful for prediction encoding, such as a prediction direction or a reference index, according to the prediction mode of the data unit.
The coding mode determiner 11 may determine various coding modes including the prediction mode for prediction encoding and the related additional items, and may encode the video data accordingly.
The data unit merging determiner 13 may determine not only whether a data unit whose prediction mode is an inter mode, from among the data units determined by the coding mode determiner 11, is merged with at least one neighboring data unit, but also whether a data unit whose prediction mode is a skip mode or a direct mode, from among the data units determined by the coding mode determiner 11, is merged with at least one neighboring data unit.
If a current data unit is merged with a neighboring data unit, the current data unit may share motion vector information of the neighboring data unit. Although motion vector differential information of the current data unit is encoded independently, since auxiliary prediction information of the current data unit may be derived by following or referring to auxiliary prediction information of the neighboring data unit merged with the current data unit, the auxiliary prediction information of the current data unit is not separately encoded.
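For illustration only, the following minimal sketch (in Python, with assumed type and field names that are not part of this disclosure) shows how a merged data unit may derive its motion information from its merge partner while its own motion vector differential is still applied:

```python
from dataclasses import dataclass

@dataclass
class MotionInfo:
    ref_index: int      # reference picture index (auxiliary prediction info)
    ref_direction: int  # reference direction (auxiliary prediction info)
    mv: tuple           # motion vector (mv_x, mv_y)

def reconstruct_motion(merged, neighbor, own_aux, mvd):
    """Build the motion information of the current data unit.

    If merged, the auxiliary prediction information (reference index and
    direction) follows the merged neighbor and is not separately coded;
    the motion vector differential (mvd) is coded for the unit either way.
    """
    aux = neighbor if merged else own_aux
    mv = (aux.mv[0] + mvd[0], aux.mv[1] + mvd[1])
    return MotionInfo(aux.ref_index, aux.ref_direction, mv)
```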
The data unit merging determiner 13 may determine, in regions neighboring the current data unit, at least one candidate data unit group including data units that may be merged with the current data unit. The data unit merging determiner 13 may search the at least one candidate data unit group for a data unit to be merged with the current data unit. In this case, one candidate data unit group including data units that may be merged with the current data unit may be determined in each region.
According to a predetermined rule preset between an encoding system and a decoding system, a method of determining a candidate data unit group in at least one region neighboring the current data unit and a method of determining one data unit in the candidate data unit group may be set.
Also, the apparatus 10 may encode and output at least one of information about the method of determining the candidate data unit group in the at least one region neighboring the current data unit and information about the method of determining one data unit in the candidate data unit group.
For example, the data unit merging determiner 13 may search the candidate data unit group for a data unit having the same reference index as the current data unit, and may select that data unit as a candidate data unit to be merged with the current data unit.
Alternatively, the data unit merging determiner 13 may search the candidate data unit group for a data unit whose prediction mode is an inter mode, and may select that data unit as a candidate data unit to be merged with the current data unit. One data unit may be finally determined, from among the candidate data units selected in this way, as the candidate data unit to be merged with the current data unit.
The data unit merging determiner 13 may determine a candidate data unit to be merged with the current data unit by using a conventional method of motion vector prediction according to an inter mode. In detail, according to the conventional method of motion vector prediction according to an inter mode, a plurality of candidate vectors to be used for motion vector prediction of the current data unit are determined from among the neighboring data units contacting all boundaries of the current data unit. That is, one from among the neighboring data units contacting a left boundary of the current data unit, one from among the neighboring data units contacting an upper boundary of the current data unit, and one from among the neighboring data units contacting the corners of the current data unit are selected, and one of the motion vectors of the three data units is determined as a candidate vector.
According to the conventional method of motion vector prediction according to an inter mode, the data unit merging determiner 13 may search for and determine a data unit to be merged with the current data unit in a left candidate data unit group including all of the plurality of neighboring data units contacting the left boundary of the current data unit, and in an upper candidate data unit group including all of the plurality of neighboring data units contacting the upper boundary of the current data unit.
Also, in addition to the left candidate data unit group and the upper candidate data unit group of the current data unit, the data unit merging determiner 13 may search for and determine one data unit to be merged with the current data unit in a corner candidate data unit group including an upper-left neighboring data unit, an upper-right neighboring data unit, and a lower-left neighboring data unit contacting the corners of the current data unit.
In this case, a method of determining one candidate data unit in the left candidate data unit group, a method of determining one candidate data unit in the upper candidate data unit group, and a method of determining one candidate data unit in the corner candidate data unit group may be preset. Since each method of determining a candidate data unit from among the corresponding candidate data unit group may be preset, the method may be implicitly signaled.
Also, a method of finally determining the neighboring data unit to be merged with the current data unit from among the candidate data unit determined in the left candidate data unit group, the candidate data unit determined in the upper candidate data unit group, and the data unit determined in the corner candidate data unit group (that is, three candidate data units) may be preset. That is, since each method of determining the neighboring data unit to be merged from among the candidate data units may be preset, the method may be implicitly signaled.
For example, the data unit merging determiner 13 may search for a data unit whose prediction mode is an inter mode from among the candidate data units, and may select that data unit as the candidate data unit to be merged with the current data unit. Alternatively, the data unit merging determiner 13 may search for a data unit having the same reference index as the current data unit from among the candidate data units, and may select that data unit as the candidate data unit to be merged with the current data unit.
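The candidate selection described above may be pictured with the following minimal sketch; the preference order inside a group (same reference index first, then any inter-mode unit) is an assumed preset rule used for illustration, not a normative procedure:

```python
from dataclasses import dataclass

@dataclass
class NeighborUnit:
    prediction_mode: str  # "inter", "intra", "skip", or "direct"
    ref_index: int

def pick_candidate(group, current_ref_index):
    """Apply an assumed preset rule inside one candidate data unit group:
    prefer an inter-mode unit sharing the current reference index, then
    fall back to any inter-mode unit."""
    for unit in group:
        if unit.prediction_mode == "inter" and unit.ref_index == current_ref_index:
            return unit
    for unit in group:
        if unit.prediction_mode == "inter":
            return unit
    return None

def merge_candidates(left_group, upper_group, corner_group, current_ref_index):
    """One candidate per region: left boundary, upper boundary, corners."""
    picks = (pick_candidate(left_group, current_ref_index),
             pick_candidate(upper_group, current_ref_index),
             pick_candidate(corner_group, current_ref_index))
    return [p for p in picks if p is not None]
```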
Although partitions split from one data unit for the purpose of more accurate inter prediction are adjacent to one another, the partitions may not be merged with each other.
Since the accessible data units from among the data units neighboring a current partition may vary according to the shape and position of the current partition, a merging candidate group of neighboring data units that may be merged may be changed. Accordingly, the data unit merging determiner 13 may search for a neighboring data unit that may be merged based on the shape and position of the current partition.
The encoding information determiner 15 may determine prediction mode information, merging related information, and prediction related information according to data units. The encoding information determiner 15 may update the prediction related information in the encoding information determined by the coding mode determiner 11, according to the data unit merging of the data unit merging determiner 13. The encoding information determiner 15 may encode the encoding information to include the merging related information, according to the data unit merging of the data unit merging determiner 13. The encoding information determiner 15 may output the video data encoded by the coding mode determiner 11 and the encoding information.
The prediction mode information in the prediction related information is information indicating whether the prediction mode of the current data unit is an inter mode, an intra mode, a skip mode, or a direct mode. For example, the prediction mode information may include skip mode information indicating whether the prediction mode of the current data unit is a skip mode, and direct mode information indicating whether the prediction mode of the current data unit is a direct mode.
The merging related information includes information used to perform data unit merging or to determine whether data unit merging is performed. For example, the merging related information may include merging information indicating whether the current data unit is to be merged with a neighboring data unit, and merging index information indicating the data unit to be merged. The encoding information determiner 15 may encode the merging information through context modeling regarding the combination of "the prediction mode and partition type of the neighboring data unit" and "whether the current data unit and the neighboring data unit are merged".
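As one possible illustration of such context modeling, the sketch below maps the combination of the neighbor's prediction mode and partition type, together with whether the neighbor is itself merged, to a context model index; the enumerations shown are assumptions for illustration, not values defined by this disclosure:

```python
def merging_flag_context(neighbor_mode, neighbor_partition, neighbor_merged):
    """Derive a context model index for entropy coding the merging flag
    from the neighbor's (prediction mode, partition type) combination and
    from whether that neighbor is merged."""
    modes = ("skip", "direct", "inter", "intra")    # assumed enumeration
    partitions = ("2Nx2N", "2NxN", "Nx2N", "NxN")   # assumed enumeration
    combo = (modes.index(neighbor_mode) * len(partitions)
             + partitions.index(neighbor_partition))
    return 2 * combo + int(neighbor_merged)
```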
The prediction related information may further include auxiliary prediction information and motion information used for prediction encoding of the data unit. For example, as described above, the prediction related information may include auxiliary prediction information referring to additional information related to prediction encoding, including a reference index indicating the data unit to be referred to, and motion vector or motion vector differential information.
The encoding information determiner 15 may determine whether the merging related information is set according to the prediction mode information, based on the close relationship between the prediction mode of a data unit and the possibility that data units are merged.
In a first exemplary embodiment in which data unit merging may be performed on data units other than those in a skip mode, the encoding information determiner 15 may encode skip mode information indicating whether the prediction mode of the current data unit is a skip mode, and may determine, based on the skip mode information, whether merging information indicating whether the current data unit and a neighboring data unit are merged with each other is to be encoded.
In detail, in the first exemplary embodiment, if the prediction mode of the current data unit is a skip mode, the encoding information determiner 15 may set the skip mode information to indicate that the prediction mode of the current data unit is a skip mode, and may not encode merging information of the current data unit.
If the prediction mode of the current data unit is not a skip mode, the encoding information determiner 15 may set the skip mode information to indicate that the prediction mode of the current data unit is not a skip mode, and may encode merging information of the current data unit.
The encoding information determiner 15 may encode motion vector differential information of the data unit based on the merging information, and may determine whether auxiliary prediction information of the data unit is to be encoded.
That is, if the current data unit is merged with a neighboring data unit, the encoding information determiner 15 may set the merging information of the current data unit to indicate that the current data unit is merged with the neighboring data unit, and may not encode auxiliary prediction information of the current data unit. On the other hand, if the current data unit is not merged with a neighboring data unit, the encoding information determiner 15 may set the merging information of the current data unit to indicate that the current data unit is not merged with a neighboring data unit, and may encode the auxiliary prediction information of the current data unit.
Regardless of whether the current data unit is merged with a neighboring data unit, the encoding information determiner 15 may encode the motion vector differential information of the current data unit.
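A compact sketch of this encoding order follows; the BitstreamWriter is a toy stand-in for an actual entropy coder and merely records syntax elements in order:

```python
class BitstreamWriter:
    """Toy stand-in for an entropy coder; records syntax elements in order."""
    def __init__(self):
        self.elements = []
    def write(self, name, value):
        self.elements.append((name, value))

def encode_unit_first_embodiment(w, unit):
    """First exemplary embodiment: skip flag first; the merging flag only
    when not skipped; the motion vector differential is coded whether or
    not the unit is merged; auxiliary prediction information only when
    the unit is not merged."""
    w.write("skip_mode", unit["is_skip"])
    if unit["is_skip"]:
        return  # skip mode: merging info and unique prediction info not coded
    w.write("merging", unit["is_merged"])
    w.write("mvd", unit["mvd"])  # coded in both the merged and unmerged cases
    if not unit["is_merged"]:
        w.write("aux", (unit["ref_index"], unit["ref_direction"]))
```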
Also, in a second exemplary embodiment in which it is determined whether data unit merging is performed on data units other than those in a skip mode and a direct mode, the encoding information determiner 15 may encode merging related information indicating whether data unit merging is performed, for data units whose prediction mode is not a direct mode.
In detail, in the second exemplary embodiment, the encoding information determiner 15 may set the skip mode information to indicate that the prediction mode of the data unit is not a skip mode, and may encode direct mode information. Also, the encoding information determiner 15 may determine, based on the direct mode information, whether merging information is to be encoded. That is, if the prediction mode of the current data unit is a direct mode, the encoding information determiner 15 may set the direct mode information to indicate that the prediction mode of the current data unit is a direct mode, and may not encode merging information of the current data unit. If the prediction mode of the current data unit is not a direct mode, the encoding information determiner 15 may set the direct mode information to indicate that the prediction mode of the current data unit is not a direct mode, and may encode merging information of the current data unit.
If the merging information is encoded, whether auxiliary prediction information of the current data unit is encoded is determined based on the merging information, and the motion vector differential information of the current data unit is encoded as described above in the first exemplary embodiment.
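Under the same toy writer, the second exemplary embodiment may be sketched as follows; the direct-mode flag gates the merging information:

```python
def encode_unit_second_embodiment(w, unit):
    """Second exemplary embodiment: a direct-mode flag follows the skip
    flag, and the merging flag is coded only for units that are neither
    skip mode nor direct mode."""
    w.write("skip_mode", unit["is_skip"])
    if unit["is_skip"]:
        return
    w.write("direct_mode", unit["is_direct"])
    if unit["is_direct"]:
        return  # direct mode: merging information is not coded;
                # remaining direct-mode syntax is out of scope here
    w.write("merging", unit["is_merged"])
    w.write("mvd", unit["mvd"])
    if not unit["is_merged"]:
        w.write("aux", (unit["ref_index"], unit["ref_direction"]))
```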
A data unit obtained by splitting a picture may include a "coding unit" that is a data unit for encoding of the picture, a "prediction unit" for prediction encoding, and a "partition" for inter prediction. The data unit merging determiner 13 may determine, for each coding unit, whether merging with a neighboring data unit is performed, and the encoding information determiner 15 may determine skip mode information and merging information for each coding unit. Also, the data unit merging determiner 13 may determine, for each prediction unit, whether merging with a neighboring data unit is performed, and the encoding information determiner 15 may determine skip mode information and merging information for each prediction unit.
If both the skip mode information and the merging information are used, since unique prediction information of the current data unit is not encoded in either case of a skip mode or data merging, the apparatus 10 may distinguish a prediction method according to the skip mode from a prediction method according to data merging. For example, a reference index and a reference direction of a data unit having a skip mode may be determined according to a preset rule, and a data unit merged with a neighboring data unit may follow the reference index and the reference direction of the motion information of the neighboring data unit. Since the rule for determining the reference index and the reference direction of a data unit having a skip mode may be preset, the rule may be implicitly signaled.
The encoding information determiner 15 may encode the skip mode information for each prediction unit and may encode the merging related information for each partition. Also, the encoding information determiner 15 may encode both the merging related information and the skip mode information for each data unit. Alternatively, the encoding information determiner 15 may set the merging related information to be encoded only for data units having a preset predetermined prediction mode.
The apparatus 10 may determine data unit merging between coding units, or may determine data unit merging between prediction units. Also, the apparatus 10 may separately encode skip mode information and direct mode information. Accordingly, if, based on the skip mode information of a data unit, the prediction mode of the data unit is not a skip mode, the encoding information determiner 15 may encode at least one of skip/direct mode encoding information indicating whether direct mode information of the data unit is encoded, coding unit merging determination information indicating whether an occurrence of merging between coding units is determined, and prediction unit merging determination information indicating whether an occurrence of merging between prediction units is determined.
Fig. 2 is a block diagram of an apparatus 20 for decoding video by using data unit merging, according to an exemplary embodiment.
The apparatus 20 includes a parser/extractor 21 and a data unit merger/decoder 23. For convenience of explanation, the apparatus 20 for decoding video by using data unit merging is referred to as an apparatus 20 for decoding video.
The apparatus 20 receives a bitstream of encoded video data, extracts encoding information including information about an encoding method and the encoded video data, and performs decoding through entropy decoding, inverse quantization, inverse transformation, and inter prediction/compensation between pictures, to restore the video data.
The parser/extractor 21 parses the received bitstream to extract the encoded video data and the encoding information, and extracts prediction mode information, merging related information, and prediction related information from the encoding information. The parser/extractor 21 may extract skip mode information, direct mode information, and the like as the prediction mode information. The parser/extractor 21 may extract auxiliary prediction information, including a reference direction and a reference index, and motion vector differential information as the prediction related information.
The parser/extractor 21 may extract merging information, merging index information, and the like as the merging related information. The parser/extractor 21 may read the merging information and may analyze the prediction mode and partition type of the neighboring data unit merged with the current data unit, wherein the merging information was encoded through context modeling regarding the combination of "the prediction mode and partition type of the neighboring data unit" and "whether the current data unit and the neighboring data unit are merged with each other".
First, in the first exemplary embodiment in which it is determined whether data unit merging is performed on data units other than those in a skip mode, the parser/extractor 21 may extract and read skip mode information of a data unit from the received bitstream, and may determine, based on the skip mode information, whether merging information of the data unit is to be extracted. That is, if it is read based on the skip mode information that the prediction mode of the current data unit is not a skip mode, the parser/extractor 21 may extract the merging information of the current data unit from the received bitstream.
The parser/extractor 21 may extract motion vector differential information of the data unit based on the merging information, and may determine whether inter auxiliary prediction information of the data unit is to be extracted. That is, if it is read based on the merging information that the current data unit is not merged with a neighboring data unit, the parser/extractor 21 may extract the motion vector differential information from the received bitstream and may extract the auxiliary prediction information of the current data unit. On the other hand, if it is read based on the merging information that the current data unit is merged with a neighboring data unit, the parser/extractor 21 may extract the motion vector differential information from the received bitstream, and may not extract the auxiliary prediction information of the current data unit.
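Mirroring the encoding order sketched earlier, a hypothetical parser for the first exemplary embodiment could proceed as follows, assuming a reader object whose read(name) call returns the next syntax element:

```python
def parse_unit_first_embodiment(r):
    """Read the skip flag; if not skipped, read the merging flag, then
    always the MVD, and auxiliary prediction information only when the
    unit is not merged."""
    unit = {"is_skip": r.read("skip_mode")}
    if unit["is_skip"]:
        return unit  # reference index/direction follow the preset skip rule
    unit["is_merged"] = r.read("merging")
    unit["mvd"] = r.read("mvd")
    if not unit["is_merged"]:
        unit["aux"] = r.read("aux")  # reference index and direction
    return unit
```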
Next, in the second exemplary embodiment in which it is determined whether data unit merging is performed on data units other than those in a skip mode and a direct mode, if the prediction mode of a data unit is not a skip mode, the parser/extractor 21 may extract direct mode information of the data unit, and may determine, based on the direct mode information, whether merging information is to be extracted.
That is, if it is read according to the direct mode information that the prediction mode of the current data unit is a direct mode, the parser/extractor 21 may not extract merging information from the received bitstream. On the other hand, if it is read according to the direct mode information that the prediction mode of the current data unit is not a direct mode, the parser/extractor 21 may extract merging information from the received bitstream.
The parser/extractor 21 may extract motion vector differential information of the data unit based on the merging information, and may determine whether auxiliary prediction information is to be extracted, as described above in the first embodiment.
The data unit merger/decoder 23 analyzes, based on the prediction mode information and the merging related information, whether merging with at least one neighboring data unit is performed based on at least one of the prediction mode and the coding mode, according to data units. The data unit merger/decoder 23 may determine data units based on the encoding information and decode the encoded video data according to the determined data units, to restore a picture.
For example, the data unit merger/decoder 23 may perform inter prediction and motion compensation on a data unit merged with a neighboring data unit by using the prediction related information of the neighboring data unit, to decode the video data based on the encoding information.
The parser/extractor 21 may extract and read the skip mode information and the merging information for each coding unit, and the data unit merger/decoder 23 may determine, based on the merging information for each coding unit, whether merging with a neighboring data unit is performed.
Also, the parser/extractor 21 may extract and read the skip mode information and the merging information for each prediction unit, and the data unit merger/decoder 23 may determine, based on the merging information for each prediction unit, whether merging with a neighboring data unit occurs.
The data unit merger/decoder 23 may read, based on the merging related information extracted by the parser/extractor 21, whether the current data unit is merged with a neighboring data unit, and may search the neighboring data units for the data unit to be merged.
First, the data unit merger/decoder 23 may analyze, based on the merging information in the merging related information, whether the current data unit is merged with a neighboring data unit. If it is read that the current data unit is merged with a neighboring data unit, the data unit merger/decoder 23 may determine, based on the merging index information in the merging related information, at least one candidate data unit group including data units that may be merged with the current data unit in regions neighboring the current data unit. The data unit merger/decoder 23 may determine one data unit to be merged with the current data unit in the at least one candidate data unit group. A candidate data unit group for merging of the current data unit may be determined for each of the at least one region neighboring the current data unit.
Since each method of determining the neighboring data unit to be merged from among the candidate data units may be preset, the method may be implicitly signaled. The data unit merger/decoder 23 may determine the data unit to be merged with the current data unit based on at least one of a method of determining the candidate data unit group and a method of determining one data unit in the candidate data unit group, which are preset according to a predetermined rule between encoding/decoding systems.
The parser/extractor 21 may extract at least one of information about the method of determining the candidate data unit group from among the at least one region neighboring the current data unit, and information about the method of determining one data unit in the candidate data unit group. The data unit merger/decoder 23 may determine the data unit to be merged with the current data unit based on at least one of the extracted information about the method of determining the candidate data unit group and the extracted information about the method of determining one data unit in the candidate data unit group.
For example, if the data unit merger/decoder 23 sets a first candidate data unit, a second candidate data unit, or a third candidate data unit according to a preset method, the data unit merger/decoder 23 may search the merging candidate group of the upper-layer neighboring data units for a neighboring data unit having the same reference index as the current data unit, and may determine that neighboring data unit as the data unit to be merged.
Alternatively, if the data unit merger/decoder 23 determines a first candidate data unit, a second candidate data unit, or a third candidate data unit according to a preset method, the data unit merger/decoder 23 may search the merging candidate group of the upper-layer neighboring data units for a neighboring data unit whose prediction mode is an inter mode, and may determine that neighboring data unit as the data unit to be merged with the current data unit.
Since each method of determining one candidate data unit from among the corresponding candidate data unit group may be preset, the method may be implicitly signaled.
The data unit merger/decoder 23 may determine the candidate data unit to be merged with the current data unit by using the conventional method of motion vector prediction according to an inter mode. In detail, based on the merging index information in the merging related information, the data unit merger/decoder 23 may determine the data unit to be merged with the current data unit in a left candidate data unit group including all of the plurality of left neighboring data units contacting the left boundary of the current data unit, and in an upper candidate data unit group including all of the plurality of neighboring data units contacting the upper boundary.
Also, in addition to the left candidate data unit group and the upper candidate data unit group of the current data unit, the data unit merger/decoder 23 may determine, based on the merging index information, the data unit to be merged with the current data unit in a corner candidate data unit group including the upper-left neighboring data unit, the upper-right neighboring data unit, and the lower-left neighboring data unit contacting the corners of the current data unit.
In detail, the data unit merger/decoder 23 may read the merging index information, and may determine a first candidate data unit that is one element of the left candidate data unit group, a second candidate data unit that is one element of the upper candidate data unit group, or a third candidate data unit that is one element of the corner candidate data unit group, as the neighboring data unit to be merged with the current data unit.
Also, the data unit merger/decoder 23 may, in a case where the first candidate data unit is determined, search for and determine one left neighboring data unit from among the left neighboring data units as the data unit to be merged with the current data unit; may, in a case where the second candidate data unit is determined, search for and determine one upper neighboring data unit from among the upper neighboring data units as the data unit to be merged with the current data unit; and may, in a case where the third candidate data unit is determined, search for and determine one neighboring data unit from among the neighboring data units contacting the corners as the data unit to be merged with the current data unit.
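One possible two-stage reading of the merging index is sketched below: the index selects the left, upper, or corner candidate data unit group, and the preset in-group rule (pick_candidate from the earlier sketch) selects one unit within it; the index-to-group mapping shown is an assumption for illustration:

```python
def resolve_merge_partner(merge_index, left_group, upper_group, corner_group,
                          current_ref_index):
    """Map a merging index to the neighboring data unit to merge with.
    Index 0/1/2 is assumed to select the left/upper/corner candidate
    group; pick_candidate applies the preset in-group search rule."""
    group = (left_group, upper_group, corner_group)[merge_index]
    return pick_candidate(group, current_ref_index)
```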
In this case, left adjacent data unit, on adjacent data unit and the proximity data list that contacts with turning Search for and determine that the method for merge with current data unit data cell can be predetermined among unit.For example, according to pre- Equipment, method, data cell combiner/decoder 23 can search for the neighbour that predictive mode is inter-frame mode among candidate data unit Nearly data cell, and the adjacent data unit can be defined as a data cell will merging with current data unit.
Selectively, according to presetting method, data cell combiner/decoder 23 can be searched among candidate data unit There is the adjacent data unit that same reference indexes with current data unit, and the adjacent data unit can be defined as quilt The data cell for merging.
It is due to determining that the every kind of method by the adjacent data unit merged with candidate data unit can be predetermined therefore described Method impliedly can be sent with signal.
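For illustration only, such a preset search rule might be applied inside one candidate group as in the following sketch; the function and attribute names are assumptions, not syntax defined by this disclosure.

```python
# A minimal sketch, assuming each neighboring data unit exposes its
# prediction mode and reference index; the two rules shown are the preset
# rules described above (same reference index, or first inter-mode unit).
def resolve_merge_candidate(candidate_group, current, rule="same_ref_index"):
    for unit in candidate_group:
        if rule == "same_ref_index" and unit.ref_index == current.ref_index:
            return unit
        if rule == "inter_mode" and unit.prediction_mode == "INTER":
            return unit
    return None  # no mergeable neighboring data unit in this group
```

Since the rule itself is preset at both the encoder and the decoder, only the group selection (the merging index) needs to be signaled explicitly.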
The data unit merger/decoder 23 may not perform merging between partitions of one data unit.

The data unit merger/decoder 23 may determine the data unit to be merged with the current data unit from a merging candidate group of neighboring data units that changes according to the shape and position of a current partition.

The parser/extractor 21 may extract skip mode information for each prediction unit, and may extract merging related information for each partition. Alternatively, the parser/extractor 21 may extract merging related information and skip mode information for each data unit. Also, the parser/extractor 21 may extract merging related information only for a data unit having a predetermined prediction mode.

The parser/extractor 21 may sequentially extract skip mode information, prediction unit information, partition information of a prediction unit, and merging information. The partition information may include information about whether the prediction unit is divided into partitions and information about a partition type.
The apparatus 20 may decode video data by performing data unit merging between coding units or between prediction units. Also, the apparatus 20 may selectively decode video data according to encoded skip mode information and direct mode information.

Accordingly, if the prediction mode of a data unit is not a skip mode based on the skip mode information of the data unit, the parser/extractor 21 may extract at least one of skip/direct mode encoding information indicating whether direct mode information of the data unit is encoded, coding unit merging determination information indicating whether an occurrence of merging between coding units is determined, and prediction unit merging determination information indicating whether an occurrence of merging between prediction units is determined. Also, based on the extracted information, the data unit merger/decoder 23 may perform decoding by using both a skip mode and a direct mode, or may decode video data merged by data unit merging based on a coding unit or a prediction unit.

For a data unit merged with a neighboring data unit, the data unit merger/decoder 23 may decode video data by determining the reference index and reference direction of a data unit having a skip mode according to a preset rule, and by following the reference index and reference direction of the motion information of the neighboring data unit. Since the rule for determining the reference index and reference direction of the data unit having the skip mode may be preset, the rule may be implicitly signaled.
As video resolution increases, the amount of data also rapidly increases, the size of a data unit grows, redundant data increases, and thus the number of data units having a skip mode or a direct mode increases. However, since a previous macroblock merging method determines whether to merge only macroblocks whose prediction mode is an inter mode other than a skip mode and a direct mode, and merges such a macroblock with a neighboring macroblock having a fixed size and a fixed position, the previous macroblock merging method is applied only to a limited region.

The apparatus 10 and the apparatus 20 may perform data unit merging on data units having various sizes, various shapes, and various prediction modes, and may merge a data unit with neighboring data units at various positions. Accordingly, since various data units share the prediction related information of more diverse neighboring data units, redundant data may be removed by referring to peripheral information in a wider range, thereby improving video encoding efficiency.
Fig. 3 is a diagram showing neighboring macroblocks that may be merged with a current macroblock according to the related art.

According to the block merging method of the related art, a neighboring block included in a merging candidate group of neighboring blocks to be merged with a current macroblock should be encoded in an inter mode and encoded before the current macroblock. Accordingly, only blocks neighboring the upper boundary and the left boundary of the current macroblock may be included in the merging candidate group.

Merged blocks may constitute one region, and encoding information and merging related information may be encoded according to regions of merged blocks. For example, merging information about whether block merging is performed and, if block merging is performed, merging block position information indicating which of the upper neighboring block and the left neighboring block of the current macroblock is merged may be encoded.

According to the block merging method of the related art, although a plurality of blocks contact the boundaries of the current macroblock, only a neighboring block contacting the upper-left sample of the current block may be selected to be merged with the current macroblock.

That is, one of a first upper neighboring block 32 neighboring the upper boundary of a first current macroblock 31 and contacting the upper-left sample of the first current macroblock 31, and a second left neighboring block 33 neighboring the left boundary of the first current macroblock 31 and contacting the upper-left sample of the first macroblock 31, may be selected to be merged with the first current macroblock 31.

Similarly, one of a second upper neighboring block 36 and a second left neighboring block 37 contacting the upper-left sample of a second current macroblock 35 may be selectively merged with the second current macroblock 35.
Fig. 4 and Fig. 5 are diagrams for explaining methods of selecting a data unit to be merged with a current data unit from among neighboring data units of the current data unit, according to the related art and an exemplary embodiment, respectively.

Referring to Fig. 4, according to the data unit merging method of the related art, although neighboring data units 42, 43, and 44 contact the upper boundary of a current data unit 41 and neighboring data units 45, 46, 47, and 48 contact the left boundary of the current data unit 41, the data units that may be merged with the current data unit 41 are limited to the data unit 42 as the upper neighboring data unit and the data unit 45 as the left neighboring data unit. Also, since merging is possible only with a neighboring data unit whose prediction mode is an inter mode, if the prediction modes of the neighboring data units 42 and 44 are a skip mode or a direct mode, the neighboring data units 42 and 44 are not considered as data units to be merged.

According to the data unit merging method of the apparatus 10 and the apparatus 20 of Fig. 5, the merging candidate group of neighboring data units that may be merged with the current data unit 41 may include all of the upper neighboring data units 42, 43, and 44 and the left neighboring data units 45, 46, 47, and 48. In this case, whether the current data unit 41 is merged with a neighboring data unit may be determined even if the prediction mode of the current data unit 41 is a skip mode or a direct mode as well as an inter mode.
For example, one of an upper merging candidate group 52 including the upper neighboring data units 42, 43, and 44 of the current data unit 41 may be determined as an upper merging candidate A'. Similarly, one of a left merging candidate group 55 including the left neighboring data units 45, 46, 47, and 48 of the current data unit 41 may be determined as a left merging candidate L'. One of the upper merging candidate A' and the left merging candidate L' may be finally determined as the neighboring data unit to be merged with the current data unit 41.

The apparatus 10 and the apparatus 20 may determine a method of determining one of the upper merging candidate group 52 as the upper merging candidate A' and a method of determining one of the left merging candidate group 55 as the left merging candidate L' according to a preset method. Information about the method may be implicitly signaled. Even if the information about the method is not separately encoded in order to search for the upper merging candidate A' in the upper merging candidate group 52 or to search for the left merging candidate L' in the left merging candidate group 55, the apparatus 10 and the apparatus 20 may perceive the preset method by which the upper merging candidate A' and the left merging candidate L' are searched for.

For example, neighboring data units having the same reference index information as the current data unit 41 in the upper merging candidate group 52 and the left merging candidate group 55 may be determined as the upper merging candidate A' and the left merging candidate L'. Alternatively, the neighboring data units closest to the upper-left sample of the current data unit 41 whose prediction mode is an inter mode in the upper merging candidate group 52 and the left merging candidate group 55 may be determined as the upper merging candidate A' and the left merging candidate L'.

Similarly, the apparatus 10 and the apparatus 20 may finally determine one of the upper merging candidate A' and the left merging candidate L' as the neighboring data unit to be merged with the current data unit 41 according to a preset method.
Fig. 6 and Fig. 7 are block diagrams for explaining orders of encoding and decoding prediction mode information, merging related information, and prediction related information, according to exemplary embodiments.

First, Fig. 6 is a block diagram for explaining a method of encoding and decoding prediction mode information, merging related information, and prediction related information, according to a first exemplary embodiment in which an occurrence of data unit merging is determined by considering whether the prediction mode of a current data unit is a skip mode.
In operation 61, the apparatus 10 encodes skip mode information 'skip_flag' of a current data unit. If the prediction mode of the current data unit is a skip mode, the skip mode information 'skip_flag' may be set to 1, and if the prediction mode of the current data unit is not a skip mode, the skip mode information 'skip_flag' may be set to 0.

If it is determined in operation 61 that the prediction mode of the current data unit is a skip mode, the method proceeds to operation 62. In operation 62, merging information 'merging_flag' may not be encoded. If it is determined in operation 61 that the prediction mode of the current data unit is not a skip mode, the method proceeds to operation 63. In operation 63, the merging information 'merging_flag' is encoded. The prediction direction and reference index information of a current data unit whose prediction mode is a skip mode may be determined according to a preset rule. For the prediction direction and reference index information of a current data unit to be merged with a neighboring data unit, the reference index and reference direction of the motion vector of the neighboring data unit may be followed or referred to.

For example, if there is a rule that the prediction direction of a data unit whose prediction mode is a skip mode is set to a List0 direction if the current slice is a P slice, and that the prediction direction of the data unit whose prediction mode is a skip mode is set to a Bi direction and the reference index is set to 0 if the current slice is a B slice, then prediction encoding of the data unit whose prediction mode is the skip mode is feasible according to the rule.
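A minimal sketch of that example rule follows; the slice-type strings and field names are illustrative assumptions, and the reference index for the P-slice case, which the text leaves unspecified, is assumed to be 0.

```python
# Assumed encoding of the example rule above: P slice -> List0 direction,
# B slice -> Bi direction with reference index 0.
def skip_mode_prediction_info(slice_type):
    if slice_type == "P":
        return {"direction": "List0", "ref_index": 0}  # ref index assumed
    if slice_type == "B":
        return {"direction": "Bi", "ref_index": 0}
    raise ValueError("skip mode presumes an inter-predicted slice")
```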
If the current data unit is merged with a neighboring data unit, the merging information 'merging_flag' of the current data unit may be set to 1, and if the current data unit is not merged with a neighboring data unit, the merging information 'merging_flag' of the current data unit may be set to 0. In operation 64, if the current data unit is merged with a neighboring data unit, the prediction direction and reference index information 'Inter direction/Ref index' of the current data unit may not be encoded, since the auxiliary prediction information for prediction encoding the current data unit may follow the information of the neighboring data unit or may be obtained from the information of the neighboring data unit. In operation 65, although the current data unit is merged with the neighboring data unit, motion vector difference information 'mvd' is encoded.

In operation 66, if the current data unit is not merged with a neighboring data unit, the prediction direction and reference index information 'Inter direction/Ref index' of the current data unit may be encoded, and in operation 67, the motion vector difference information 'mvd' may be encoded. For example, the prediction direction of the current data unit may include a List0 direction, a List1 direction, and a Bi direction.
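For illustration, the Fig. 6 order may be summarized as in the following sketch, which returns the ordered syntax elements emitted for one data unit; the dictionary keys and tuples are assumptions, not defined syntax.

```python
def encode_unit_fig6(unit):
    """Ordered syntax elements for one data unit (operations 61 to 67)."""
    out = [("skip_flag", int(unit["mode"] == "SKIP"))]        # operation 61
    if unit["mode"] == "SKIP":
        return out                                            # operation 62
    out.append(("merging_flag", int(unit["merged"])))         # operation 63
    if not unit["merged"]:
        out.append(("inter_dir_ref_idx", unit["pred_info"]))  # operation 66
    # operation 64: when merged, prediction direction/reference index are
    # omitted; mvd is still coded either way (operation 65 or 67).
    out.append(("mvd", unit["mvd"]))
    return out
```

For example, a merged inter unit yields only skip_flag, merging_flag, and mvd, mirroring the omission described in operation 64.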
As in operations 61 to 67, the apparatus 20 may extract and read the skip mode information of the current data unit, and may extract and read the merging information and the prediction related information based on the skip mode information.
Fig. 7 is a block diagram for explaining a method of encoding/decoding prediction mode information, merging related information, and prediction related information, according to a second exemplary embodiment in which an occurrence of data unit merging is determined by considering whether the prediction mode of a current data unit is a skip mode and a direct mode.

In operation 71, the apparatus 10 encodes skip mode information 'skip_flag' of a current data unit. If it is determined in operation 71 that the prediction mode of the current data unit is a skip mode, the method proceeds to operation 72. In operation 72, merging information 'merging_flag' may not be encoded.

If it is determined in operation 71 that the prediction mode of the current data unit is not a skip mode, the method proceeds to operation 73. In operation 73, direct mode information 'direct_flag' is encoded. If the prediction mode of the current data unit is a direct mode, the direct mode information 'direct_flag' of the current data unit may be set to 1, and if the prediction mode of the current data unit is not a direct mode, the direct mode information 'direct_flag' of the current data unit may be set to 0. If it is determined in operation 73 that the prediction mode of the current data unit is a direct mode, the method proceeds to operation 74. In operation 74, the merging information 'merging_flag' may not be encoded.

If it is determined in operation 73 that the prediction mode of the current data unit is not a direct mode, the method proceeds to operation 75. In operation 75, the merging information 'merging_flag' is encoded. In operation 76, if the current data unit is merged with a neighboring data unit, the prediction direction and reference index information 'Inter direction/Ref index' of the current data unit may not be encoded, and in operation 77, motion vector difference information 'mvd' is encoded. In operations 78 and 79, if the current data unit is not merged with a neighboring data unit, the prediction direction and reference index information 'Inter direction/Ref index' and the motion vector difference information 'mvd' of the current data unit may be encoded.
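Under the same assumptions as the Fig. 6 sketch, the Fig. 7 order adds the direct-mode branch:

```python
def encode_unit_fig7(unit):
    """Ordered syntax elements for one data unit (operations 71 to 79)."""
    out = [("skip_flag", int(unit["mode"] == "SKIP"))]         # operation 71
    if unit["mode"] == "SKIP":
        return out                                             # operation 72
    out.append(("direct_flag", int(unit["mode"] == "DIRECT"))) # operation 73
    if unit["mode"] == "DIRECT":
        # operation 74: no merging_flag; further direct-mode syntax is not
        # covered by this sketch.
        return out
    out.append(("merging_flag", int(unit["merged"])))          # operation 75
    if not unit["merged"]:
        out.append(("inter_dir_ref_idx", unit["pred_info"]))   # operation 78
    out.append(("mvd", unit["mvd"]))                           # operation 77 or 79
    return out
```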
As in operations 71 to 79, the apparatus 20 may extract and read the skip mode information or the direct mode information of the current data unit, and may extract and read the merging information and the prediction related information based on the skip mode information or the direct mode information.
Fig. 8 and Fig. 9 are diagrams for explaining methods of selecting a data unit to be merged with a current data unit from among extended neighboring data units of the current data unit, according to the related art and an exemplary embodiment, respectively.

According to the data unit merging method of the related art of Fig. 8, objects to be merged with a current data unit 81 are limited to an upper neighboring data unit 82 and a left neighboring data unit 85 contacting the upper-left sample of the current data unit 81. That is, neighboring data units 89, 91, and 93 contacting the upper-left corner, the upper-right corner, and the lower-left corner of the current data unit 81 are not included in the merging candidate group of the current data unit 81.

The data unit merging method of Fig. 9 is similar to the motion vector prediction method of an inter mode. In Fig. 9, the merging candidate group of neighboring data units that may be merged with the current data unit 81 may include not only upper neighboring data units 82, 83, and 84 and left neighboring data units 85, 86, 87, and 88, but also the neighboring data units 89, 91, and 93 contacting the upper-left corner, the upper-right corner, and the lower-left corner of the current data unit 81.

For example, one of an upper merging candidate group 92 including the upper neighboring data units 82, 83, and 84 of the current data unit 81 may be determined as an upper merging candidate A', and one of a left merging candidate group 95 including the left neighboring data units 85, 86, 87, and 88 may be determined as a left merging candidate L'. Also, one of a corner merging candidate group 96 including the neighboring data units 89, 91, and 93 contacting the upper-left corner, the upper-right corner, and the lower-left corner of the current data unit 81 may be determined as a corner merging candidate C'. One of the upper merging candidate A', the left merging candidate L', and the corner merging candidate C' may be finally determined as the neighboring data unit to be merged with the current data unit 81.

The method of determining one of the upper merging candidate group 92 as the upper merging candidate A', the method of determining one of the left merging candidate group 95 as the left merging candidate L', the method of determining one of the corner merging candidate group 96 as the corner merging candidate C', and the method of finally determining one of the upper merging candidate A', the left merging candidate L', and the corner merging candidate C' may follow a preset rule such as the rule described with reference to Fig. 5.

In Fig. 9, since the directions of the candidate data units that may be merged with the current data unit 81 include the upper, left, and corner directions, the merging position information may be expressed as a merging index, rather than as a flag of 0 or 1.
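The distinction being drawn may be sketched as follows, with an assumed fixed candidate ordering (the ordering is an assumption for illustration, not part of the disclosure):

```python
# With three candidate directions, the merging position is an index into an
# ordered candidate list rather than a one-bit flag.
MERGE_CANDIDATES = ("A_upper", "L_left", "C_corner")

def merging_index(chosen):
    return MERGE_CANDIDATES.index(chosen)   # e.g. "C_corner" -> 2
```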
Figure 10, Figure 11, and Figure 12 are block diagrams for explaining orders of encoding and decoding prediction mode information, merging related information, and prediction related information, according to various exemplary embodiments.

Referring to Figure 10, the apparatus 10 may encode skip mode information and merging information for each prediction unit, where a prediction unit is a data unit for prediction encoding.

In operation 101, the apparatus 10 may encode skip mode information 'skip_flag' of a prediction unit, and in operation 102, the apparatus 10 may encode merging information 'merging_flag' of a prediction unit other than a skip mode. In operations 103 and 104, the apparatus 10 may encode unique prediction mode information 'Prediction info' and partition information 'Partition info' only of a prediction unit whose prediction mode is not a skip mode and that is not merged with a neighboring data unit.

Accordingly, the apparatus 20 may extract and read the skip mode information and the merging information for each prediction unit. The apparatus 20 may extract the unique prediction mode information and partition information of a prediction unit whose prediction mode is not a skip mode and that is not merged with a neighboring data unit.
Referring to Figure 11, the apparatus 10 may encode skip mode information for each prediction unit, and may encode merging information of each partition obtained by dividing a prediction unit for the purpose of more accurate prediction encoding.

In operation 111, the apparatus 10 may encode skip mode information 'skip_flag' of a prediction unit; in operation 112, the apparatus 10 may encode prediction mode information 'Prediction info' of a prediction unit whose prediction mode is not a skip mode; and in operation 113, the apparatus 10 may encode partition information 'Partition info'.

In operation 114, the apparatus 10 may encode merging information 'merging_flag' of each partition of the prediction unit whose prediction mode is not a skip mode. In operation 115, the apparatus 10 may encode unique motion information 'Motion info' of a partition that is not merged with a neighboring data unit, from among the partitions of the prediction unit whose prediction mode is not a skip mode.

Accordingly, the apparatus 20 may extract and read the skip mode information for each prediction unit, and may extract and read the merging information for each partition. The apparatus 20 may extract the unique motion information of a partition whose prediction mode is not a skip mode and that is not merged with a neighboring unit.
Referring to Figure 12, the apparatus 10 may encode skip mode information for each prediction unit, and may encode merging information for each partition when a predetermined condition is satisfied.

In operation 121, the apparatus 10 may encode skip mode information 'skip_flag' of a prediction unit; in operation 122, the apparatus 10 may encode prediction mode information 'Prediction info' of a prediction unit whose prediction mode is not a skip mode; and in operation 123, the apparatus may encode partition information 'Partition info'.

In operation 124, the apparatus 10 determines whether the predetermined condition is satisfied for each partition of the prediction unit. In operation 125, merging information 'merging_flag' of only a data unit that satisfies the predetermined condition, from among the partitions of the prediction unit whose prediction mode is not a skip mode, may be encoded. In operation 126, the apparatus 10 encodes unique motion information 'Motion info' of a partition that satisfies the predetermined condition but is not merged with a neighboring data unit, and of a partition that does not satisfy the predetermined condition, from among the partitions of the prediction unit whose prediction mode is not a skip mode.

The predetermined condition of a partition for encoding merging information may include a case where the prediction mode of the partition is a predetermined prediction mode. For example, merging information of a partition may be encoded according to a condition that the prediction mode is not a skip mode but an inter mode (a non-skip mode), a condition that the prediction mode is neither a skip mode nor a direct mode but an inter mode (a non-skip inter mode and a non-direct inter mode), or a condition that the prediction mode is an inter mode that is not divided into partitions (an inter mode of a non-partitioned case).

In detail, in operation 124, if data unit merging is performed on data units whose prediction mode is neither a skip mode nor a direct mode but an inter mode, the apparatus 10 may determine whether the prediction mode of a partition of a prediction unit other than a skip mode is not a direct mode but an inter mode. In operation 125, merging information 'merging_flag' of a partition whose prediction mode is not a direct mode may be encoded. In operation 126, unique motion information 'Motion info' of a partition that is not a direct mode and is not merged with a neighboring data unit, and of a partition whose prediction mode is a direct mode, may be encoded.
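A sketch of the operation-124 check under the example condition just described follows; the mode names are illustrative assumptions.

```python
def merging_flag_is_coded(partition_mode):
    """Operation 124 under the example condition: merging information is
    coded only for partitions in plain inter mode, i.e. neither a skip
    mode nor a direct mode."""
    return partition_mode not in ("SKIP", "DIRECT")
```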
Accordingly, the apparatus 20 may extract and read the skip mode information for each prediction unit, and may extract and read the merging information for each partition. The apparatus 20 may extract and read the unique motion information of a partition whose prediction mode is not a skip mode and that satisfies the predetermined condition but is not merged with a neighboring data unit, and of a partition that does not satisfy the predetermined condition.
Figure 13 is a diagram showing neighboring data units that are not merged with a current partition, according to an exemplary embodiment.

For more accurate prediction encoding, a data unit for prediction encoding, that is, a prediction unit, may be divided into two or more partitions. For example, the width of a first prediction unit 131 may be divided into a first partition 132 and a second partition 133.

Since the first partition 132 and the second partition 133 have different motion characteristics even though they are included in the first prediction unit 131, data unit merging may not be performed between the first partition 132 and the second partition 133. Accordingly, the apparatus 10 may not determine whether data unit merging is performed between the first partition 132 and the second partition 133 in the same first prediction unit 131. Also, merging index information for the second partition 133 may not include an index indicating a left neighboring data unit.

Even when the height of a second prediction unit 135 is divided into a third partition 136 and a fourth partition 137, since data unit merging should not be performed between the third partition 136 and the fourth partition 137, the apparatus 10 may not determine whether data unit merging is performed between the third partition 136 and the fourth partition 137. Also, merging index information for the fourth partition 137 may not include an index indicating an upper neighboring data unit.
Figure 14 is a block diagram showing a candidate data unit that varies according to the shape and position of a current partition, according to an exemplary embodiment.

According to the shape and position of a partition, the position of a neighboring data unit to be merged may vary. For example, if a prediction unit 141 is divided into a left partition 142 and a right partition 143, neighboring data unit candidates that may be merged with the left partition 142 may be a data unit 144 neighboring the upper boundary of the left partition 142, a data unit 145 neighboring the left boundary of the left partition 142, and a data unit 146 neighboring the upper-right corner of the left partition 142.

Although the right partition 143 contacts the left partition 142 at its left boundary, since the left partition 142 and the right partition 143 are partitions of the same prediction unit 141, merging may not be performed between the left partition 142 and the right partition 143. Accordingly, neighboring data unit candidates that may be merged with the right partition 143 may be the data unit 146 neighboring the upper boundary of the right partition 143 and a data unit 147 neighboring the upper-right corner of the right partition 143. Also, merging index information for the right partition 143 may not include an index indicating an upper-left neighboring data unit.

Figure 15 is a diagram showing neighboring data units that may not be merged with a current partition that is a partition having a geometric shape, according to an exemplary embodiment.
In the prediction encoding of the apparatus 10, a prediction unit may be divided not only in a horizontal or vertical direction but also in an arbitrary direction into partitions having various geometric shapes. Prediction units 148, 152, 156, and 160 obtained by performing division in arbitrary directions are illustrated in Figure 15.

According to the position and shape of a partition having a geometric shape, the partition may not be merged with neighboring data units contacting its upper boundary and left boundary. For example, from among two partitions 149 and 150 of the prediction unit 148, the partition 150 may be merged with a neighboring data unit 151 contacting its left boundary. However, since the neighboring data unit contacting its upper boundary is the partition 149 included in the same prediction unit 148, the partition 150 may not be merged with the upper neighboring data unit. In this case, merging index information of the partition 150 may not include an index indicating the partition 149 that is the upper neighboring data unit.

Similarly, from among two partitions 153 and 154 of the prediction unit 152, the partition 154 may be merged with a left neighboring data unit 155. However, since the upper neighboring data unit is the partition 153 included in the same prediction unit 152, the partition 154 may not be merged with the upper neighboring data unit.

Similarly, from among two partitions 157 and 158 of the prediction unit 156, the partition 158 may be merged with an upper neighboring data unit 159. However, since the left neighboring data unit is the partition 157 included in the same prediction unit 156, the partition 158 may not be merged with the left neighboring data unit.

Similarly, from among two partitions 161 and 162 of the prediction unit 160, since the partition 161 included in the same prediction unit 160 is both the upper neighboring data unit and the left neighboring data unit of the partition 162, the partition 162 may not be merged with the upper neighboring data unit and the left neighboring data unit.
As described with reference to Figures 13, 14, and 15, if a neighboring data unit that may not be merged exists according to the shape or position of a data unit, the merging index information may not include an index indicating that neighboring data unit.

Also, the apparatus 10 may not perform data unit merging that would extend a current data unit and make the current data unit overlap another previously existing data unit.

For example, if one prediction unit is divided into two partitions and a predetermined candidate data unit of the second partition has the same motion information as the first partition, merging between the second partition and the predetermined candidate data unit may not be allowed.

For example, from among the first partition 132 and the second partition 133 of the first prediction unit 131 of Figure 13, if the upper prediction unit of the second partition 133 has the same motion information as the first partition 132, the first partition 132 and the upper prediction unit of the second partition 133 may be excluded from the candidate data unit group of the second partition 133. This is because, if data unit merging were performed such that the second partition 133 referred to the motion information of the upper prediction unit, this would be the same as referring to the motion information of the first partition 132.
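A sketch of the two exclusions described above follows, under assumed attribute names; it is an illustration, not the normative candidate construction.

```python
# A candidate is dropped if it is a partition of the same prediction unit,
# or if its motion information duplicates that of the sibling partition
# (merging with it would amount to merging the two partitions).
def allowed_merge_candidates(partition, neighbours, sibling=None):
    allowed = []
    for unit in neighbours:
        if unit.prediction_unit is partition.prediction_unit:
            continue   # partitions of one prediction unit are never merged
        if sibling is not None and unit.motion_info == sibling.motion_info:
            continue   # would be the same as merging with the sibling
        allowed.append(unit)
    return allowed
```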
Merging information, together with whether data unit merging is performed, may be set through context modeling that considers the prediction modes and partition types of neighboring data units. An index of a context model may be expressed as the merging information by analyzing a combination of the prediction modes and partition types of the neighboring data units of a current data unit, for the case where the current data unit and the neighboring data units are merged with each other.

Table 1 shows merging information through context modeling, according to an exemplary embodiment. For convenience of explanation, the objects to be merged with a current data unit are limited to a left neighboring data unit and an upper neighboring data unit.
Table 1
Partitions having arbitrary shapes may be selectively included here, such as symmetric partition types 2N×2N, 2N×N, N×2N, and N×N obtained by dividing the height or width of a prediction unit according to symmetric ratios, asymmetric partition types 2N×nU, 2N×nD, nL×2N, and nR×2N obtained by dividing the height or width of a prediction unit according to asymmetric ratios such as 1:n or n:1, and geometric partition types obtained by dividing the height or width of a prediction unit into various geometric shapes. The asymmetric partition types 2N×nU and 2N×nD are obtained by dividing the height of a prediction unit according to ratios of 1:3 and 3:1, respectively, and the asymmetric partition types nL×2N and nR×2N are obtained by dividing the width of a prediction unit according to ratios of 1:3 and 3:1, respectively.

According to Table 1, since data unit merging is not performed when the prediction modes of both the left neighboring data unit and the upper neighboring data unit of a current data unit are intra modes, the merging information of the current data unit is assigned to index 0, and the context model does not need to be distinguished according to partition type.

Also, assuming that the prediction modes of the left neighboring data unit and the upper neighboring data unit are inter modes rather than a skip mode or a direct mode, when only one of the left neighboring data unit and the upper neighboring data unit is merged with the current data unit, and when both the left neighboring data unit and the upper neighboring data unit are merged with the current data unit, a context model of the merging information may be set according to combinations of whether data unit merging is performed according to the partition types of the neighboring data units. In this case, each piece of merging information may be assigned to one of context model indices 1 to 6 according to Table 1.

Also, assuming that the prediction modes are a skip mode and a direct mode, when at least one of the left neighboring data unit and the upper neighboring data unit is in a skip mode or a direct mode, a context model of the merging information may be set according to the partition types of the neighboring data units, and each piece of merging information may be assigned to one of context model indices 7 to 9 according to Table 1.

Accordingly, the apparatus 20 may read the merging information based on context modeling, and may analyze whether merging is performed between the current data unit and neighboring data units, and the prediction modes and partition types of the neighboring data units.
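The index ranges stated above (0, 1 to 6, and 7 to 9) may be sketched as follows; how the neighbors' partition types map to an offset inside each range is not spelled out in the text and is treated as an input here.

```python
def merging_context_index(left_mode, upper_mode, partition_offset):
    """partition_offset: a small integer derived from the neighbors'
    partition types (the derivation is assumed, not specified here)."""
    if left_mode == "INTRA" and upper_mode == "INTRA":
        return 0                        # no data unit merging is performed
    if "SKIP" in (left_mode, upper_mode) or "DIRECT" in (left_mode, upper_mode):
        return 7 + partition_offset     # one of indices 7 to 9
    return 1 + partition_offset         # one of indices 1 to 6
```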
The apparatus 20 may infer the motion information of a current data unit by using the motion information of a neighboring data unit merged with the current data unit.

Also, if the shape of a merged data unit formed by data unit merging is a regular square, the apparatus 10 and the apparatus 20 may perform transformation on the merged data unit.

Also, in the apparatus 10 and the apparatus 20, neighboring data units merged with a current data unit may share information about an intra prediction direction. Information about a prediction direction for a merged data unit formed by data unit merging may not be encoded or decoded per data unit, but may be encoded or decoded only once for the merged data unit.
Figure 16 is a diagram showing an example of using a neighboring data unit determined to be merged with a current data unit, according to an exemplary embodiment.

The apparatus 10 and the apparatus 20 may extend the boundaries of neighboring data units merged with a current data unit 163, and may use the extended boundaries to divide the current data unit 163 into partitions. For example, if the current data unit 163 is merged with left neighboring data units 164, 165, and 166, the boundaries of the left neighboring data units 164, 165, and 166 may be extended to reach the current data unit 163. The current data unit 163 may be divided into partitions 167, 168, and 169 according to the extended boundaries of the left neighboring data units 164, 165, and 166.
Figure 17 is a flowchart showing a method of encoding a video by using data unit merging, according to an exemplary embodiment.

In operation 171, a coding mode, which indicates a data unit for encoding a picture and an encoding method including prediction encoding performed for each data unit, is determined.

In operation 172, an occurrence of merging with at least one neighboring data unit is determined according to data units, based on at least one of a prediction mode and the coding mode. A data unit may include a prediction unit for prediction encoding and a partition for accurate prediction encoding of the prediction unit.

A data unit to be merged with a current data unit may be searched for from among a plurality of upper neighboring data units contacting the upper boundary of the current data unit and a plurality of left neighboring data units contacting the left boundary of the current data unit. Also, a data unit to be merged with the current data unit may be searched for from among neighboring data units contacting the upper-left corner, the upper-right corner, and the lower-left corner of the current data unit.

In operation 173, prediction mode information, merging related information, and prediction related information may be determined according to data units based on the merging with the neighboring data unit, and encoding information including the prediction mode information, the merging related information, and the prediction related information is encoded.

Merging related information of a data unit whose prediction mode is a skip mode or a direct mode may be encoded. Accordingly, merging related information of a data unit determined to be merged with a predetermined neighboring data unit may be encoded after skip mode information or direct mode information is encoded. Merging related information may include merging information indicating whether merging is performed between a current data unit and a neighboring data unit, and merging index information indicating the neighboring data unit.

If both skip mode information and merging related information of a prediction unit are encoded, prediction mode information and partition type information of the prediction unit may be encoded after the skip mode information and the merging related information are encoded.

If skip mode information of a prediction unit is encoded and merging related information of each partition is encoded, the merging related information may be encoded according to partitions after the skip mode information, the prediction mode information, and the partition type information of the prediction unit are encoded.
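Purely as an illustration of what the encoding information of operation 173 groups together, the fields below are assumptions rather than normative syntax:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EncodingInformation:
    prediction_mode: str            # e.g. "SKIP", "DIRECT", "INTER", "INTRA"
    merging_flag: bool              # whether merging with a neighbor occurs
    merging_index: Optional[int]    # which neighbor, when merging_flag is set
    prediction_info: dict           # direction, reference index, mvd, ...
```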
Figure 18 is a flowchart showing a method of decoding a video by using data unit merging, according to an exemplary embodiment.

In operation 181, a received bitstream is parsed, encoded video data and encoding information are extracted from the bitstream, and prediction mode information, merging related information, and prediction related information are extracted from the encoding information.

The merging related information may be extracted based on a result of reading skip mode information or direct mode information of a current data unit. For example, merging related information of a data unit whose prediction mode is not a skip mode may be extracted. Alternatively, merging related information of a data unit whose prediction mode is an inter mode, rather than a skip mode and a direct mode, may be extracted. Merging information indicating whether merging is performed between a current data unit and a neighboring data unit, and merging index information indicating the neighboring data unit, may be read from the merging related information.

If skip mode information and merging related information are extracted for each prediction unit, prediction mode information and partition type information of the prediction unit may be extracted after the skip mode information and the merging related information are extracted.

If skip mode information is extracted at a prediction unit level and merging related information is extracted at a partition level, the merging related information may be extracted according to partitions after the skip mode information, the prediction mode information, and the partition type information of the prediction unit are extracted.

In operation 182, an occurrence of merging with at least one neighboring data unit is analyzed according to data units, based on at least one of the prediction mode and the coding mode, from the prediction mode information and the merging related information. Inter prediction and motion compensation are performed on a data unit merged with a neighboring data unit by using the prediction related information of the neighboring data unit, and the encoded video data is decoded according to coding units determined based on the encoding information.

The data unit to be merged with the current data unit may be determined from among a plurality of upper neighboring data units contacting the upper boundary and a plurality of left neighboring data units contacting the left boundary, based on the merging information and the merging index information. Also, the data unit to be merged with the current data unit may be determined from among neighboring data units contacting the upper-left corner, the upper-right corner, and the lower-left corner of the current data unit.

Motion related information of the current data unit may be reconstructed by using the motion related information of the data unit merged with the current data unit. The current data unit may be restored, and a picture may be restored, through motion compensation performed on the current data unit by using the motion related information.
An apparatus and method of encoding a video and an apparatus and method of decoding a video by using data unit merging based on coding units having a tree structure, according to one or more exemplary embodiments, will now be explained with reference to Figures 19 to 33.

Figure 19 is a block diagram of an apparatus 100 for encoding a video by using data unit merging based on coding units having a tree structure, according to an exemplary embodiment.

The apparatus 100 includes a maximum coding unit splitter 110, a coding unit determiner 120, and an output unit 130. For convenience of explanation, the apparatus 100 for encoding a video by using data unit merging based on coding units having a tree structure is referred to as 'the apparatus 100 for encoding a video'.
The maximum coding unit splitter 110 may divide a current picture based on a maximum coding unit for the current picture of an image. If the current picture is larger than the maximum coding unit, the image data of the current picture may be divided into at least one maximum coding unit. The maximum coding unit may be a data unit having a size of 32×32, 64×64, 128×128, 256×256, etc., wherein the shape of the data unit is a square whose width and height are powers of 2. The image data may be output to the coding unit determiner 120 according to the at least one maximum coding unit.

A coding unit may be characterized by a maximum size and a depth. The depth denotes the number of times a coding unit is spatially divided from the maximum coding unit, and as the depth deepens, deeper coding units according to depths may be divided from the maximum coding unit down to a minimum coding unit. The depth of the maximum coding unit is an uppermost depth, and the depth of the minimum coding unit is a lowermost depth. Since the size of a coding unit corresponding to each depth decreases as the depth of the maximum coding unit deepens, a coding unit corresponding to an upper depth may include a plurality of coding units corresponding to lower depths.
As described above, the image data of the current picture is divided into maximum coding units according to the maximum size of the coding unit, and each of the maximum coding units may include deeper coding units that are divided according to depths. Since the maximum coding unit is divided according to depths, the image data of a spatial domain included in the maximum coding unit may be hierarchically classified according to depths.

A maximum depth and a maximum size of a coding unit, which limit the total number of times the height and width of the maximum coding unit are hierarchically divided, may be predetermined.

The coding unit determiner 120 encodes at least one divided region obtained by dividing a region of the maximum coding unit according to depths, and determines a depth at which the finally encoded image data is to be output, according to the at least one divided region. In other words, the coding unit determiner 120 determines a coding depth by encoding the image data in deeper coding units according to depths, according to the maximum coding unit of the current picture, and by selecting a depth having the least encoding error. Accordingly, the encoded image data of the coding unit corresponding to the determined coding depth is finally output. Also, the coding units corresponding to the coding depth may be regarded as encoded coding units. The determined coding depth and the image data encoded according to the determined coding depth are output to the output unit 130.
The image data in the maximum coding unit is encoded based on the deeper coding units corresponding to at least one depth equal to or less than the maximum depth, and the results of encoding the image data based on each of the deeper coding units are compared. After comparing the encoding errors of the deeper coding units, a depth having the least encoding error may be selected. At least one coding depth may be selected for each maximum coding unit.

The size of the maximum coding unit is divided as coding units are hierarchically divided according to depths, and the number of coding units increases. Also, even for coding units corresponding to the same depth in one maximum coding unit, whether to divide each of the coding units corresponding to the same depth to a lower depth is determined by measuring the encoding error of the image data of each coding unit separately. Accordingly, even when image data is included in one maximum coding unit, the image data is divided into regions according to depths, and the encoding errors may differ according to regions in the one maximum coding unit; thus, the coding depths may differ according to regions in the image data. Therefore, one or more coding depths may be determined in one maximum coding unit, and the image data of the maximum coding unit may be divided according to coding units of at least one coding depth.
Accordingly, the coding unit determiner 120 may determine coding units having a tree structure included in the maximum coding unit. The 'coding units having a tree structure' include coding units corresponding to a depth determined to be the coding depth, from among all deeper coding units included in the maximum coding unit. A coding unit of a coding depth may be hierarchically determined according to depths in the same region of the maximum coding unit, and may be independently determined in different regions. Similarly, a coding depth in a current region may be determined independently of a coding depth in another region.
A maximum depth is an index related to the number of times division is performed from a maximum coding unit to a minimum coding unit. A first maximum depth may denote the total number of divisions from the maximum coding unit to the minimum coding unit. A second maximum depth may denote the total number of depth levels from the maximum coding unit to the minimum coding unit. For example, when the depth of the maximum coding unit is 0, the depth of a coding unit in which the maximum coding unit is divided once may be set to 1, and the depth of a coding unit in which the maximum coding unit is divided twice may be set to 2. Here, if the minimum coding unit is a coding unit in which the maximum coding unit is divided four times, 5 depth levels of depths 0, 1, 2, 3, and 4 exist, and thus the first maximum depth may be set to 4, and the second maximum depth may be set to 5.
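The example above can be checked with a small calculation, assuming a 64×64 maximum coding unit whose size is halved at each depth:

```python
def coding_unit_size(max_size, depth):
    return max_size >> depth   # the size is halved at each deeper depth

assert [coding_unit_size(64, d) for d in range(5)] == [64, 32, 16, 8, 4]
# Divided four times down to 4x4: the first maximum depth is 4; five depth
# levels (0 to 4) exist: the second maximum depth is 5, as in the text.
```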
Prediction encoding and transformation may be performed according to the maximum coding unit. The prediction encoding and the transformation may also be performed based on the deeper coding units according to a depth equal to or less than the maximum depth, according to the maximum coding unit. Transformation performed for encoding a video may include frequency transformation, orthogonal transformation, integer transformation, etc.
Since the number of deeper coding units increases whenever the maximum coding unit is divided according to depths, encoding including prediction encoding and transformation is performed on all of the deeper coding units generated as the depth deepens. For convenience of description, the prediction encoding and the transformation will now be described based on a coding unit of a current depth in a maximum coding unit.

The apparatus 100 may variously select the size or shape of a data unit for encoding the image data. In order to encode the image data, operations such as prediction encoding, transformation, and entropy encoding are performed, and at this time, the same data unit may be used for all operations, or a different data unit may be used for each operation.

For example, the apparatus 100 may select not only a coding unit for encoding the image data, but also a data unit different from the coding unit in order to perform prediction encoding on the image data in the coding unit.
For prediction encoding in the maximum coding unit, the prediction encoding may be performed based on a coding unit corresponding to a coding depth, that is, based on a coding unit that is no longer divided into coding units corresponding to a lower depth. Hereinafter, the coding unit that is no longer divided and becomes a basis unit for prediction encoding is referred to as a 'prediction unit'. A partition obtained by dividing the prediction unit may include a data unit obtained by dividing at least one of the height and the width of the prediction unit.

For example, when a coding unit of 2N×2N (where N is a positive integer) is no longer divided and becomes a prediction unit of 2N×2N, the size of a partition may be 2N×2N, 2N×N, N×2N, or N×N. Examples of a partition type include symmetric partitions obtained by symmetrically dividing the height or width of the prediction unit, partitions obtained by asymmetrically dividing the height or width of the prediction unit (such as 1:n or n:1), partitions obtained by geometrically dividing the prediction unit, and partitions having arbitrary shapes.
The prediction mode of a prediction unit may be at least one of an intra mode, an inter mode, and a skip mode. For example, the intra mode or the inter mode may be performed on a partition of 2N×2N, 2N×N, N×2N, or N×N. Also, the skip mode may be performed only on a partition of 2N×2N. Encoding is independently performed on one prediction unit in a coding unit, so that a prediction mode having the least encoding error is selected.
The apparatus 100 may also perform transformation on the image data in a coding unit based not only on the coding unit for encoding the image data, but also based on a data unit that is different from the coding unit.

In order to perform transformation in the coding unit, the transformation may be performed based on a transformation unit having a size smaller than or equal to the coding unit. For example, the transformation unit may include a data unit for an intra mode and a transformation unit for an inter mode.

Similarly to the coding units having a tree structure, the transformation unit in the coding unit may be recursively divided into smaller-sized regions, so that the transformation unit may be determined independently in units of regions. Thus, residual data in the coding unit may be divided according to transformations having a tree structure, according to transformation depths.
A basic data unit for transformation is now referred to as a 'transformation unit'. A transformation depth indicating the number of divisions reaching the transformation unit by dividing the height and width of the coding unit may also be set in the transformation unit. For example, in a current coding unit of 2N×2N, the transformation depth may be 0 when the size of a transformation unit is also 2N×2N, may be 1 when each of the height and width of the current coding unit is divided into two equal parts so that the coding unit is divided into a total of 4^1 transformation units and the size of the transformation unit is N×N, and may be 2 when each of the height and width of the current coding unit is divided into four equal parts so that the coding unit is divided into a total of 4^2 transformation units and the size of the transformation unit is N/2×N/2. For example, the transformation unit may be set according to a hierarchical tree structure, in which a transformation unit of an upper transformation depth is divided into four transformation units of a lower transformation depth according to the hierarchical characteristics of the transformation depth.
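The relation in the example above, for a 2N×2N coding unit, is sketched below:

```python
def transformation_units(full_size, transform_depth):
    """At transformation depth d, a coding unit of size full_size holds
    4**d square transformation units of size full_size >> d."""
    return 4 ** transform_depth, full_size >> transform_depth

assert transformation_units(64, 0) == (1, 64)    # one 2Nx2N unit
assert transformation_units(64, 1) == (4, 32)    # 4^1 units of NxN
assert transformation_units(64, 2) == (16, 16)   # 4^2 units of N/2xN/2
```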
Encoding information according to coding units corresponding to a coding depth requires not only information about the coding depth, but also information related to prediction encoding and transformation. Accordingly, the coding unit determiner 120 not only determines a coding depth having the least encoding error, but also determines a partition type in a prediction unit, a prediction mode according to prediction units, and a size of a transformation unit for transformation.
Coding units having a tree structure in a maximum coding unit and a method of determining a partition, according to exemplary embodiments, will be described in detail later with reference to Figures 21 to 31.
The coding unit determiner 120 may measure an encoding error of deeper coding units according to depths by using rate-distortion optimization based on Lagrangian multipliers.

The output unit 130 outputs, in a bitstream, the image data of the maximum coding unit, which is encoded based on the at least one coding depth determined by the coding unit determiner 120, and information about the coding mode according to the coding depth.

The encoded image data may be obtained by encoding the residual data of an image.

The information about the coding mode according to the coding depth may include information about the coding depth, about the partition type in the prediction unit, about the prediction mode, and about the size of the transformation unit.
The information about the coding depth may be defined by using division information according to depths, which indicates whether encoding is performed on coding units of a lower depth rather than a current depth. If the current depth of the current coding unit is the coding depth, the image data in the current coding unit is encoded and output, and thus the division information may be defined not to divide the current coding unit to a lower depth. Alternatively, if the current depth of the current coding unit is not the coding depth, encoding is performed on the coding unit of the lower depth, and thus the division information may be defined to divide the current coding unit to obtain the coding units of the lower depth.
If the current depth is not the coding depth, encoding is performed on the coding unit that is divided into coding units of the lower depth. Since at least one coding unit of the lower depth exists in one coding unit of the current depth, the encoding is repeatedly performed on each coding unit of the lower depth, and thus the encoding may be recursively performed for the coding units having the same depth.

Since the coding units having a tree structure are determined for one maximum coding unit, and information about at least one coding mode is determined for a coding unit of a coding depth, information about at least one coding mode may be determined for one maximum coding unit. Also, since the image data is hierarchically divided according to depths, the coding depth of the image data of the maximum coding unit may differ according to locations, and thus information about the coding depth and the coding mode may be set for the image data.

Accordingly, the output unit 130 may assign encoding information about a corresponding coding depth and coding mode to at least one of the coding unit, the prediction unit, and the minimum unit included in the maximum coding unit.
A minimum unit is a rectangular data unit obtained by dividing the minimum coding unit constituting the lowermost depth by 4. Alternatively, the minimum unit may be a maximum rectangular data unit that may be included in all of the coding units, prediction units, partition units, and transformation units included in the maximum coding unit.
For example, the encoding information output by the output unit 130 may be classified into encoding information according to coding units and encoding information according to prediction units. The encoding information according to coding units may include information about the prediction mode and about the size of the partitions. The encoding information according to prediction units may include information about an estimated direction of an inter mode, about a reference image index of the inter mode, about a motion vector, about a chroma component of an intra mode, and about an interpolation method of the intra mode. Also, information about the maximum size of a coding unit defined according to pictures, slices, or GOPs, and information about the maximum depth, may be inserted into a header of a bitstream or an SPS (sequence parameter set).
In the apparatus 100, the deeper coding unit may be a coding unit obtained by dividing a height or a width of a coding unit of an upper depth by two. In other words, when the size of the coding unit of the current depth is 2N×2N, the size of the coding unit of the lower depth is N×N. Also, the coding unit of the current depth having the size of 2N×2N may include a maximum of 4 coding units of the lower depth.
Accordingly, the apparatus 100 may form the coding units having the tree structure by determining coding units having an optimum shape and an optimum size for each maximum coding unit, based on the size of the maximum coding unit and the maximum depth determined considering characteristics of the current picture. Also, since encoding may be performed on each maximum coding unit by using any one of various prediction modes and transformations, an optimum encoding mode may be determined considering characteristics of coding units of various image sizes.
The apparatus 100 may additionally perform a data unit merging method in order to share prediction related information between data units that are adjacent to one another and have similar prediction related information. The coding unit determiner 120 of the apparatus 100 may include the coding unit determiner 11 and the data merging determiner 13 of the apparatus 10, and the output unit 130 of the apparatus 100 may include the encoding information determiner 15 of the apparatus 10.
Accordingly, the coding unit determiner 120 of the apparatus 100 may determine whether merging between adjacent data units is performed on the coding units, prediction units, and partitions having the tree structure, and the output unit 130 may perform encoding including the merging related information in the encoding information about a coding unit.
The output unit 130 may insert the merging related information, together with the encoding information about a coding unit and information about a maximum size of a coding unit of a current picture, into a header about the current picture, a PPS, or an SPS.
The coding unit determiner 120 may analyze the possibility of data unit merging for sharing prediction related information with an adjacent data unit, even in the case where the prediction mode of a current prediction unit or a current partition of the coding units having the tree structure is a skip mode or a direct mode.
The coding unit determiner 120 may include, in a merging candidate group of adjacent data units to be merged with a current data unit or a current partition, all of a plurality of left adjacent data units neighboring a left boundary of the current prediction unit or the current partition and all of a plurality of upper adjacent data units neighboring an upper boundary.
A lower-left adjacent data unit neighboring a lower-left corner of the current prediction unit or the current partition may also be referred to according to a scanning order or a decoding order based on the coding units having the tree structure. Accordingly, the coding unit determiner 120 may include, in the merging candidate group of the current prediction unit or the current partition, the data units neighboring the upper-left corner, the upper-right corner, and the lower-left corner, in addition to all of the plurality of left adjacent data units and upper adjacent data units.
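As a rough sketch of the candidate enumeration just described (positions only; the availability checks against picture boundaries and decoding order implied by the text are omitted, and min_size is an assumed granularity of the neighbouring data units, not a value from the source):

    def merge_candidate_positions(x, y, width, height, min_size):
        """Sample positions of the merging candidate group of a partition at
        (x, y): all left-boundary and upper-boundary neighbours plus the
        upper-left, upper-right and lower-left corner neighbours."""
        left = [(x - 1, y + i) for i in range(0, height, min_size)]
        upper = [(x + i, y - 1) for i in range(0, width, min_size)]
        corners = [(x - 1, y - 1),          # upper-left corner
                   (x + width, y - 1),      # upper-right corner
                   (x - 1, y + height)]     # lower-left corner
        return left + upper + corners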
Also, since the possibility of data unit merging is determined based on the prediction mode of the current prediction unit or the current partition, the encoding of the prediction mode information and of the merging information are closely related. For example, the output unit 130 may encode the encoding information such that the merging related information is set based on skip information or direct information of the current prediction unit or the current partition of the coding units having the tree structure.
Since the coding units having the tree structure formed by the apparatus 100 include prediction units and partitions having various prediction modes and various shapes, prediction units or partitions having various prediction modes and various shapes may contact the upper boundary and the left boundary of the current prediction unit or the current partition. The coding unit determiner 120 may search for the possibility of data unit merging between the current data unit and a plurality of various adjacent prediction units or adjacent partitions contacting the upper boundary and the left boundary of the current prediction unit or the current partition, and may determine an object to be merged.
Accordingly, since the current prediction unit or the current partition shares the prediction related information with adjacent data units having various sizes, shapes, and positions based on the coding units having the tree structure, redundant data may be removed by using peripheral information of a wider range, and video encoding efficiency may be improved.
Figure 20 is a block diagram of an apparatus 200 for decoding a video by using data unit merging based on coding units having a tree structure, according to an exemplary embodiment.
The apparatus 200 includes a receiver 210, an image data and encoding information extractor 220, and an image data decoder 230. For convenience of explanation, the apparatus 200 for decoding a video by using data unit merging based on coding units having a tree structure is referred to as "the apparatus 200 for decoding a video".
Definitions of the various terms, such as coding unit, depth, prediction unit, and transformation unit, for the various operations of the apparatus 200, and information about various encoding modes, are identical to those described above with reference to Figure 19 and the apparatus 100.
The receiver 210 receives and parses a bitstream of an encoded video. The image data and encoding information extractor 220 extracts encoded image data for each coding unit from the parsed bitstream, wherein the coding units have a tree structure according to maximum coding units, and outputs the extracted image data to the image data decoder 230. The image data and encoding information extractor 220 may extract information about a maximum size of a coding unit of a current picture from a header about the current picture, a PPS, or an SPS.
Also, the image data and encoding information extractor 220 extracts, from the parsed bitstream, information about a coded depth and an encoding mode for the coding units having the tree structure according to each maximum coding unit. The extracted information about the coded depth and the encoding mode is output to the image data decoder 230. In other words, the image data in the bitstream is split into maximum coding units so that the image data decoder 230 decodes the image data for each maximum coding unit.
The information about the coded depth and the encoding mode according to maximum coding units may be set for information about at least one coding unit corresponding to the coded depth, and the information about the encoding mode may include information about a partition type of a corresponding coding unit corresponding to the coded depth, about a prediction mode, and about a size of a transformation unit. Also, the encoding information about the coded depth and the encoding mode may further include merging related information about a current prediction unit or a current partition.
The information about the coded depth and the encoding mode according to maximum coding units extracted by the image data and encoding information extractor 220 is information about a coded depth and an encoding mode determined to generate a minimum encoding error when an encoder, such as the apparatus 100, repeatedly performs encoding for each deeper coding unit according to depths for each maximum coding unit. Accordingly, the apparatus 200 may restore an image by decoding the image data according to the coded depth and the encoding mode that generate the minimum encoding error.
Since the encoding information about the coded depth and the encoding mode may be assigned to a predetermined data unit from among a corresponding coding unit, a prediction unit, and a minimum unit, the image data and encoding information extractor 220 may extract the information about the coded depth and the encoding mode according to the predetermined data units. The predetermined data units to which the same information about the coded depth and the encoding mode is assigned may be inferred to be the data units included in the same maximum coding unit.
The image data decoder 230 restores the current picture by decoding the image data in each maximum coding unit based on the information about the coded depth and the encoding mode according to the maximum coding units. In other words, the image data decoder 230 may decode the encoded image data based on the extracted information about the partition type, the prediction mode, and the transformation unit for each coding unit from among the coding units having the tree structure included in each maximum coding unit. The decoding process may include prediction, including intra prediction and motion compensation, and inverse transformation.
The image data decoder 230 may perform intra prediction or motion compensation according to a partition and a prediction mode of each coding unit, based on the information about the partition type and the prediction mode of the prediction unit of the coding unit according to coded depths.
Also, in order to perform inverse transformation according to maximum coding units, the image data decoder 230 may perform inverse transformation based on a transformation unit for each coding unit, by reading the transformation units having a tree structure including the information about the size of the transformation unit of the coding unit according to coded depths.
The image data decoder 230 may determine at least one coded depth of a current maximum coding unit by using the split information according to depths. If the split information indicates that the image data is no longer split in the current depth, the current depth is a coded depth. Accordingly, the image data decoder 230 may decode the encoded data of at least one coding unit corresponding to each coded depth in the current maximum coding unit, by using the information about the partition type of the prediction unit, the prediction mode, and the size of the transformation unit for each coding unit corresponding to the coded depth, and output the image data of the current maximum coding unit.
In other words, data units containing encoding information including the same split information may be gathered by observing the encoding information set assigned to the predetermined data unit from among the coding unit, the prediction unit, and the minimum unit, and the gathered data units may be regarded by the image data decoder 230 as one data unit to be decoded in the same encoding mode.
Also, the apparatus 200 may restore a current prediction unit or a current partition by using prediction related information of peripheral data units of the current prediction unit or the current partition, through a data unit merging method. Accordingly, the receiver 210 and the image data and encoding information extractor 220 of the apparatus 200 may include the parser/extractor 21 of the apparatus 20, and the image data decoder 230 of the apparatus 200 may include the data unit merging determiner 23 of the apparatus 20.
The image data and encoding information extractor 220 may extract prediction mode information and merging related information from the information about the encoding mode. The image data and encoding information extractor 220 may determine the possibility of extracting and reading the merging related information in the information about the encoding mode, based on a close relationship between the prediction mode information and the merging related information. For example, the image data and encoding information extractor 220 may extract the merging related information based on skip mode information or direct information of a current prediction unit or a current partition of the coding units having the tree structure. Also, merging information and merging index information may be extracted as the merging related information.
The image data decoder 230 of the apparatus 200 may form the coding units having a tree structure based on the information about the encoding mode and the coded depth, and each coding unit from among the coding units having the tree structure includes prediction units and partitions having various prediction modes and various shapes.
The image data decoder 230 of the apparatus 200 may search whether merging may be performed between a current data unit and the various adjacent prediction units or adjacent partitions contacting the upper boundary and the left boundary of a current prediction unit or a current partition, and may determine an object to be merged. The prediction related information of the current prediction unit or the current partition may be determined or inferred by referring to the prediction related information of the merged adjacent prediction unit or partition.
The apparatus 200 may obtain encoding information about at least one coding unit that generates the minimum encoding error when encoding is recursively performed for each maximum coding unit, and may use the information to decode the current picture. That is, the coding units having the tree structure determined to be the optimum coding units in each maximum coding unit may be decoded.
According to the prediction related information and the merging related information set based on the close relationship, data encoded by sharing the prediction related information of adjacent data units having various sizes and shapes based on the coding units according to the tree structure may be accurately decoded by referring to the prediction related information of the adjacent data units.
A method of determining coding units having a tree structure, prediction units, and transformation units, according to an exemplary embodiment, will now be described with reference to Figures 21 through 31.
Figure 21 is a diagram for explaining a concept of coding units, according to an exemplary embodiment.
A size of a coding unit may be expressed as width × height, and may be 64×64, 32×32, 16×16, and 8×8. A coding unit of 64×64 may be split into partitions of 64×64, 64×32, 32×64, or 32×32; a coding unit of 32×32 may be split into partitions of 32×32, 32×16, 16×32, or 16×16; a coding unit of 16×16 may be split into partitions of 16×16, 16×8, 8×16, or 8×8; and a coding unit of 8×8 may be split into partitions of 8×8, 8×4, 4×8, or 4×4.
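All of these partition sizes follow one halving pattern, which a small sketch makes explicit (the function name is assumed for illustration, not taken from the source):

    def partitions(cu_size):
        """Partitions of a 2N x 2N coding unit: 2Nx2N, 2NxN, Nx2N, NxN.
        partitions(64) -> [(64, 64), (64, 32), (32, 64), (32, 32)]"""
        n = cu_size // 2
        return [(cu_size, cu_size), (cu_size, n), (n, cu_size), (n, n)]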
In video data 310, a resolution is 1920×1080, a maximum size of a coding unit is 64, and a maximum depth is 2. In video data 320, a resolution is 1920×1080, a maximum size of a coding unit is 64, and a maximum depth is 3. In video data 330, a resolution is 352×288, a maximum size of a coding unit is 16, and a maximum depth is 1. The maximum depth shown in Figure 21 denotes the total number of splits from a maximum coding unit to a minimum decoding unit.
If a resolution is high or a data amount is large, a maximum size of a coding unit may be large, so as to not only increase encoding efficiency but also accurately reflect characteristics of an image. Accordingly, the maximum size of the coding unit of the video data 310 and 320, which have a higher resolution than the video data 330, may be 64.
Since the maximum depth of the video data 310 is 2, depths are deepened to two layers by splitting the maximum coding unit twice, and thus the coding units 315 of the video data 310 may include a maximum coding unit having a long-axis size of 64, and coding units having long-axis sizes of 32 and 16. Meanwhile, since the maximum depth of the video data 330 is 1, depths are deepened to one layer by splitting the maximum coding unit once, and thus the coding units 335 of the video data 330 may include a maximum coding unit having a long-axis size of 16 and coding units having a long-axis size of 8.
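The long-axis sizes quoted for the coding units 315 and 335 follow directly from the maximum size and the maximum depth, as a one-line sketch shows (function name assumed):

    def deeper_cu_sizes(max_size, max_depth):
        """Coding unit sizes from depth 0 down to the maximum depth.
        deeper_cu_sizes(64, 2) -> [64, 32, 16]   (video data 310)
        deeper_cu_sizes(16, 1) -> [16, 8]        (video data 330)"""
        return [max_size >> d for d in range(max_depth + 1)]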
Since the maximum depth of the video data 320 is 3, depths are deepened to 3 layers by splitting the maximum coding unit three times, and thus the coding units 325 of the video data 320 may include a maximum coding unit having a long-axis size of 64, and coding units having long-axis sizes of 32, 16, and 8. As a depth deepens, detailed information may be precisely expressed.
Figure 22 is a block diagram of an image encoder 400 based on coding units, according to an exemplary embodiment.
The image encoder 400 performs operations of the coding unit determiner 120 of the apparatus 100 to encode image data. In other words, an intra predictor 410 performs intra prediction on coding units in an intra mode, from among a current frame 405, and a motion estimator 420 and a motion compensator 425 perform inter estimation and motion compensation on coding units in an inter mode from among the current frame 405 by using the current frame 405 and a reference frame 495.
Data output from the intra predictor 410, the motion estimator 420, and the motion compensator 425 is output as a quantized transform coefficient through a transformer 430 and a quantizer 440. The quantized transform coefficient is restored as data in a spatial domain through an inverse quantizer 460 and an inverse transformer 470, and the restored data in the spatial domain is output as the reference frame 495 after being post-processed through a deblocking unit 480 and a loop filtering unit 490. The quantized transform coefficient may be output as a bitstream 455 through an entropy encoder 450.
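The data flow just described can be mirrored in a toy sketch. Every stage below is a hypothetical placeholder supplied by the caller; only the wiring (forward path to the bitstream, inverse path to the reference frame) follows the description above:

    def encode_block(block, predict, transform, quantize, entropy,
                     inv_quantize, inv_transform, deblock, loop_filter):
        prediction = predict(block)
        coeffs = quantize(transform(block - prediction))  # transformer 430 / quantizer 440
        bits = entropy(coeffs)                            # entropy encoder 450 -> bitstream 455
        # inverse path: inverse quantizer 460 / inverse transformer 470
        reconstruction = prediction + inv_transform(inv_quantize(coeffs))
        reference = loop_filter(deblock(reconstruction))  # deblocking 480 / loop filtering 490
        return bits, reference                            # reference feeds later prediction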
In order for the image encoder 400 to be applied in the apparatus 100, all elements of the image encoder 400 (that is, the intra predictor 410, the motion estimator 420, the motion compensator 425, the transformer 430, the quantizer 440, the entropy encoder 450, the inverse quantizer 460, the inverse transformer 470, the deblocking unit 480, and the loop filtering unit 490) perform operations based on each coding unit from among the coding units having a tree structure, while considering the maximum depth of each maximum coding unit.
Specifically, the intra predictor 410, the motion estimator 420, and the motion compensator 425 determine the partitions and a prediction mode of each coding unit from among the coding units having the tree structure while considering the maximum size and the maximum depth of a current maximum coding unit, and the transformer 430 determines the size of the transformation unit in each coding unit from among the coding units having the tree structure.
Figure 23 is a block diagram of an image decoder 500 based on coding units, according to an exemplary embodiment.
A parser 510 parses, from a bitstream 505, encoded image data to be decoded and information about encoding required for the decoding. The encoded image data is output as inverse-quantized data through an entropy decoder 520 and an inverse quantizer 530, and the inverse-quantized data is restored to image data in a spatial domain through an inverse transformer 540.
An intra predictor 550 performs intra prediction on coding units in an intra mode with respect to the image data in the spatial domain, and a motion compensator 560 performs motion compensation on coding units in an inter mode by using a reference frame 585.
The image data in the spatial domain, which passed through the intra predictor 550 and the motion compensator 560, may be output as a restored frame 595 after being post-processed through a deblocking unit 570 and a loop filtering unit 580. Also, the image data that is post-processed through the deblocking unit 570 and the loop filtering unit 580 may be output as the reference frame 585.
In order to decode the image data in the image data decoder 230 of the apparatus 200, the image decoder 500 may perform operations that are performed after the parser 510.
In order for the image decoder 500 to be applied in the apparatus 200, all elements of the image decoder 500 (that is, the parser 510, the entropy decoder 520, the inverse quantizer 530, the inverse transformer 540, the intra predictor 550, the motion compensator 560, the deblocking unit 570, and the loop filtering unit 580) perform operations based on the coding units having a tree structure for each maximum coding unit.
Specifically, the intra predictor 550 and the motion compensator 560 perform operations based on the partitions and a prediction mode of each of the coding units having the tree structure, and the inverse transformer 540 performs operations based on the size of a transformation unit of each coding unit.
Figure 24 is a diagram illustrating deeper coding units according to depths, and partitions, according to an exemplary embodiment.
The apparatus 100 and the apparatus 200 use hierarchical coding units so as to consider characteristics of an image. A maximum height, a maximum width, and a maximum depth of coding units may be adaptively determined according to the characteristics of the image, or may be differently set by a user. Sizes of deeper coding units according to depths may be determined according to the predetermined maximum size of the coding unit.
In a hierarchical structure 600 of coding units, according to an exemplary embodiment, the maximum height and the maximum width of the coding units are each 64, and the maximum depth is 3. In this case, the maximum depth denotes the total number of splits from a maximum coding unit to a minimum coding unit. Since a depth deepens along a vertical axis of the hierarchical structure 600, a height and a width of the deeper coding unit are each split. Also, a prediction unit and partitions, which are the basis for prediction encoding of each deeper coding unit, are shown along a horizontal axis of the hierarchical structure 600.
In other words, a coding unit 610 is a maximum coding unit in the hierarchical structure 600, wherein a depth is 0 and a size (that is, a height by a width) is 64×64. The depth deepens along the vertical axis: a coding unit 620 having a size of 32×32 and a depth of 1, a coding unit 630 having a size of 16×16 and a depth of 2, and a coding unit 640 having a size of 8×8 and a depth of 3 exist. The coding unit 640 having the size of 8×8 and the depth of 3 is a minimum coding unit.
The prediction unit and the partitions of a coding unit are arranged along the horizontal axis according to each depth. In other words, if the coding unit 610 having the size of 64×64 and the depth of 0 is a prediction unit, the prediction unit may be split into partitions included in the coding unit 610, that is, a partition 610 having a size of 64×64, partitions 612 having a size of 64×32, partitions 614 having a size of 32×64, or partitions 616 having a size of 32×32.
Similarly, a prediction unit of the coding unit 620 having the size of 32×32 and the depth of 1 may be split into partitions included in the coding unit 620, that is, a partition 620 having a size of 32×32, partitions 622 having a size of 32×16, partitions 624 having a size of 16×32, and partitions 626 having a size of 16×16.
Similarly, a prediction unit of the coding unit 630 having the size of 16×16 and the depth of 2 may be split into partitions included in the coding unit 630, that is, a partition having a size of 16×16 included in the coding unit 630, partitions 632 having a size of 16×8, partitions 634 having a size of 8×16, and partitions 636 having a size of 8×8.
Similarly, a prediction unit of the coding unit 640 having the size of 8×8 and the depth of 3 may be split into partitions included in the coding unit 640, that is, a partition having a size of 8×8 included in the coding unit 640, partitions 642 having a size of 8×4, partitions 644 having a size of 4×8, and partitions 646 having a size of 4×4.
In order to determine the at least one coded depth of the coding units constituting the maximum coding unit 610, the coding unit determiner 120 of the apparatus 100 performs encoding for the coding units corresponding to each depth included in the maximum coding unit 610.
The number of deeper coding units according to depths that include data of the same range and the same size increases as the depth deepens. For example, four coding units corresponding to a depth of 2 are required to cover the data included in one coding unit corresponding to a depth of 1. Accordingly, in order to compare encoding results of the same data according to depths, the coding unit corresponding to the depth of 1 and the four coding units corresponding to the depth of 2 are each encoded.
In order to perform encoding for a current depth from among the depths, a minimum encoding error may be selected for the current depth by performing encoding for each prediction unit in the coding units corresponding to the current depth, along the horizontal axis of the hierarchical structure 600. Alternatively, the minimum encoding error may be searched for by comparing minimum encoding errors according to depths, by performing encoding for each depth as the depth deepens along the vertical axis of the hierarchical structure 600. A depth and a partition having the minimum encoding error in the coding unit 610 may be selected as the coded depth and the partition type of the coding unit 610.
Figure 25 is a diagram for explaining a relationship between a coding unit 710 and transformation units 720, according to an exemplary embodiment.
The apparatus 100 or the apparatus 200 encodes or decodes an image according to coding units having sizes smaller than or equal to a maximum coding unit, for each maximum coding unit. Sizes of transformation units for transformation during encoding may be selected based on data units that are not larger than the corresponding coding unit.
For example, in the apparatus 100 or the apparatus 200, if a size of the coding unit 710 is 64×64, transformation may be performed by using transformation units 720 having a size of 32×32.
Also, data of the coding unit 710 having the size of 64×64 may be encoded by performing the transformation on each of the transformation units having sizes of 32×32, 16×16, 8×8, and 4×4, which are smaller than 64×64, and then a transformation unit having the minimum encoding error may be selected.
Figure 26 is a diagram for explaining encoding information of coding units corresponding to a coded depth, according to an exemplary embodiment.
The output unit 130 of the apparatus 100 may encode and transmit information 800 about a partition type, information 810 about a prediction mode, and information 820 about a size of a transformation unit, for each coding unit corresponding to a coded depth, as the information about the encoding mode.
The information 800 indicates information about a shape of a partition obtained by splitting a prediction unit of a current coding unit, wherein the partition is a data unit for prediction-encoding the current coding unit. For example, a current coding unit CU_0 having a size of 2N×2N may be split into any one of a partition 802 having a size of 2N×2N, a partition 804 having a size of 2N×N, a partition 806 having a size of N×2N, and a partition 808 having a size of N×N. Here, the information 800 about the partition type is set to indicate one of the partition 804 having the size of 2N×N, the partition 806 having the size of N×2N, and the partition 808 having the size of N×N.
The information 810 indicates a prediction mode of each partition. For example, the information 810 may indicate a mode of prediction encoding performed on the partition indicated by the information 800, that is, an intra mode 812, an inter mode 814, or a skip mode 816.
The information 820 indicates a transformation unit to be based on when transformation is performed on the current coding unit. For example, the transformation unit may be a first intra transformation unit 822, a second intra transformation unit 824, a first inter transformation unit 826, or a second inter transformation unit 828.
The image data and encoding information extractor 220 of the apparatus 200 may extract and use the information 800, 810, and 820 for decoding.
Although not shown in Figure 26, the information about the encoding mode may include merging related information, and the merging related information may be set based on the information 810 about the prediction mode, such as an inter mode, an intra mode, a skip mode, or a direct mode. For example, if the information 810 about the prediction mode is information about a skip mode, the merging related information may be selectively set. Alternatively, the merging related information may be set only when the information 810 about the prediction mode is information about an inter mode, rather than a skip mode or a direct mode.
Figure 27 is a diagram of deeper coding units according to depths, according to an exemplary embodiment.
Split information may be used to indicate a change of a depth. The split information indicates whether a coding unit of a current depth is split into coding units of a lower depth.
A prediction unit 910 for prediction-encoding a coding unit 900 having a depth of 0 and a size of 2N_0×2N_0 may include partitions of the following partition types: a partition type 912 having a size of 2N_0×2N_0, a partition type 914 having a size of 2N_0×N_0, a partition type 916 having a size of N_0×2N_0, and a partition type 918 having a size of N_0×N_0. Figure 27 only illustrates the partition types 912 through 918, which are obtained by symmetrically splitting the prediction unit 910, but the partition type is not limited thereto, and the partitions of the prediction unit 910 may include asymmetric partitions, partitions having a predetermined shape, and partitions having a geometric shape.
Prediction encoding is repeatedly performed on one partition having a size of 2N_0×2N_0, two partitions having a size of 2N_0×N_0, two partitions having a size of N_0×2N_0, and four partitions having a size of N_0×N_0, according to each partition type. The prediction encoding in an intra mode and an inter mode may be performed on the partitions having the sizes of 2N_0×2N_0, N_0×2N_0, 2N_0×N_0, and N_0×N_0. The prediction encoding in a skip mode is performed only on the partition having the size of 2N_0×2N_0.
Errors of encoding including the prediction encoding in the partition types 912 through 918 are compared, and the minimum encoding error is determined from among the partition types. If an encoding error is smallest in one of the partition types 912 through 916, the prediction unit 910 may not be split into a lower depth.
If the encoding error is smallest in the partition type 918, a depth is changed from 0 to 1 to split the partition type 918 in operation 920, and encoding is repeatedly performed on coding units 930 having a depth of 1 and a size of N_0×N_0 to search for a minimum encoding error.
A prediction unit 940 for prediction-encoding the coding unit 930 having a depth of 1 and a size of 2N_1×2N_1 (=N_0×N_0) may include partitions of the following partition types: a partition type 942 having a size of 2N_1×2N_1, a partition type 944 having a size of 2N_1×N_1, a partition type 946 having a size of N_1×2N_1, and a partition type 948 having a size of N_1×N_1.
If the encoding error is smallest in the partition type 948, a depth is changed from 1 to 2 to split the partition type 948 in operation 950, and encoding is repeatedly performed on coding units 960 having a depth of 2 and a size of N_2×N_2 to search for a minimum encoding error.
When a maximum depth is d-1, the split operation according to depths may be performed until a depth becomes d-1, and split information may be encoded until a depth is one of 0 to d-2. In other words, when encoding is performed up to when the depth is d-1 after a coding unit corresponding to a depth of d-2 is split in operation 970, a prediction unit 990 for prediction-encoding a coding unit 980 having a depth of d-1 and a size of 2N_(d-1)×2N_(d-1) may include partitions of the following partition types: a partition type 992 having a size of 2N_(d-1)×2N_(d-1), a partition type 994 having a size of 2N_(d-1)×N_(d-1), a partition type 996 having a size of N_(d-1)×2N_(d-1), and a partition type 998 having a size of N_(d-1)×N_(d-1).
Prediction encoding may be repeatedly performed on one partition having a size of 2N_(d-1)×2N_(d-1), two partitions having a size of 2N_(d-1)×N_(d-1), two partitions having a size of N_(d-1)×2N_(d-1), and four partitions having a size of N_(d-1)×N_(d-1) from among the partition types 992 through 998, to search for a partition type having a minimum encoding error.
Even when the partition type 998 has the minimum encoding error, since the maximum depth is d-1, a coding unit CU_(d-1) having a depth of d-1 is no longer split to a lower depth, the coded depth for the coding units constituting a current maximum coding unit 900 is determined to be d-1, and the partition type of the coding unit 900 may be determined to be N_(d-1)×N_(d-1). Also, since the maximum depth is d-1 and the minimum coding unit 980 having a lowermost depth of d-1 is no longer split to a lower depth, split information for the coding unit 980 is not set.
A data unit 999 may be a "minimum unit" for the current maximum coding unit. The minimum unit may be a rectangular data unit obtained by splitting the minimum coding unit 980 into 4. By performing the encoding repeatedly, the apparatus 100 may select a depth having the minimum encoding error by comparing encoding errors according to depths of the coding unit 900 to determine a coded depth, and set the corresponding partition type and prediction mode as the encoding mode of the coded depth.
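Under this definition, the side of the minimum unit follows from the maximum coding unit size and the maximum depth; as a one-line sketch (assuming power-of-two sizes, function name not from the source):

    def minimum_unit_size(max_cu_size, max_depth):
        """Side of the minimum unit: the minimum coding unit (max_cu_size
        split max_depth times) split once more into 4, i.e. halved again.
        minimum_unit_size(64, 3) -> 4."""
        return (max_cu_size >> max_depth) // 2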
As such, the minimum encoding errors according to depths are compared in all of the depths of 1 through d, and a depth having the minimum encoding error may be determined as the coded depth. The coded depth, the partition type of the prediction unit, and the prediction mode may be encoded and transmitted as the information about the encoding mode. Also, since a coding unit is split from the depth of 0 to the coded depth, only the split information of the coded depth is set to 0, and the split information of the depths other than the coded depth is set to 1.
The image data and encoding information extractor 220 of the apparatus 200 may extract and use the information about the coded depth and the prediction unit of the coding unit 900, so as to decode the partition 912. The apparatus 200 may determine a depth in which the split information is 0 as the coded depth by using the split information according to depths, and use the information about the encoding mode of the corresponding depth for decoding.
Figures 28 through 30 are diagrams for explaining a relationship between coding units 1010, prediction units 1060, and transformation units 1070, according to an exemplary embodiment.
The coding units 1010 are coding units having a tree structure, corresponding to the coded depths determined by the apparatus 100, in a maximum coding unit. The prediction units 1060 are partitions of the prediction units of each of the coding units 1010, and the transformation units 1070 are the transformation units of each of the coding units 1010.
When a depth of the maximum coding unit in the coding units 1010 is 0, depths of coding units 1012 and 1054 are 1, depths of coding units 1014, 1016, 1018, 1028, 1050, and 1052 are 2, depths of coding units 1020, 1022, 1024, 1026, 1030, 1032, and 1048 are 3, and depths of coding units 1040, 1042, 1044, and 1046 are 4.
In the prediction units 1060, some coding units 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 are split into partitions for prediction encoding. In other words, the partition types in the coding units 1014, 1022, 1050, and 1054 have a size of 2N×N, the partition types in the coding units 1016, 1048, and 1052 have a size of N×2N, and the partition type of the coding unit 1032 has a size of N×N. The prediction units and the partitions of the coding units 1010 are smaller than or equal to each coding unit.
Transformation or inverse transformation is performed on the image data of the coding unit 1052 in the transformation units 1070 in a data unit that is smaller than the coding unit 1052. Also, the coding units 1014, 1016, 1022, 1032, 1048, 1050, and 1052 in the transformation units 1070 are different from those in the prediction units 1060 in terms of sizes and shapes. In other words, the apparatus 100 and the apparatus 200 may perform intra prediction, motion estimation, motion compensation, transformation, and inverse transformation individually on a data unit in the same coding unit.
Accordingly, encoding is recursively performed on each of the coding units having a hierarchical structure in each region of a maximum coding unit to determine an optimum coding unit, so that coding units having a recursive tree structure may be obtained. The encoding information may include split information about a coding unit, information about a partition type, information about a prediction mode, and information about a size of a transformation unit. Table 2 shows the encoding information that may be set by the apparatus 100 and the apparatus 200.
Table 2
The output unit 130 of the apparatus 100 may output the encoding information about the coding units having a tree structure, and the image data and encoding information extractor 220 of the apparatus 200 may extract the encoding information about the coding units having a tree structure from a received bitstream.
Split information indicates whether a current coding unit is split into coding units of a lower depth. If the split information of a current depth d is 0, a depth in which the current coding unit is no longer split into a lower depth is a coded depth, and thus information about a partition type, a prediction mode, and a size of a transformation unit may be defined for the coded depth. If the current coding unit is further split according to the split information, encoding is independently performed on the four split coding units of the lower depth.
A prediction mode may be one of an intra mode, an inter mode, and a skip mode. The intra mode and the inter mode may be defined in all partition types, and the skip mode is defined only in the partition type having a size of 2N×2N.
The information about the partition type may indicate symmetric partition types having sizes of 2N×2N, 2N×N, N×2N, and N×N, which are obtained by symmetrically splitting a height or a width of a prediction unit, and asymmetric partition types having sizes of 2N×nU, 2N×nD, nL×2N, and nR×2N, which are obtained by asymmetrically splitting the height or the width of the prediction unit. The asymmetric partition types having the sizes of 2N×nU and 2N×nD may be respectively obtained by splitting the height of the prediction unit in 1:3 and 3:1, and the asymmetric partition types having the sizes of nL×2N and nR×2N may be respectively obtained by splitting the width of the prediction unit in 1:3 and 3:1.
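The 1:3 and 3:1 splits translate into the following partition dimensions, as a small sketch shows (assuming 2N is divisible by 4; the function name is for illustration only):

    def asymmetric_partitions(cu_size):
        """Partition dimensions of the asymmetric types for a 2N x 2N unit.
        asymmetric_partitions(32)['2NxnU'] -> [(32, 8), (32, 24)]"""
        q = cu_size // 4
        return {
            "2NxnU": [(cu_size, q), (cu_size, cu_size - q)],  # height split 1:3
            "2NxnD": [(cu_size, cu_size - q), (cu_size, q)],  # height split 3:1
            "nLx2N": [(q, cu_size), (cu_size - q, cu_size)],  # width split 1:3
            "nRx2N": [(cu_size - q, cu_size), (q, cu_size)],  # width split 3:1
        }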
The size of the transformation unit may be set to be two types in the intra mode and two types in the inter mode. In other words, if the split information of the transformation unit is 0, the size of the transformation unit may be 2N×2N, which is the size of the current coding unit. If the split information of the transformation unit is 1, the transformation units may be obtained by splitting the current coding unit. Also, if the partition type of the current coding unit having the size of 2N×2N is a symmetric partition type, the size of the transformation unit may be N×N, and if the partition type of the current coding unit is an asymmetric partition type, the size of the transformation unit may be N/2×N/2.
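This rule reduces to a small function (a sketch; split denotes the split information of the transformation unit as used in the text):

    def transformation_unit_size(cu_size, split, symmetric_partition):
        """TU size for a 2N x 2N coding unit per the rule above:
        split == 0 -> 2N x 2N; split == 1 -> N x N for symmetric
        partition types, N/2 x N/2 for asymmetric ones."""
        if split == 0:
            return cu_size
        return cu_size // 2 if symmetric_partition else cu_size // 4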
The encoding information about the coding units having a tree structure may be assigned to at least one of a coding unit corresponding to a coded depth, a prediction unit, and a minimum unit. The coding unit corresponding to the coded depth may include at least one of a prediction unit and a minimum unit containing the same encoding information.
Accordingly, it may be determined whether adjacent data units are included in the same coding unit corresponding to the coded depth by comparing the encoding information of the adjacent data units. Also, a corresponding coding unit corresponding to a coded depth may be determined by using the encoding information of a data unit, so that the distribution of coded depths in a maximum coding unit may be determined.
Accordingly, if a current coding unit is predicted based on the encoding information of adjacent data units, the encoding information of data units in the deeper coding units adjacent to the current coding unit may be directly referred to and used.
Alternatively, if a current coding unit is predicted based on the encoding information of adjacent data units, data units adjacent to the current coding unit may be searched for by using the encoding information of the data units, and the searched adjacent coding units may be referred to for predicting the current coding unit.
Figure 31 is a diagram for explaining a relationship between a coding unit, a prediction unit or a partition, and a transformation unit, according to the encoding mode information of Table 2.
A maximum coding unit 1300 includes coding units 1302, 1304, 1306, 1312, 1314, 1316, and 1318 of coded depths. Here, since the coding unit 1318 is a coding unit of a coded depth, split information may be set to 0. Information about the partition type of the coding unit 1318 having a size of 2N×2N may be set to be one of the following partition types: a partition type 1322 having a size of 2N×2N, a partition type 1324 having a size of 2N×N, a partition type 1326 having a size of N×2N, a partition type 1328 having a size of N×N, a partition type 1332 having a size of 2N×nU, a partition type 1334 having a size of 2N×nD, a partition type 1336 having a size of nL×2N, and a partition type 1338 having a size of nR×2N.
The split information (a TU size flag) of a transformation unit is a type of transformation index, and the size of the transformation unit corresponding to the transformation index may be changed according to the prediction unit type or the partition type of the coding unit.
For example, when the partition type is set to be symmetric (that is, the partition type 1322, 1324, 1326, or 1328), a transformation unit 1342 having a size of 2N×2N is set if the TU size flag is 0, and a transformation unit 1344 having a size of N×N is set if the TU size flag is 1.
On the other hand, when the partition type is set to be asymmetric (that is, the partition type 1332, 1334, 1336, or 1338), a transformation unit 1352 having a size of 2N×2N is set if the TU size flag is 0, and a transformation unit 1354 having a size of N/2×N/2 is set if the TU size flag is 1.
Referring to Figure 31, the TU size flag is a flag having a value of 0 or 1, but the TU size flag is not limited to 1 bit, and the transformation unit may be hierarchically split having a tree structure while the TU size flag increases from 0. The TU size flag may be used as an example of a transformation index.
In this case, the size of the transformation unit that has actually been used may be expressed by using the TU size flag of the transformation unit together with the maximum size and the minimum size of the transformation unit. The apparatus 100 may encode maximum transformation unit size information, minimum transformation unit size information, and a maximum TU size flag. The result of encoding the maximum transformation unit size information, the minimum transformation unit size information, and the maximum TU size flag may be inserted into an SPS. The apparatus 200 may decode a video by using the maximum transformation unit size information, the minimum transformation unit size information, and the maximum TU size flag.
For example, if the size of a current coding unit is 64×64 and the maximum transformation unit size is 32×32, the size of the transformation unit may be 32×32 when the TU size flag is 0, may be 16×16 when the TU size flag is 1, and may be 8×8 when the TU size flag is 2.
As another example, if the size of the current coding unit is 32×32 and the minimum transformation unit size is 32×32, the size of the transformation unit may be 32×32 when the TU size flag is 0. Here, since the size of the transformation unit cannot be smaller than 32×32, the TU size flag cannot be set to a value other than 0.
As another example, if the size of the current coding unit is 64×64 and the maximum TU size flag is 1, the TU size flag may be 0 or 1. Here, the TU size flag cannot be set to a value other than 0 or 1.
Thus, if it is defined that the maximum TU size flag is "MaxTransformSizeIndex", the minimum transformation unit size is "MinTransformSize", and the transformation unit size when the TU size flag is 0 is "RootTuSize", then the current minimum transformation unit size "CurrMinTuSize" that can be determined in a current coding unit may be defined by equation (1):
CurrMinTuSize = max(MinTransformSize, RootTuSize / (2^MaxTransformSizeIndex)) ... (1)
Compared to the current minimum transformation unit size "CurrMinTuSize" that can be determined in the current coding unit, the transformation unit size "RootTuSize" when the TU size flag is 0 may denote a maximum transformation unit size that can be selected in the system. In equation (1), "RootTuSize/(2^MaxTransformSizeIndex)" denotes a transformation unit size obtained when the transformation unit size "RootTuSize", when the TU size flag is 0, is split a number of times corresponding to the maximum TU size flag, and "MinTransformSize" denotes the minimum transformation size. Thus, the larger value from among "RootTuSize/(2^MaxTransformSizeIndex)" and "MinTransformSize" is the current minimum transformation unit size "CurrMinTuSize" that can be determined in the current coding unit.
The maximum transformation unit size "RootTuSize" may vary according to the type of a prediction mode.
For example, if a current prediction mode is an inter mode, then "RootTuSize" may be determined by using equation (2) below. In equation (2), "MaxTransformSize" denotes the maximum transformation unit size, and "PUSize" denotes the current prediction unit size.
RootTuSize = min(MaxTransformSize, PUSize) ... (2)
That is, if the current prediction mode is the inter mode, the transformation unit size "RootTuSize" when the TU size flag is 0 may be the smaller value from among the maximum transformation unit size and the current prediction unit size.
If a prediction mode of a current partition unit is an intra mode, "RootTuSize" may be determined by using equation (3) below. In equation (3), "PartitionSize" denotes the size of the current partition unit.
RootTuSize = min(MaxTransformSize, PartitionSize) ... (3)
That is, if the current prediction mode is the intra mode, the transformation unit size "RootTuSize" when the TU size flag is 0 may be the smaller value from among the maximum transformation unit size and the size of the current partition unit.
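Equations (1) through (3) combine into a short computation, sketched below under the assumption that all sizes are powers of two, so that division by 2^MaxTransformSizeIndex is a right shift (function and parameter names are illustrative, not from the source):

    def root_tu_size(max_transform_size, pu_size, partition_size, intra_mode):
        # Equations (2) and (3): RootTuSize for inter and intra prediction modes.
        return min(max_transform_size,
                   partition_size if intra_mode else pu_size)

    def curr_min_tu_size(root_size, max_tu_size_index, min_transform_size):
        # Equation (1): RootTuSize split MaxTransformSizeIndex times,
        # floored at the minimum transformation unit size.
        return max(min_transform_size, root_size >> max_tu_size_index)

For example, curr_min_tu_size(root_tu_size(32, 16, 16, False), 1, 4) evaluates to 8: the inter-mode RootTuSize is min(32, 16) = 16, one split halves it to 8, and 8 is above the minimum of 4.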
However, the current maximum transformation unit size "RootTuSize" that varies according to the type of the prediction mode in a partition unit is merely an example, and the inventive concept is not limited thereto.
Figure 32 is a flowchart illustrating a method of encoding a video by using data unit merging based on coding units having a tree structure, according to an exemplary embodiment.
In operation 1210, a current picture of the video is split into maximum coding units. In operation 1220, for each maximum coding unit of the current picture, image data may be encoded as coding units according to depths, a depth generating a minimum encoding error may be selected and determined according to coded depths, and coding units having a tree structure comprised of the coding units of the depths determined to be coded depths may be determined. The image data of each maximum coding unit encoded according to the determined coding units may be output.
In operation 1230, whether data unit merging between adjacent data units is performed on the prediction units or partitions of the coding units having the tree structure may be determined. Prediction related information may be shared between merged data units. The necessity of data unit merging for sharing prediction related information with an adjacent data unit may be analyzed even in the case where the prediction mode of a current prediction unit or a current partition of the coding units having the tree structure is a skip mode or a direct mode.
In operation 1230, the information about the encoding mode of the coding units having the tree structure may be encoded to include merging related information, wherein the merging related information includes merging information and merging index information. The image data of the maximum coding unit, encoded based on the coding units having the tree structure, and the information about the encoding mode may be output in a bitstream.
Figure 33 is a flowchart illustrating a method of decoding a video by using data unit merging based on coding units having a tree structure, according to an exemplary embodiment.
In operation 1310, a bitstream of an encoded video is received and parsed. In operation 1320, encoded image data of a current picture, encoded for each coding unit according to the coding units having the tree structure according to maximum coding units, is extracted from the parsed bitstream, and information about a coded depth and an encoding mode is extracted. Merging related information may be extracted from the information about the coded depth and the encoding mode. The possibility of extracting and reading the merging related information may be determined based on prediction mode information. For example, the merging related information may be extracted based on skip mode information or direct information of a current prediction unit or a current partition of the coding units having the tree structure. Also, merging information and merging index information may be extracted as the merging related information.
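This parsing dependency, which also underlies claim 1 below, can be sketched as follows. The reader object and its read_* calls are hypothetical bitstream-reader stand-ins, and num_partitions is a hypothetical helper; only the ordering of the reads follows the description above.

    def num_partitions(partition_type):
        # hypothetical mapping from the partition types named in the text
        return {"2Nx2N": 1, "2NxN": 2, "Nx2N": 2, "NxN": 4}.get(partition_type, 1)

    def parse_prediction_info(reader):
        """Read the skip flag first; in skip mode a merging index follows
        directly, otherwise partition type information is read and, per
        partition, merging information gates the merging index."""
        if reader.read_flag():                    # skip flag: skip mode
            return {"mode": "skip", "merge_index": reader.read_index()}
        info = {"mode": "inter",
                "partition_type": reader.read_partition_type(),
                "partitions": []}
        for _ in range(num_partitions(info["partition_type"])):
            if reader.read_flag():                # merging information for the partition
                info["partitions"].append({"merge_index": reader.read_index()})
            else:
                info["partitions"].append({"merge_index": None})
        return info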
In operation 1330, the information about the partition type, the prediction mode, and the transformation unit of the prediction units of the coding units having the tree structure may be read based on the information about the coded depth and the encoding mode of the maximum coding unit, and may be used to decode the image data of the maximum coding unit.
Also, an object to be merged may be searched for from among a plurality of adjacent data units neighboring a current data unit, and data unit merging may be determined based on the merging related information. Motion estimation and compensation of a current prediction unit or a current partition may be performed by inferring the prediction related information of the current prediction unit or the current partition through sharing or referring to the prediction related information of a merged adjacent prediction unit or partition. Through decoding including motion estimation and compensation according to the coding units having the tree structure, the image data of the maximum coding unit may be restored and the current picture may be restored.
In the apparatus 100 and the apparatus 200, since the possibility of performing data unit merging for sharing mutual prediction related information may be examined for prediction modes and partitions of various prediction modes, various sizes, and various shapes according to a tree structure, merging is performed between adjacent data units at various positions, thereby making it possible to share prediction related information. Accordingly, since redundant data may be removed by using peripheral information of a wider range, the efficiency of encoding image data may be improved.
Also, since the prediction mode information and the merging related information are hierarchically and continuously encoded and decoded in consideration of the close relationship between the possibility of merging and the various prediction modes, the efficiency of encoding the information may be improved.
One or more exemplary embodiments may be written as computer programs and may be implemented in general-use digital computers that execute the programs using a computer-readable recording medium. Examples of the computer-readable recording medium include magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.) and optical recording media (e.g., CD-ROMs or DVDs).
While exemplary embodiments have been particularly shown and described above, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the inventive concept as defined by the following claims. The exemplary embodiments should be considered in a descriptive sense only and not for purposes of limitation. Therefore, the scope of the inventive concept is defined not by the detailed description of the exemplary embodiments but by the following claims, and all differences within the scope will be construed as being included in the present invention.

Claims (1)

1. An apparatus for decoding a video, the apparatus comprising:
a parser which obtains, from a bitstream, a skip flag indicating whether an encoding mode for a coding unit is a skip mode; when the skip flag indicates that the encoding mode for the coding unit is the skip mode, obtains, from the bitstream, a merging index indicating a block among a candidate block group for the skip mode; when the skip flag indicates that the encoding mode for the coding unit is not the skip mode, obtains, from the bitstream, information about a partition type and merging information for a partition; when the merging information for the partition indicates that prediction information of the partition among at least one partition is obtained from prediction information of an adjacent data unit, obtains a merging index of a current partition from the bitstream; and determines motion information of the current partition by using motion information of a block indicated by the merging index of the current partition, wherein the merging index of the current partition indicates a block among a candidate block group of the current partition; and
a data decoder which performs inter prediction by using the motion information on the coding unit,
wherein:
when the prediction information of the partition among the at least one partition is obtained from the prediction information of the adjacent data unit, the candidate block group of the current partition includes at least one block among neighboring blocks of the current partition, and
when the information about the partition type is obtained from the bitstream, the at least one partition including the partition is determined from the coding unit based on the information about the partition type.
CN201510203206.9A 2010-07-09 2011-07-07 Method and apparatus to Video coding and the method and apparatus to video decoding Active CN104869404B (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US36282910P 2010-07-09 2010-07-09
US61/362,829 2010-07-09
US36795210P 2010-07-27 2010-07-27
US61/367,952 2010-07-27
KR20110006486A KR101484281B1 (en) 2010-07-09 2011-01-21 Method and apparatus for video encoding using block merging, method and apparatus for video decoding using block merging
KR10-2011-0006486 2011-01-21
CN201180043660.2A CN103155563B (en) 2010-07-09 2011-07-07 By using the method and apparatus that video is encoded by merged block and method and apparatus video being decoded by use merged block

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201180043660.2A Division CN103155563B (en) 2010-07-09 2011-07-07 By using the method and apparatus that video is encoded by merged block and method and apparatus video being decoded by use merged block

Publications (2)

Publication Number Publication Date
CN104869404A CN104869404A (en) 2015-08-26
CN104869404B true CN104869404B (en) 2017-06-23

Family

ID=45611872

Family Applications (6)

Application Number Title Priority Date Filing Date
CN201510206248.8A Active CN104869408B (en) 2010-07-09 2011-07-07 Method for decoding video
CN201510204652.1A Active CN104869405B (en) 2010-07-09 2011-07-07 Method and apparatus to Video coding and to the decoded method and apparatus of video
CN201510203206.9A Active CN104869404B (en) 2010-07-09 2011-07-07 Method and apparatus to Video coding and the method and apparatus to video decoding
CN201180043660.2A Active CN103155563B (en) 2010-07-09 2011-07-07 By using the method and apparatus that video is encoded by merged block and method and apparatus video being decoded by use merged block
CN201510204900.2A Active CN104869407B (en) 2010-07-09 2011-07-07 Method For Decoding Video
CN201510204802.9A Active CN104869406B (en) 2010-07-09 2011-07-07 Method and apparatus to Video coding and the method and apparatus to video decoding

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN201510206248.8A Active CN104869408B (en) 2010-07-09 2011-07-07 Method for decoding video
CN201510204652.1A Active CN104869405B (en) 2010-07-09 2011-07-07 Method and apparatus to Video coding and to the decoded method and apparatus of video

Family Applications After (3)

Application Number Title Priority Date Filing Date
CN201180043660.2A Active CN103155563B (en) 2010-07-09 2011-07-07 By using the method and apparatus that video is encoded by merged block and method and apparatus video being decoded by use merged block
CN201510204900.2A Active CN104869407B (en) 2010-07-09 2011-07-07 Method For Decoding Video
CN201510204802.9A Active CN104869406B (en) 2010-07-09 2011-07-07 Method and apparatus to Video coding and the method and apparatus to video decoding

Country Status (24)

Country Link
EP (2) EP2924996B1 (en)
JP (5) JP5738991B2 (en)
KR (7) KR101484281B1 (en)
CN (6) CN104869408B (en)
AU (1) AU2011274722B2 (en)
BR (5) BR122020014007B1 (en)
CA (5) CA2886724C (en)
CY (1) CY1118484T1 (en)
DK (1) DK2580912T3 (en)
ES (5) ES2614203T3 (en)
HR (5) HRP20170129T1 (en)
HU (5) HUE033265T2 (en)
LT (5) LT2858366T (en)
MX (1) MX2013000345A (en)
MY (5) MY156223A (en)
PH (4) PH12015500917B1 (en)
PL (4) PL2858366T3 (en)
PT (5) PT2924995T (en)
RS (4) RS57674B1 (en)
RU (6) RU2013105501A (en)
SG (6) SG10201503383PA (en)
SI (4) SI2897365T1 (en)
TR (1) TR201813132T4 (en)
ZA (1) ZA201300578B (en)

Families Citing this family (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2756419C (en) 2009-03-23 2016-05-17 NTT Docomo, Inc. Image predictive encoding device, image predictive encoding method, image predictive encoding program, image predictive decoding device, image predictive decoding method, and image predictive decoding program
DK2924995T3 (en) 2010-07-09 2018-10-01 METHOD FOR VIDEO DECODING BY USING BLOCK MERGING
KR101484281B1 (en) * 2010-07-09 2015-01-21 Samsung Electronics Co., Ltd. Method and apparatus for video encoding using block merging, method and apparatus for video decoding using block merging
CA3011221C (en) 2010-07-20 2019-09-03 NTT Docomo, Inc. Video prediction encoding and decoding for partitioned regions while determining whether or not to use motion information from neighboring regions
EP4270957A3 (en) * 2010-11-04 2024-06-19 GE Video Compression, LLC Picture coding supporting block merging and skip mode
US11284081B2 (en) 2010-11-25 2022-03-22 LG Electronics Inc. Method for signaling image information, and method for decoding image information using same
PL4156684T3 (en) 2010-11-25 2024-06-10 LG Electronics Inc. Video decoding method, video encoding method, storage medium
JP2012209911A (en) * 2010-12-20 2012-10-25 Sony Corp Image processor and image processing method
US10171813B2 (en) 2011-02-24 2019-01-01 Qualcomm Incorporated Hierarchy of motion prediction video blocks
KR101210892B1 (en) * 2011-08-29 2012-12-11 Ibex PT Holdings Co., Ltd. Method for generating prediction block in AMVP mode
WO2013109124A1 (en) * 2012-01-19 2013-07-25 Samsung Electronics Co., Ltd. Method and device for encoding video to limit bidirectional prediction and block merging, and method and device for decoding video
AU2012200345B2 (en) * 2012-01-20 2014-05-01 Canon Kabushiki Kaisha Method, apparatus and system for encoding and decoding the significance map for residual coefficients of a transform unit
KR101895389B1 (en) * 2012-11-27 2018-10-18 Industry-Academic Cooperation Foundation, Yonsei University Method and apparatus for image encoding
CN104104964B (en) 2013-04-09 2019-03-12 LG Electronics (China) R&D Center Co., Ltd. Depth image inter-frame encoding and decoding method, encoder and decoder
US9497485B2 (en) 2013-04-12 2016-11-15 Intel Corporation Coding unit size dependent simplified depth coding for 3D video coding
RU2654129C2 (en) 2013-10-14 2018-05-16 Microsoft Technology Licensing, LLC Features of intra block copy prediction mode for video and image coding and decoding
CN105659602B (en) 2013-10-14 2019-10-08 Microsoft Technology Licensing, LLC Encoder-side options for intra block copy prediction mode in video and image coding
WO2015054812A1 (en) 2013-10-14 2015-04-23 Microsoft Technology Licensing, LLC Features of base color index map mode for video and image coding and decoding
US10469863B2 (en) 2014-01-03 2019-11-05 Microsoft Technology Licensing, LLC Block vector prediction in video and image coding/decoding
US10390034B2 (en) 2014-01-03 2019-08-20 Microsoft Technology Licensing, LLC Innovations in block vector prediction and estimation of reconstructed sample values within an overlap area
US11284103B2 (en) 2014-01-17 2022-03-22 Microsoft Technology Licensing, LLC Intra block copy prediction with asymmetric partitions and encoder-side search patterns, search ranges and approaches to partitioning
US10542274B2 (en) 2014-02-21 2020-01-21 Microsoft Technology Licensing, LLC Dictionary encoding and decoding of screen content
EP3253059A1 (en) 2014-03-04 2017-12-06 Microsoft Technology Licensing, LLC Block flipping and skip mode in intra block copy prediction
WO2015149698A1 (en) * 2014-04-01 2015-10-08 Mediatek Inc. Method of motion information coding
CN105493505B (en) 2014-06-19 2019-08-06 Microsoft Technology Licensing, LLC Unified intra block copy and inter prediction modes
US20160029022A1 (en) * 2014-07-25 2016-01-28 Mediatek Inc. Video processing apparatus with adaptive coding unit splitting/merging and related video processing method
AU2014408228B2 (en) 2014-09-30 2019-09-19 Microsoft Technology Licensing, LLC Rules for intra-picture prediction modes when wavefront parallel processing is enabled
KR20220162877A (en) * 2014-10-31 2022-12-08 Samsung Electronics Co., Ltd. Apparatus and method for video encoding and decoding using high precision skip encoding
US9591325B2 (en) 2015-01-27 2017-03-07 Microsoft Technology Licensing, LLC Special case handling for merged chroma blocks in intra block copy prediction mode
WO2016178485A1 (en) * 2015-05-05 2016-11-10 LG Electronics Inc. Method and device for processing coding unit in image coding system
US10659783B2 (en) 2015-06-09 2020-05-19 Microsoft Technology Licensing, LLC Robust encoding/decoding of escape-coded pixels in palette mode
EP3349458A4 (en) * 2015-11-24 2018-10-24 Samsung Electronics Co., Ltd. Encoding sequence encoding method and device thereof, and decoding method and device thereof
CN108293115A (en) * 2015-11-24 2018-07-17 Samsung Electronics Co., Ltd. Method and apparatus for encoding/decoding an image
KR102471208B1 (en) * 2016-09-20 2022-11-25 KT Corporation Method and apparatus for processing a video signal
KR102353778B1 (en) * 2016-10-11 2022-01-20 Electronics and Telecommunications Research Institute Method and apparatus for encoding/decoding an image and recording medium for storing a bitstream
KR20180045530A (en) * 2016-10-26 2018-05-04 Digital Insights Inc. Video coding method and apparatus using arbitrary block partition
US20190335170A1 (en) * 2017-01-03 2019-10-31 Lg Electronics Inc. Method and apparatus for processing video signal by means of affine prediction
CN116886898A (en) * 2017-01-16 2023-10-13 Industry Academy Cooperation Foundation of Sejong University Video decoding/encoding method and method for transmitting a bitstream
KR102390413B1 (en) * 2017-03-03 2022-04-25 SK Telecom Co., Ltd. Apparatus and method for video encoding or decoding
KR102591095B1 (en) 2017-09-28 2023-10-19 Samsung Electronics Co., Ltd. Method and apparatus for video encoding and method and apparatus for video decoding
US10986349B2 (en) 2017-12-29 2021-04-20 Microsoft Technology Licensing, LLC Constraints on locations of reference blocks for intra block copy prediction
WO2019194514A1 (en) * 2018-04-01 2019-10-10 LG Electronics Inc. Image processing method based on inter prediction mode, and device therefor
US11671619B2 (en) 2018-07-02 2023-06-06 Intellectual Discovery Co., Ltd. Video coding method and device using merge candidate
CN112889289A (en) * 2018-10-10 2021-06-01 Samsung Electronics Co., Ltd. Method for encoding and decoding video by using a motion vector difference value, and apparatus for encoding and decoding motion information
WO2020084474A1 (en) 2018-10-22 2020-04-30 Beijing Bytedance Network Technology Co., Ltd. Gradient computation in bi-directional optical flow
WO2020084476A1 (en) 2018-10-22 2020-04-30 Beijing Bytedance Network Technology Co., Ltd. Sub-block based prediction
WO2020085953A1 (en) * 2018-10-25 2020-04-30 Huawei Technologies Co., Ltd. An encoder, a decoder and corresponding methods for inter prediction
WO2020094049A1 (en) * 2018-11-06 2020-05-14 Beijing Bytedance Network Technology Co., Ltd. Extensions of inter prediction with geometric partitioning
WO2020098647A1 (en) 2018-11-12 2020-05-22 Beijing Bytedance Network Technology Co., Ltd. Bandwidth control methods for affine prediction
CN113056914B (en) 2018-11-20 2024-03-01 Beijing Bytedance Network Technology Co., Ltd. Partial position based difference calculation
CN113170097B (en) 2018-11-20 2024-04-09 Beijing Bytedance Network Technology Co., Ltd. Encoding and decoding of video coding modes
US20200169757A1 (en) * 2018-11-23 2020-05-28 Mediatek Inc. Signaling For Multi-Reference Line Prediction And Multi-Hypothesis Prediction
GB2580084B (en) * 2018-12-20 2022-12-28 Canon Kk Video coding and decoding
CN111355958B (en) * 2018-12-21 2022-07-29 Huawei Technologies Co., Ltd. Video decoding method and device
JP2022521554A (en) 2019-03-06 2022-04-08 Beijing Bytedance Network Technology Co., Ltd. Use of converted uni-prediction candidates
JP7307192B2 (en) 2019-04-02 2023-07-11 Beijing Bytedance Network Technology Co., Ltd. Derivation of motion vectors on the decoder side
US11616966B2 (en) * 2019-04-03 2023-03-28 Mediatek Inc. Interaction between core transform and secondary transform
CN117425015A (en) * 2019-04-09 2024-01-19 Beijing Dajia Internet Information Technology Co., Ltd. Method, apparatus and storage medium for video encoding
JP6931038B2 (en) * 2019-12-26 2021-09-01 KDDI Corporation Image decoding device, image decoding method and program
CN115955565B (en) * 2023-03-15 2023-07-04 Shenzhen Transsion Holdings Co., Ltd. Processing method, processing apparatus, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090033847A (en) * 2009-03-04 2009-04-06 LG Electronics Inc. Method for predicting motion vector
JP2009239852A (en) * 2008-03-28 2009-10-15 Fujifilm Corp Image processing apparatus and image processing method
CN101647279A (en) * 2007-01-24 2010-02-10 LG Electronics Inc. A method and an apparatus for processing a video signal
CN101682769A (en) * 2007-04-12 2010-03-24 Thomson Licensing Method and apparatus for context dependent merging for skip-direct modes for video encoding and decoding

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5608458A (en) * 1994-10-13 1997-03-04 Lucent Technologies Inc. Method and apparatus for a region-based approach to coding a sequence of video images
KR100223170B1 (en) * 1996-07-22 1999-10-15 Yun Jong-yong Optimization method and apparatus for region partition and motion estimation in a moving image encoder
US6043846A (en) * 1996-11-15 2000-03-28 Matsushita Electric Industrial Co., Ltd. Prediction apparatus and method for improving coding efficiency in scalable video coding
CN1153451C (en) * 1996-12-18 2004-06-09 Thomson Consumer Electronics, Inc. A multiple format video signal processor
JP2004208259A (en) * 2002-04-19 2004-07-22 Matsushita Electric Ind Co Ltd Motion vector calculating method
US7720154B2 (en) * 2004-11-12 2010-05-18 Industrial Technology Research Institute System and method for fast variable-size motion estimation
KR100772873B1 (en) * 2006-01-12 2007-11-02 삼성전자주식회사 Video encoding method, video decoding method, video encoder, and video decoder, which use smoothing prediction
JP4763549B2 (en) * 2006-08-18 2011-08-31 富士通セミコンダクター株式会社 Inter-frame prediction processing apparatus, image encoding apparatus, and image decoding apparatus
CA2674438C (en) * 2007-01-08 2013-07-09 Nokia Corporation Improved inter-layer prediction for extended spatial scalability in video coding
RU2472305C2 (en) * 2007-02-23 2013-01-10 Nippon Telegraph and Telephone Corporation Method of video encoding and method of video decoding, devices therefor, programs therefor, and storage media on which the programs are stored
BRPI0810517A2 (en) * 2007-06-12 2014-10-21 Thomson Licensing METHODS AND APPARATUS SUPPORTING A MULTI-PASS VIDEO SYNTAX STRUCTURE FOR SLICE DATA
EP2210421A4 (en) * 2007-10-16 2013-12-04 LG Electronics Inc. A method and an apparatus for processing a video signal
CN102210153A (en) * 2008-10-06 2011-10-05 LG Electronics Inc. A method and an apparatus for processing a video signal
JP5368631B2 (en) * 2010-04-08 2013-12-18 Toshiba Corporation Image encoding method, apparatus, and program
CN106162171B (en) * 2010-04-13 2020-09-11 GE Video Compression, LLC Decoder and decoding method, and encoding method
KR101484281B1 (en) * 2010-07-09 2015-01-21 Samsung Electronics Co., Ltd. Method and apparatus for video encoding using block merging, method and apparatus for video decoding using block merging

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101647279A (en) * 2007-01-24 2010-02-10 LG Electronics Inc. A method and an apparatus for processing a video signal
CN101682769A (en) * 2007-04-12 2010-03-24 Thomson Licensing Method and apparatus for context dependent merging for skip-direct modes for video encoding and decoding
JP2009239852A (en) * 2008-03-28 2009-10-15 Fujifilm Corp Image processing apparatus and image processing method
KR20090033847A (en) * 2009-03-04 2009-04-06 LG Electronics Inc. Method for predicting motion vector

Also Published As

Publication number Publication date
JP5738991B2 (en) 2015-06-24
JP5873196B2 (en) 2016-03-01
PH12015500917A1 (en) 2015-06-29
CN104869405A (en) 2015-08-26
ES2614203T3 (en) 2017-05-30
LT2924995T (en) 2018-10-10
PH12015500916A1 (en) 2015-06-29
KR20140100928A (en) 2014-08-18
KR20140099855A (en) 2014-08-13
MY178325A (en) 2020-10-08
MY189716A (en) 2022-02-28
RU2575990C1 (en) 2016-02-27
HUE041712T2 (en) 2019-05-28
PH12015500918B1 (en) 2015-06-29
PT2858366T (en) 2017-02-13
EP2924996A1 (en) 2015-09-30
PH12015500916B1 (en) 2015-06-29
JP5945022B2 (en) 2016-07-05
RU2014119390A (en) 2015-11-20
HUE033266T2 (en) 2017-11-28
CA2886721C (en) 2016-10-18
RU2575982C1 (en) 2016-02-27
RS55677B1 (en) 2017-06-30
SG10201503381WA (en) 2015-06-29
BR122020014021A2 (en) 2020-10-20
CN103155563A (en) 2013-06-12
RS55668B1 (en) 2017-06-30
BR122020014007B1 (en) 2022-02-01
PH12015500917B1 (en) 2015-06-29
RU2586035C2 (en) 2016-06-10
KR20140066146A (en) 2014-05-30
SG10201503379SA (en) 2015-06-29
SI2858366T1 (en) 2017-03-31
SG10201503383PA (en) 2015-06-29
JP2015136145A (en) 2015-07-27
KR101559875B1 (en) 2015-10-13
KR101524643B1 (en) 2015-06-03
PL2858366T3 (en) 2017-04-28
KR101559876B1 (en) 2015-10-13
BR122020014021B1 (en) 2022-05-31
CA2886960C (en) 2017-05-02
KR101484281B1 (en) 2015-01-21
SG186970A1 (en) 2013-02-28
CN104869408B (en) 2019-12-10
MX2013000345A (en) 2013-03-20
CA2886721A1 (en) 2012-01-12
HRP20170129T1 (en) 2017-03-24
CN104869406A (en) 2015-08-26
JP2015100136A (en) 2015-05-28
HRP20170169T1 (en) 2017-03-24
PH12015500919A1 (en) 2015-06-29
KR20120005932A (en) 2012-01-17
AU2011274722B2 (en) 2015-05-21
RU2577182C1 (en) 2016-03-10
PT2897365T (en) 2017-02-13
RS55650B1 (en) 2017-06-30
ES2688033T3 (en) 2018-10-30
PL2897365T3 (en) 2017-04-28
JP5945021B2 (en) 2016-07-05
CA2886724C (en) 2016-05-03
CA2886964C (en) 2016-10-25
CN104869408A (en) 2015-08-26
HUE031789T2 (en) 2017-07-28
AU2011274722A1 (en) 2013-01-31
HUE033265T2 (en) 2017-11-28
EP3442230B1 (en) 2021-04-21
PH12015500919B1 (en) 2015-06-29
BR122020014018A2 (en) 2020-10-13
ES2631031T3 (en) 2017-08-25
DK2580912T3 (en) 2017-02-13
CA2804780A1 (en) 2012-01-12
BR122020014007A2 (en) 2020-10-13
BR112013000555B1 (en) 2022-02-15
HRP20170168T1 (en) 2017-03-24
CA2886960A1 (en) 2012-01-12
KR101653274B1 (en) 2016-09-01
LT2858366T (en) 2017-02-27
SG10201503378VA (en) 2015-06-29
MY156223A (en) 2016-01-29
SG196797A1 (en) 2014-02-13
BR112013000555A2 (en) 2020-09-24
JP2015100137A (en) 2015-05-28
CA2886724A1 (en) 2012-01-12
CN104869404A (en) 2015-08-26
BR122020014015B1 (en) 2022-02-01
KR20140101327A (en) 2014-08-19
JP2015136146A (en) 2015-07-27
CY1118484T1 (en) 2017-07-12
EP3442230A1 (en) 2019-02-13
KR20140101713A (en) 2014-08-20
CA2804780C (en) 2018-01-09
CN104869406B (en) 2017-10-17
LT2580912T (en) 2017-02-10
EP2924996B1 (en) 2018-09-12
HRP20181469T1 (en) 2018-11-16
SI2897365T1 (en) 2017-03-31
RS57674B1 (en) 2018-11-30
PT2580912T (en) 2017-02-06
BR122020014015A2 (en) 2020-10-13
HUE041270T2 (en) 2019-05-28
LT2924996T (en) 2018-10-10
CN103155563B (en) 2016-09-21
PL2924996T3 (en) 2018-12-31
PH12015500918A1 (en) 2015-06-29
PL2580912T3 (en) 2017-06-30
CA2886964A1 (en) 2012-01-12
TR201813132T4 (en) 2018-09-21
SI2580912T1 (en) 2017-03-31
BR122020014018B1 (en) 2022-02-01
LT2897365T (en) 2017-02-27
KR101529995B1 (en) 2015-06-19
KR101529996B1 (en) 2015-06-19
ES2688031T3 (en) 2018-10-30
SI2924996T1 (en) 2018-10-30
CN104869405B (en) 2018-04-27
PT2924995T (en) 2018-10-19
PT2924996T (en) 2018-10-24
JP5873195B2 (en) 2016-03-01
HRP20181468T1 (en) 2018-11-16
RU2013105501A (en) 2014-10-20
ZA201300578B (en) 2015-11-25
ES2613610T3 (en) 2017-05-24
JP2013530658A (en) 2013-07-25
MY178332A (en) 2020-10-08
CN104869407B (en) 2017-04-12
MY178329A (en) 2020-10-08
RU2575983C1 (en) 2016-02-27
CN104869407A (en) 2015-08-26
KR20140100929A (en) 2014-08-18

Similar Documents

Publication Publication Date Title
CN104869404B (en) Method and apparatus for encoding video and method and apparatus for decoding video
CN103220523B (en) Apparatus for decoding video
CN102577383B (en) Method and apparatus for encoding video and method and apparatus for decoding video based on a hierarchy of coding units
CN102742277B (en) Method and apparatus for encoding video and method and apparatus for decoding video based on the scan order of hierarchical data units
CN102474613B (en) Method and apparatus for encoding and decoding video in consideration of the scan order of coding units having a hierarchical structure
CN103765894B (en) Method and apparatus for coding video, and method and apparatus for decoding video accompanied by inter prediction using collocated image
CN105812801A (en) Video decoding device
CN104980754A (en) Method and apparatus for encoding and decoding video
CN104980739A (en) Method and apparatus for video encoding using deblocking filtering, and method and apparatus for video decoding using the same
CN106713910A (en) Method and apparatus for decoding image
CN105681802A (en) Video decoding device
CN104365100A (en) Video encoding method and device and video decoding method and device for parallel processing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant