CN101395921B - Method and apparatus for decoding/encoding a video signal - Google Patents


Info

Publication number
CN101395921B
CN101395921B (application CN200780008152.4A)
Authority
CN
China
Prior art keywords
layer
information
prediction
inter
flag information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN200780008152.4A
Other languages
Chinese (zh)
Other versions
CN101395921A (en)
Inventor
全柄文
朴胜煜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Priority claimed from PCT/KR2007/005808 external-priority patent/WO2008060125A1/en
Publication of CN101395921A publication Critical patent/CN101395921A/en
Application granted granted Critical
Publication of CN101395921B publication Critical patent/CN101395921B/en


Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method of decoding a current layer using inter-layer prediction is disclosed. The present invention includes obtaining a first flag information indicating whether a current block of the current layer is coded using the inter-layer prediction, obtaining a quality identification information identifying a quality of the current block, obtaining a second flag information based on the first flag information and the quality identification information, the second flag information indicating whether a reference block is included in a specific slice of a reference layer, and decoding the current block based on the second flag information.

Description

Method and apparatus for decoding/encoding a video signal
Technical field
The present invention relates to a scheme for coding/decoding a video signal.
Background Art
In general, compression coding/decoding means a series of signal-processing techniques for transmitting digitized information over a communication line or storing it on a storage medium in a suitable form. Objects of compression coding include audio, video, text, and the like; in particular, a scheme that performs compression coding on video is called video sequence compression. A video sequence is generally characterized by having spatial redundancy and temporal redundancy.
In particular, a scalable-video-coded bit stream can be decoded partially and selectively. For instance, a decoder of low complexity can decode a base layer, and a bit stream of a low data rate can be extracted for transmission over a network with limited capacity. To generate higher-resolution pictures gradually, the quality of the pictures needs to be raised step by step.
Summary of the invention
Technical problem
In particular, a scalable-video-coded bit stream can be decoded partially and selectively. For instance, a decoder of low complexity can decode a base layer, and a bit stream of a low data rate can be extracted for transmission over a network with limited capacity. To generate higher-resolution pictures gradually, the quality of the pictures needs to be raised step by step.
Technical Solution
Accordingly, the present invention is directed to a scheme for coding/decoding a video signal that substantially obviates one or more problems due to the limitations and disadvantages of the related art.
An object of the present invention is to provide a method for improving coding/decoding efficiency in coding/decoding a video signal.
Another object of the present invention is to provide a method for minimizing the transmission of information related to inter-layer prediction when a region of an enhancement layer does not correspond to a reference layer.
Another object of the present invention is to provide a method for minimizing the transmission of information related to inter-layer prediction by identifying configuration information of a scalable-video-coded bit stream.
Another object of the present invention is to provide a method for minimizing the transmission of information related to inter-layer prediction by checking information indicating whether inter-layer prediction is used.
An object of the present invention is to provide a method for minimizing the transmission of information related to inter-layer prediction by checking quality identification information.
Another object of the present invention is to provide a method for improving the coding/decoding efficiency of a video signal by defining information indicating slice boundary processing.
A further object of the present invention is to provide a method for improving the coding/decoding efficiency of a video signal by identifying configuration information of a scalable-video-coded bit stream at an appropriate position.
Beneficial effect
Accordingly, the present invention provides the following effects or advantages.
First, the present invention can check whether a current block of an enhancement layer is predicted using inter-layer prediction. When the current block of the enhancement layer is not predicted using inter-layer prediction, the coding/decoding information used for inter-layer prediction need not be transmitted; hence, the present invention can improve coding/decoding efficiency. Second, by identifying the configuration information of a scalable-video-coded bit stream at an appropriate position, the transmission of information related to inter-layer prediction is minimized. For instance, by identifying information indicating whether inter-layer prediction is used and/or quality identification information, the transmission of information related to inter-layer prediction can be minimized. Moreover, the present invention makes parallel processing possible by defining information indicating slice boundary processing. The coding/decoding efficiency of a video signal can be improved considerably by using the various methods explained above.
Description of drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principle of the invention.
In the accompanying drawing:
Fig. 1 is a schematic block diagram of a scalable video coding/decoding system according to the present invention;
Fig. 2 and Fig. 3 are a structural diagram of configuration information for a scalable sequence added to a scalable-video-coded bit stream according to an embodiment of the present invention and a picture for explaining that configuration information, respectively;
Fig. 4 is a diagram of a cropping relation between a sampled base layer and an enhancement layer according to an embodiment of the present invention;
Fig. 5 and Fig. 6 are diagrams of syntax related to macroblock prediction and sub-macroblock prediction through inter-layer prediction, respectively, according to an embodiment of the present invention;
Fig. 7 is a diagram of syntax related to residual prediction through inter-layer prediction according to an embodiment of the present invention;
Fig. 8 is a structural diagram of syntax for performing deblocking filtering according to whether inter-layer prediction is carried out, according to an embodiment of the present invention;
Fig. 9 is a structural diagram of syntax for offset information indicating a position difference between an up-sampled reference picture and a current picture according to whether inter-layer prediction is carried out, according to an embodiment of the present invention;
Fig. 10 is a structural diagram of syntax for obtaining flag information indicating whether use of an intra-block in a reference layer is restricted, according to whether inter-layer prediction is carried out, according to an embodiment of the present invention;
Fig. 11 is a structural diagram of syntax for obtaining adaptive prediction information according to whether inter-layer prediction is carried out, according to an embodiment of the present invention.
Best Mode
Additional features and advantages of the present invention will be set forth in the description that follows, and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
To achieve these and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described, a method of decoding a current layer using inter-layer prediction according to the present invention includes: obtaining first flag information indicating whether a current block of the current layer is coded using the inter-layer prediction; obtaining quality identification information identifying a quality of the current block; obtaining second flag information based on the first flag information and the quality identification information, the second flag information indicating whether a reference block is included in a specific slice of a reference layer; and decoding the current block based on the second flag information.
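The claimed decoding flow can be sketched as follows. This is a minimal illustration of the conditional parsing described above, not the patent's actual syntax; the function and parameter names, and the condition that the second flag is only signalled when quality identification equals 0, are assumptions made for the example.

```python
def decode_current_block(first_flag, quality_id, read_second_flag):
    """Sketch of the claimed flow: first_flag tells whether the current
    block is coded with inter-layer prediction, quality_id identifies its
    quality, and read_second_flag is a callable that parses the second
    flag from the bit stream only when it is actually signalled."""
    # The second flag (is the reference block contained in one specific
    # slice of the reference layer?) is parsed only when inter-layer
    # prediction is in use and the quality identification allows it
    # (the quality_id == 0 condition is an assumption for illustration).
    if first_flag and quality_id == 0:
        second_flag = read_second_flag()
    else:
        second_flag = False  # inferred rather than transmitted

    if second_flag:
        return "decode-with-inter-layer-intra-reference"
    return "decode-without-inter-layer-intra-reference"
```

The point of the structure is that the second flag costs no bits at all whenever the first flag and the quality identification already determine its value.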
Preferably, the reference layer differs from the current layer in screen ratio or spatial resolution, and the current layer is coded from the same video signal as the reference layer.
Preferably, when the reference block is included in the specific slice of the reference layer, the current block is decoded using an intra-block within the reference layer.
Preferably, when the reference block spans at least two slices of the reference layer, the current block is marked as not using an intra-block in the reference layer.
To achieve these and other advantages and in accordance with the purpose of the present invention, a method of encoding a video signal according to the present invention includes: checking whether a reference block, in a reference layer corresponding to a current block, is included in a specific slice of the reference layer; generating information indicating slice boundary processing based on a result of the checking; and encoding a bit stream of a current layer based on the information indicating slice boundary processing.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
Mode for the Invention
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings.
First of all, compression coding/decoding of video signal data considers spatial redundancy, temporal redundancy, scalable redundancy, and inter-view redundancy. Compression coding/decoding that considers scalable redundancy is one embodiment of the present invention, but the technical idea of the present invention is also applicable to temporal redundancy, spatial redundancy, inter-view redundancy, and the like. Moreover, in this specification, "coding/decoding" covers both the concept of coding and that of decoding, and can be interpreted flexibly according to the technical idea and scope of the present invention.
In the bit sequence configuration of a video signal, there exists a separate layer structure called a NAL (Network Abstraction Layer) between a VCL (Video Coding Layer), which handles the moving-picture encoding process itself, and a lower-level system that transmits and stores the encoded information. The encoding process outputs VCL data, which is mapped into NAL units prior to transmission or storage. Each NAL unit includes an RBSP (Raw Byte Sequence Payload: result data of moving-picture compression), which is compressed video data or data corresponding to header information.
A NAL unit basically consists of two parts, a NAL header and an RBSP. The NAL header includes flag information (nal_ref_idc) indicating whether a slice serving as a reference picture of the NAL unit is included, and information (nal_unit_type) indicating the type of the NAL unit. The compressed original data is stored in the RBSP, and an RBSP trailing bit is added at the end of the RBSP so that the length of the RBSP is expressed as a multiple of 8 bits. Types of NAL units include IDR (Instantaneous Decoding Refresh) pictures, SPS (Sequence Parameter Set), PPS (Picture Parameter Set), SEI (Supplemental Enhancement Information), and the like.
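The one-byte H.264 NAL unit header described above can be unpacked with simple bit operations; a sketch follows (the field layout, one forbidden bit followed by nal_ref_idc and nal_unit_type, is the standard H.264 layout):

```python
def parse_nal_header(first_byte):
    """Split the first byte of a NAL unit into its three fields."""
    forbidden_zero_bit = (first_byte >> 7) & 0x01  # must be 0
    nal_ref_idc = (first_byte >> 5) & 0x03         # 0: not used as reference
    nal_unit_type = first_byte & 0x1F              # e.g. 5: IDR slice, 7: SPS, 8: PPS, 6: SEI
    return forbidden_zero_bit, nal_ref_idc, nal_unit_type
```

For example, the common SPS start byte 0x67 decodes to nal_ref_idc 3 and nal_unit_type 7.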
Therefore, if the information (nal_unit_type) indicating the type of the NAL unit indicates a scalable-video-coded slice, coding/decoding efficiency can be improved by adding various configuration information related to the scalable coding/decoding. For instance, it is possible to add flag information indicating whether the current access unit is an instantaneous decoding refresh (hereinafter "IDR") access unit, dependency identification information indicating spatial scalability, quality identification information, flag information (no_inter_layer_pred_flag) indicating whether inter-layer prediction is used, priority identification information, and the like. This will be explained in detail with reference to Fig. 2.
In a standard, profiles and levels are specified with various requirements so that a target product can be purchased at an appropriate cost. In this case, a decoder must meet the requirements determined for the corresponding profile and level. Thus, the two concepts "profile" and "level" are defined as functions or parameters representing how wide a range of compressed sequences the decoder can handle. A profile identifier (profile_idc) can identify that a bit stream is based on a prescribed profile; that is, the profile identifier is a flag indicating the profile on which the bit stream is based. For instance, in H.264/AVC, if the profile identifier is 66, the bit stream is based on the baseline profile; if the profile identifier is 77, the bit stream is based on the main profile; and if the profile identifier is 88, the bit stream is based on the extended profile. In addition, the above profile identifier can be included in the sequence parameter set.
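A decoder's first branch on profile_idc can be sketched like this. The values 66, 77, and 88 are the ones quoted in the text; 83 and 86 are the Scalable Baseline and Scalable High profile values defined by the SVC extension, included here as an assumption about the "profile for a scalable sequence" discussed next.

```python
def profile_name(profile_idc):
    """Map a profile_idc value from the sequence parameter set to a name."""
    names = {
        66: "baseline",
        77: "main",
        88: "extended",
        83: "scalable baseline",  # SVC extension (assumed value)
        86: "scalable high",      # SVC extension (assumed value)
    }
    return names.get(profile_idc, "unknown")

def is_scalable_profile(profile_idc):
    # Only scalable profiles require the additional SVC syntax to be parsed.
    return profile_idc in (83, 86)
```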
Therefore, in order to handle a scalable sequence, it is necessary to identify whether an input bit stream is of a profile for a scalable sequence; if the bit stream is identified as being of a profile for a scalable sequence, it is necessary to add syntax so that at least one piece of additional information for the scalable sequence can be transmitted. Here, the profile for a scalable sequence, being an additional scheme of H.264/AVC, indicates a profile mode for handling scalable video. Since SVC is an additional scheme to the conventional AVC technique, it is more efficient to add syntax as additional information for the SVC mode than to add syntax unconditionally. For instance, when the profile identifier of AVC indicates a profile for a scalable sequence, coding/decoding efficiency can be improved by adding information about the scalable sequence.
Various embodiments for providing an efficient video signal coding/decoding method are explained below.
Fig. 1 is a schematic block diagram of a scalable video coding/decoding system according to the present invention.
In order to provide sequences optimized for various communication environments and various terminals, the sequences provided to the terminals should be diversified. If a sequence optimized for each terminal is to be provided to the corresponding terminal, a single sequence source has to be prepared for various combinations of parameter values, including the number of frames transmitted per second, the resolution, the number of bits per pixel, and the like. Providing the optimized sequence therefore places a burden on the content provider. For this reason, a content provider encodes an original sequence into compressed sequence data of a high bit rate. On receiving a sequence request made by a terminal, the content provider decodes the original sequence, encodes it into sequence data suited to the sequence-processing capability of the terminal, and then provides the encoded data to the terminal. Since this transcoding is accompanied by an encoding-decoding-encoding process, a time delay in providing the sequence cannot be avoided, and complicated hardware devices and algorithms are additionally required.
On the other hand, scalable video coding (SVC) is a coding scheme for encoding a video signal with the best possible picture quality such that partial sequences of the resulting picture sequence can still be decoded and displayed as sequences. Here, a partial sequence may mean a sequence consisting of frames selected intermittently from the whole sequence. For a picture sequence encoded by SVC, the sequence size can be reduced for a low bit rate by using spatial scalability, and the picture quality of the sequence can also be reduced by using quality scalability. Here, a picture sequence with a small screen and/or a low number of frames per second can be called a base layer, and a sequence with a relatively large screen and/or a relatively high number of frames per second can be called an enhanced or enhancement layer.
A picture sequence encoded by the above-mentioned scalable scheme enables a sequence of low picture quality to be reproduced by receiving and processing only a partial sequence. Yet, if the bit rate is lowered, the picture quality degrades considerably. To solve the problem of the degraded picture quality, a separate auxiliary picture sequence for a low bit rate, for instance a picture sequence with a small screen and/or a low number of frames per second, can be provided. Such an auxiliary sequence can be called a base layer, and the main picture sequence can be called an enhanced or enhancement layer.
In describing the various embodiments for inter-layer prediction, the present invention uses the concepts of a base layer and an enhancement layer. For example, the enhancement layer can have a spatial resolution or screen ratio different from that of the base layer, and the enhancement layer can have a picture quality different from that of the base layer. As a detailed instance, the base layer can be a reference layer, and the enhancement layer can be a current layer. The base layer and the enhancement layer explained below are merely exemplary and do not limit the interpretation of the present invention.
The scalable video coding/decoding system is explained in detail below. First, the scalable coding/decoding system includes an encoder 102 and a decoder 110. The encoder 102 includes a base layer encoding unit 104, an enhancement layer encoding unit 106, and a multiplexing unit 108. The decoder can include a demultiplexing unit 112, a base layer decoding unit 114, and an enhancement layer decoding unit 116. The base layer encoding unit 104 can generate a base bit stream by compressing an input sequence signal X(n). The enhancement layer encoding unit 106 can generate an enhancement layer bit stream using the input sequence signal X(n) and the information generated by the base layer encoding unit 104. The multiplexing unit 108 can generate a scalable bit stream using the base layer bit stream and the enhancement layer bit stream.
The generated scalable bit stream is transmitted to the decoder 110 over a specific channel, and the transmitted scalable bit stream can be divided into an enhancement layer bit stream and a base layer bit stream by the demultiplexing unit 112 of the decoder 110. The base layer decoding unit 114 receives the base layer bit stream and decodes it into a sequence signal together with the residuals and motion information of the inter blocks of the macroblocks. Here, the decoding can be carried out based on a single-loop decoding method.
The enhancement layer decoding unit 116 receives the enhancement layer bit stream and decodes an output sequence signal Xe(n) with reference to the base layer reconstructed by the base layer decoding unit 114. Here, the output sequence signal Xb(n) is a sequence signal having a lower picture quality or resolution than the output sequence signal Xe(n).
Therefore, each of enhancement layer coding unit 106 and enhancement layer decoder unit 116 is all through using inter-layer prediction to carry out coding.Inter-layer prediction is represented the sequence signal through movable information that uses basic layer and/or texture information prediction enhancement layer.Here, texture information can represent to belong to the view data or the pixel value of macro block.For example, in inter-layer prediction method, fundamental forecasting pattern (intra baseprediction mode) or residual prediction mode in the frame are arranged.The fundamental forecasting pattern can represent to be used for to predict the pattern based on the piece of the enhancement layer of the respective regions of basic layer in the frame.Here, the respective regions in the basic layer can be represented the zone with the interlayer pattern-coding.Simultaneously, residual prediction mode can be used the respective regions with residual error data, and this residual error data is the image difference in the basic layer.In two kinds of situations, the respective regions in the above-mentioned basic layer can be enlarged or dwindle through sampling is used for inter-layer prediction.Sampled representation changes image resolution ratio.And sampling can comprise resampling, down-sampling, up-sampling etc.For instance, can in sample, resample with inter-layer prediction.And, can come to produce again pixel data to reduce image resolution ratio through using downsampling filter, this can be called as down-sampling.And, can generate some additional pixel data to improve image resolution ratio through using up-sampling filter, this can be called as up-sampling.Resampling can comprise down-sampling and two notions of up-sampling.Among the present invention, can come this term of correct interpretation " sampling " according to scope and the technological thought of corresponding embodiment of the present invention.
Meanwhile, for the same sequence content, a base layer and an enhancement layer are generated for different uses or purposes, and they differ from each other in spatial resolution, frame rate, bit rate, and the like. In coding a video signal by inter-layer prediction, the non-dyadic case, in which the ratio of the enhancement layer to the base layer in spatial resolution is not a power of two, can be called extended spatial scalability (ESS). For instance, when an enhancement layer is coded through inter-layer prediction as a video signal with a 16:9 (horizontal:vertical) ratio, it may happen that the base layer is coded as a picture with a 4:3 ratio. In this case, since the base layer is coded in a cropping state in which part of the original video signal is cut off, the base layer, even if enlarged for inter-layer prediction, cannot cover the whole region of the enhancement layer. Hence, since a partial region of the enhancement layer has no corresponding region in the up-sampled base layer, the information of the up-sampled base layer for inter-layer prediction cannot be used for this partial region; that is, inter-layer prediction is not applicable to this partial region. In this case, the coded information used for inter-layer prediction may not be transmitted. Detailed embodiments are explained below with reference to Fig. 5 to Fig. 11.
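The ESS situation above reduces to a geometric containment test: an enhancement-layer macroblock can use inter-layer prediction only if it lies entirely inside the window covered by the up-sampled (cropped) base layer. The following sketch makes the test explicit; the parameter names are illustrative, not the standard's syntax elements.

```python
def inter_layer_prediction_available(mb_x, mb_y, offset_x, offset_y,
                                     scaled_base_w, scaled_base_h,
                                     mb_size=16):
    """True when macroblock (mb_x, mb_y) of the enhancement layer falls
    completely inside the region covered by the up-sampled base layer,
    which starts at (offset_x, offset_y) and spans scaled_base_w by
    scaled_base_h luma samples."""
    x0, y0 = mb_x * mb_size, mb_y * mb_size
    return (x0 >= offset_x and y0 >= offset_y and
            x0 + mb_size <= offset_x + scaled_base_w and
            y0 + mb_size <= offset_y + scaled_base_h)
```

Macroblocks for which this test fails are exactly those for which the inter-layer-prediction syntax need not be transmitted.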
Fig. 2 and Fig. 3 are a structural diagram of configuration information for a scalable sequence added to a scalable-video-coded bit stream according to an embodiment of the present invention and a picture for describing that configuration information, respectively.
Fig. 2 shows an example of the structure of a NAL unit to which configuration information about a scalable sequence can be added. First, the NAL unit can mainly include a NAL unit header and an RBSP (raw byte sequence payload: result data of moving-picture compression). The NAL unit header can include identification information (nal_ref_idc) indicating whether the NAL unit includes a slice of a reference picture, and information (nal_unit_type) indicating the type of the NAL unit. An extension area of the NAL unit header can also be included in a restricted way. For instance, if the information indicating the NAL unit type is associated with scalable video coding or indicates a prefix NAL unit, the NAL unit can include the extension area of the NAL unit header. In particular, if nal_unit_type equals 20 or 14, the NAL unit can include the extension area of the NAL unit header. Moreover, configuration information about a scalable sequence can be added to the extension area of the NAL unit header according to flag information (svc_mvc_flag) capable of identifying whether the bit stream is an SVC bit stream.
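The rule just stated, that the header extension area is present only for certain NAL unit types, is a one-line check for a parser:

```python
def nal_header_has_extension(nal_unit_type):
    """Per the text: the NAL unit header carries an extension area when
    the type is 14 (prefix NAL unit) or 20 (scalable coded slice)."""
    return nal_unit_type in (14, 20)
```

A demultiplexer can therefore route a NAL unit without decoding its payload, reading the SVC configuration fields only when this predicate is true.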
For another instance, if the information indicating the NAL unit type is information indicating a subset sequence parameter set, the RBSP can include information about the subset sequence parameter set. In particular, if nal_unit_type equals 15, the RBSP can include information about a subset sequence parameter set, information about a slice layer, and the like. In this case, the subset sequence parameter set can include an extension area of the sequence parameter set according to profile information. For instance, if the profile information (profile_idc) is a profile related to scalable coding, the subset sequence parameter set can include the extension area of the sequence parameter set. Alternatively, according to the profile information, the sequence parameter set can include the extension area of the sequence parameter set. The extension area of the sequence parameter set can include characteristic information of a deblocking filter for controlling inter-layer prediction, information related to parameters used in an up-sampling process, and the like. Various configuration information about a scalable sequence, for example configuration information that can be included in the extension area of the NAL unit header, the extension area of the sequence parameter set, or a slice layer, will be explained in detail below.
First, flag information (inter_layer_deblocking_filter_control_present_flag), indicating whether information for controlling the characteristics of the deblocking filter for inter-layer prediction is present, can be obtained from the extension area of the sequence parameter set. Moreover, information (extended_spatial_scalability) indicating the position of the information related to the parameters used in the up-sampling process can be obtained from the extension area of the sequence parameter set. In particular, for instance, if extended_spatial_scalability equals 0, it can indicate that no parameter for the up-sampling process exists in the sequence parameter set or the slice header. If extended_spatial_scalability equals 1, it can indicate that parameters for the up-sampling process exist in the sequence parameter set. If extended_spatial_scalability equals 2, it can indicate that parameters for the up-sampling process exist in the slice header. The parameters used in the up-sampling process will be explained in detail with reference to Fig. 9.
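The three-way meaning of extended_spatial_scalability reduces to a small lookup, sketched here exactly as the text describes it:

```python
def upsampling_params_location(extended_spatial_scalability):
    """Where the up-sampling parameters are signalled, per the text:
    0 = nowhere, 1 = sequence parameter set, 2 = slice header."""
    locations = {
        0: "none",
        1: "sequence parameter set",
        2: "slice header",
    }
    return locations[extended_spatial_scalability]
```

Signalling the location at value 2 lets the cropping offsets change picture by picture, which matters for the ESS case discussed earlier.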
④ The information indicating whether inter-layer prediction is used can mean flag information indicating whether inter-layer prediction is used in decoding a coded slice. The flag information can be obtained from the extension area of the NAL header. For instance, if the flag information is set to 1, it can indicate that inter-layer prediction is not used; if the flag information is set to 0, inter-layer prediction may or may not be used according to the coding scheme within each macroblock. This is because inter-layer prediction within a macroblock may or may not be used.
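The asymmetry of this flag is worth spelling out: the value 1 is a definite "no", while the value 0 only defers the decision to the macroblock level. A sketch, using the flag name quoted earlier in the text:

```python
def inter_layer_prediction_usage(no_inter_layer_pred_flag):
    """Interpret the slice-level flag from the NAL header extension.
    1: inter-layer prediction is definitely not used for the slice;
    0: usage is decided per macroblock by the coding scheme."""
    if no_inter_layer_pred_flag == 1:
        return "not used"
    return "decided per macroblock"
```

This is how the decoder knows, before parsing any macroblock, whether the reference layer for the slice must be available at all.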
③ The quality identification information means information identifying the quality of the NAL unit. This configuration information is described with reference to Fig. 3. For instance, a single picture can be coded into layers that differ from one another in quality. In Fig. 3, the layers in Spa_Layer0 and Spa_Layer1 can be coded into layers differing in quality. In particular, assuming that the information identifying the quality of the NAL unit is named quality_id, layers B1, B2, ..., B10 can be set to quality_id equal to 0, and layers Q1, Q2, ..., Q10 can be set to quality_id equal to 1. In other words, layers B1, B2, ..., B10 can mean the layers having the lowest picture quality; these are called base pictures. Layers Q1, Q2, ..., Q10 correspond to layers that include layers B1, B2, ..., B10 and have a picture quality better than that of layers B1, B2, ..., B10. The quality identification information can be defined in various ways; for instance, it can be expressed in 16 levels.
The information indicating spatial scalability means information identifying a dependency of the NAL unit. This configuration information is described with reference to Fig. 3. For instance, the dependency can vary according to spatial resolution. In Fig. 3, the layers in Spa_Layer0 and those in Spa_Layer1 each have equal resolution, and the layers in Spa_Layer0 can consist of pictures obtained by down-sampling the layers in Spa_Layer1. In particular, assuming that the information identifying the dependency of the NAL unit is named dependency_id, the dependency_id of the layers in Spa_Layer0 equals 0, and the dependency_id of the layers in Spa_Layer1 equals 1. The dependency identification information can be defined in various ways. Thus, NAL units having the same value of dependency identification information can be represented as a dependency representation.
Meanwhile, a single layer can be defined according to the dependency identification information and the quality identification information. In this case, NAL units having the same values of dependency identification information and quality identification information can be represented as a layer representation.
The identification information for temporal scalability means information on a temporal level of a NAL unit. The temporal level can be described according to a hierarchical B-picture structure. For instance, a layer (B1, Q1) and a layer (B3, Q3) in Spa_Layer0 can have an identical temporal level Tem_Layer0. If a layer (B5, Q5) refers to the layer (B1, Q1) and the layer (B3, Q3), the layer (B5, Q5) can have a temporal level Tem_Layer1 higher than the temporal level Tem_Layer0 of the layer (B1, Q1) and the layer (B3, Q3). Likewise, if a layer (B7, Q7) refers to the layer (B1, Q1) and the layer (B5, Q5), the layer (B7, Q7) can have a temporal level Tem_Layer2 higher than the temporal level Tem_Layer1 of the layer (B5, Q5). All NAL units within a single access unit can have the same temporal level value. In case of an IDR access unit, the temporal level value can be set to 0.
The flag information indicating whether a reference base picture is used as a reference picture indicates whether a reference base picture or a decoded picture is used as a reference picture in an inter-layer prediction process. For NAL units in a same layer, i.e., NAL units having the same dependency identification information, the flag information can have the same value.

Priority identification information means information identifying a priority of a NAL unit. Inter-layer extensibility or inter-picture extensibility can be provided using the priority identification information. For instance, sequences at various temporal and spatial levels can be provided to a user using the priority identification information. Hence, the user can view only a sequence at a specific time and space, or only a sequence according to a different constraint. The priority information can be configured in various ways in accordance with its reference conditions. The priority information can also be configured randomly, without being based on a special reference. And, the priority information can be determined by a decoder.

And, the configuration information in the extension area of the NAL unit header can include flag information indicating whether a current access unit is an IDR access unit.
Various kinds of information usable for inter-layer prediction can be included in a slice layer. For instance, information (5) on processing for a slice boundary in an up-sampling process, information (6) related to an operation of a deblocking filter, information (7) related to a phase shift of a chroma signal, offset information (8) indicating a position difference between a base layer and an enhancement layer, and information (9) indicating whether adaptive prediction exists can be included. The above kinds of information can be obtained from a slice header.
As an example of the information (6) related to the operation of the deblocking filter, there can be information (disable_deblocking_filter_idc) indicating an operational method of the deblocking filter and offset information (inter_layer_slice_alpha_c0_offset_div2, inter_layer_slice_beta_offset_div2) required for deblocking filtering.

As an example of the information (7) related to the phase shift of the chroma signal, there can be information (scaled_ref_layer_left_offset, scaled_ref_layer_top_offset, scaled_ref_layer_right_offset, scaled_ref_layer_bottom_offset) on horizontal and vertical phase shifts of a chroma component of a picture used for inter-layer prediction.

As an example of the offset information (8) indicating the position difference between layers, there can be information (scaled_ref_layer_left_offset, scaled_ref_layer_top_offset, scaled_ref_layer_right_offset, scaled_ref_layer_bottom_offset) indicating left, right, top and bottom position differences between an up-sampled reference picture used for inter-layer prediction and a current picture.

As an example of the information (5) on processing for a macroblock at a slice boundary in the base-layer up-sampling process, there can be information (constrained_intra_resampling_flag) indicating whether a current macroblock is prohibited from being predicted using a corresponding intra-coded block of the base layer in case the corresponding intra-coded block lies across at least two slices of the base layer.

And, the information (9) indicating whether adaptive prediction exists can indicate whether prediction-related information exists in the slice header and a macroblock layer. According to the information indicating whether adaptive prediction exists, it is able to decide what type of adaptive prediction method shall be used. This will be explained in detail with reference to FIG. 11 later.
FIG. 4 is a diagram of a cropping relation between a sampled base layer and an enhancement layer.
In scalable video coding, whether a current block of an enhancement layer can use inter-layer prediction can be checked. For instance, it can be checked whether an area corresponding to all pixels of the current block exists in a base layer. As a result of the checking process, if the current block of the enhancement layer is not used for inter-layer prediction, coding information for inter-layer prediction need not be transmitted. Hence, coding efficiency can be improved.

Therefore, a function can be defined that checks whether the current block of the enhancement layer uses inter-layer prediction. For instance, a function in_crop_window() can be defined to check whether the area corresponding to all pixels of the current block exists in the base layer. Assuming that a macroblock index in a horizontal direction on the enhancement layer is set to mbIdxX and a macroblock index in a vertical direction is set to mbIdxY, the function in_crop_window() can return a value 'TRUE' (or '1') if the following conditions are satisfied:
mbIdxX≥(ScaledBaseLeftOffset+15)/16
mbIdxX≤(ScaledBaseLeftOffset+ScaledBaseWidth-1)/16
mbIdxY≥(ScaledBaseTopOffset+15)/16
mbIdxY≤(ScaledBaseTopOffset+ScaledBaseHeight-1)/16
mbIdxX can be derived using a macroblock address and the number of macroblocks in the horizontal direction. mbIdxY can be derived in different ways according to whether macroblock adaptive frame-field coding is applied. For instance, if macroblock adaptive frame-field coding is applied, mbIdxY can be derived by considering a macroblock pair. In considering the macroblock pair, assume that an index of a top macroblock is set to mbIdxY0 and an index of a bottom macroblock is set to mbIdxY1. mbIdxY0 can be derived from the offset information indicating a top position difference between the up-sampled picture used for inter-layer prediction and the current picture, and from information on the number of macroblocks in the horizontal direction. In this case, the value of the horizontal macroblock-number information can differ according to whether the current picture is a frame picture or a field picture. mbIdxY1 can be derived from the offset information indicating the top position difference between the up-sampled picture used for inter-layer prediction and the current picture, and from information on the number of macroblocks in the vertical direction. Meanwhile, if macroblock adaptive frame-field coding is not applied, mbIdxY0 and mbIdxY1 can be set to the same value.

ScaledBaseLeftOffset indicates offset information indicating a left position difference between the up-sampled picture used for inter-layer prediction and the current picture. ScaledBaseTopOffset indicates offset information indicating a top position difference between the up-sampled picture used for inter-layer prediction and the current picture. ScaledBaseWidth indicates a horizontal width of the up-sampled picture, and ScaledBaseHeight indicates a vertical height of the up-sampled picture.

If any one of the above conditions is not satisfied, the function in_crop_window() can return a value 'FALSE' (or '0').
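The macroblock-coverage check above can be sketched as a small function. This is an illustrative reconstruction under stated assumptions, not the normative SVC derivation: the offsets and dimensions are treated as plain pixel values, integer division stands in for the spec's arithmetic, and macroblock adaptive frame-field handling is omitted.

```python
def in_crop_window(mb_idx_x, mb_idx_y,
                   scaled_base_left_offset, scaled_base_top_offset,
                   scaled_base_width, scaled_base_height):
    """Return True when every pixel of the enhancement-layer macroblock at
    (mb_idx_x, mb_idx_y) has a corresponding area in the up-sampled base
    layer, per the four conditions above (16x16 macroblocks, integer math)."""
    return ((scaled_base_left_offset + 15) // 16 <= mb_idx_x
            <= (scaled_base_left_offset + scaled_base_width - 1) // 16
            and (scaled_base_top_offset + 15) // 16 <= mb_idx_y
            <= (scaled_base_top_offset + scaled_base_height - 1) // 16)
```

For example, with a 176x144-pixel up-sampled base region at zero offsets, macroblocks with mbIdxX in 0..10 and mbIdxY in 0..8 fall inside the crop window; anything beyond returns FALSE, signaling that inter-layer prediction syntax is absent for that macroblock.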
When a pixel corresponding to at least one pixel of the current block does not exist in the up-sampled base layer, that is, when the function in_crop_window(CurrMbAddr) returns a 'FALSE' value, information associated with inter-layer prediction is not used for the current block, and that information need not be transmitted. Therefore, according to an embodiment of the present invention, if it is identified through in_crop_window(CurrMbAddr) that the corresponding area does not exist in the base layer, transmission of the information related to the inter-layer prediction for the current block can be omitted.

Cases of performing coding using the function in_crop_window() according to embodiments of the present invention are explained below.
First of all, when it is identified through in_crop_window(CurrMbAddr) that the area corresponding to the current block exists in the base layer, an enhancement layer encoding unit 106 performs inter-layer prediction using texture and/or motion information of the base layer. In this case, the motion information can include reference index information, motion vector information, partition information, and the like.

When the texture and/or motion information of the current block is set to the texture and/or motion information of a corresponding block, or when the texture and/or motion information of the current block is derived from the texture and/or motion information of the corresponding block, the enhancement layer encoding unit 106 adds indication information, which indicates the complete or derived information, to a data stream of the enhancement layer, and notifies a decoder 110 of the addition. But, when it is identified through in_crop_window(CurrMbAddr) that the area corresponding to the current block does not exist in the base layer, the enhancement layer encoding unit 106 generates the enhancement layer without performing inter-layer prediction. Meanwhile, if the decoder 110 confirms through in_crop_window(CurrMbAddr) that the area corresponding to the current block does not exist in the base layer, the decoder 110 decides that the indication information has not been transmitted.
FIG. 5 and FIG. 6 are diagrams of syntax for macroblock prediction and sub-macroblock prediction through inter-layer prediction according to one embodiment of the present invention, respectively.
In case of inter-layer prediction, information related to the inter-layer prediction of slice data of a current NAL is transferred to the decoder. For instance, in predicting a motion vector of the current block of the enhancement layer, a flag (motion_prediction_flag_lx) indicating whether a motion vector of the base layer is used can be obtained from a macroblock layer. According to one embodiment of the present invention, the decoder learns whether the information associated with inter-layer prediction has been transmitted by an encoder by checking in_crop_window(CurrMbAddr) (510, 610). For instance, according to in_crop_window(CurrMbAddr), if the area corresponding to the current block does not exist in the base layer, the flags motion_prediction_flag_l0/l1 are not transmitted in the bitstream (520/530, 620/630).

And, a flag adaptive_motion_prediction_flag indicating whether information associated with motion vector prediction exists in the macroblock layer can be obtained from the slice data of the current NAL. According to one embodiment of the present invention, by checking both adaptive_motion_prediction_flag and in_crop_window(CurrMbAddr), the encoder is able not to transmit the information associated with inter-layer prediction (510). For instance, the flags motion_prediction_flag_l0/l1 are not transmitted (520/530, 620/630) if, according to in_crop_window(CurrMbAddr), the area corresponding to the current block does not exist in the base layer, or if, according to adaptive_motion_prediction_flag, the information associated with motion vector prediction does not exist in the macroblock. The above technical idea is equally applicable to the sub-macroblock prediction shown in FIG. 6.

Therefore, the information associated with inter-layer prediction is transmitted only if both of the above conditions are satisfied after the above two kinds of information have been identified. Hence, coding efficiency can be improved.
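The two conditions just described gate the presence of motion_prediction_flag_l0/l1 in the bitstream. A minimal sketch of that gate follows; the function name is hypothetical, and in the actual syntax the check is performed inside the macroblock-layer parsing of FIG. 5 and FIG. 6:

```python
def send_motion_prediction_flag(adaptive_motion_prediction_flag: bool,
                                in_crop_window_result: bool) -> bool:
    """motion_prediction_flag_l0/l1 is present in the macroblock layer only
    when motion-prediction syntax is enabled for the slice AND the area
    corresponding to the current block exists in the base layer."""
    return adaptive_motion_prediction_flag and in_crop_window_result
```

If either check fails, the flag is simply absent, which is how the encoder avoids spending bits on inter-layer prediction syntax that cannot be used.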
FIG. 7 is a diagram of syntax for residual prediction through inter-layer prediction according to one embodiment of the present invention.

In case of inter-layer prediction, the information related to the inter-layer prediction of the slice data of the current NAL is transferred to the decoder. For instance, in case of predicting a residual signal of the current block, a flag residual_prediction_flag indicating whether a residual signal of the base layer is used can be obtained from the macroblock layer (740). In this case, the base layer can be identified through layer representation information. According to one embodiment of the present invention, by checking in_crop_window(CurrMbAddr), the encoder is able not to transmit the information related to inter-layer prediction.
For instance, the above residual_prediction_flag can be obtained (710) according to information adaptive_residual_prediction_flag, which indicates the presence of information related to the prediction of residual signals in the macroblock, and according to slice type information of the current block. The above residual_prediction_flag can also be obtained according to base_mode_flag, which indicates whether a type (mb_type) of the current macroblock is derived from the corresponding area of the base layer (720). The residual_prediction_flag can also be obtained according to the type of the current macroblock and the function in_crop_window(CurrMbAddr). For instance, residual_prediction_flag can be obtained (730) when the types of the macroblock and sub-macroblock are not an intra mode (MbPartPredType(mb_type, 0) != Intra_16x16 (8x8 and 4x4)) and the value of in_crop_window(CurrMbAddr), which indicates that the area corresponding to the current macroblock exists in the base layer, is 'TRUE'. If the type of the current macroblock is an intra mode, or if the area corresponding to the current macroblock does not exist in the base layer (in_crop_window(CurrMbAddr) = 0), residual prediction is not performed, and the above encoder 102 generates the enhancement layer without including residual_prediction_flag.
If the above residual_prediction_flag is set to 1, the residual signal of the current block is predicted from the residual signal of the base layer. If residual_prediction_flag is set to 0, the residual signal is coded without inter-layer prediction. If residual_prediction_flag does not exist in the macroblock layer, it can be derived as follows. In particular, residual_prediction_flag can be derived as a preset value (default_residual_prediction_flag) only when all of the following conditions are satisfied. First, base_mode_flag should be set to 1, or the type of the current macroblock should not be an intra mode. Second, in_crop_window(CurrMbAddr) should be set to 1. Third, the flag no_inter_layer_pred_flag indicating whether inter-layer prediction is used should be set to 0. Fourth, the slice type should not be an EI slice. Otherwise, residual_prediction_flag is derived as 0.
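The four-condition inference rule for a missing residual_prediction_flag can be sketched as follows. The argument names, the string encoding of the slice type, and the default value of 1 are assumptions of this sketch, not the normative syntax:

```python
def infer_residual_prediction_flag(base_mode_flag, mb_is_intra,
                                   in_crop_window_result,
                                   no_inter_layer_pred_flag,
                                   slice_type,
                                   default_residual_prediction_flag=1):
    """Derive residual_prediction_flag when it is absent from the macroblock
    layer, following the four conditions listed in the text."""
    if ((base_mode_flag == 1 or not mb_is_intra)  # cond. 1: base mode or not intra
            and in_crop_window_result == 1        # cond. 2: area exists in base layer
            and no_inter_layer_pred_flag == 0     # cond. 3: inter-layer prediction used
            and slice_type != 'EI'):              # cond. 4: not an EI slice
        return default_residual_prediction_flag
    return 0
```

Failing any one condition yields 0, i.e., the decoder reconstructs the residual from the enhancement layer alone.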
When it is confirmed through in_crop_window(CurrMbAddr) that the area corresponding to the current block does not exist in the base layer, an enhancement layer decoding unit 116 decides that motion prediction flag (motion_prediction_flag) information does not exist in the macroblock or sub-macroblock, and reconstructs a video signal using the data bitstream of the enhancement layer only, without performing inter-layer prediction. If a syntax element for residual prediction is not included in the data bitstream of the enhancement layer, the enhancement layer decoding unit 116 can derive the residual prediction flag residual_prediction_flag. In doing so, whether the area corresponding to the current block exists in the base layer can be considered through in_crop_window(CurrMbAddr). If in_crop_window(CurrMbAddr) is set to 0, the enhancement layer decoding unit 116 can confirm that the area corresponding to the current block does not exist in the base layer. In this case, residual_prediction_flag is derived as 0, and the video signal is then reconstructed using the data of the enhancement layer only, without performing residual prediction using the residual signal of the base layer.
FIG. 8 is a diagram of a syntax structure for performing deblocking filtering according to whether inter-layer prediction exists, according to one embodiment of the present invention.

First of all, according to one embodiment of the present invention, the encoder is able not to transmit the information related to inter-layer prediction by checking configuration information of a scalable-video-coded bitstream. The configuration information of the scalable-video-coded bitstream can be obtained from the extension area of the NAL header. For instance, the information related to the deblocking filter can be obtained according to the information (no_inter_layer_pred_flag) indicating whether inter-layer prediction is used and the quality identification information (quality_id) (810). As examples of the information related to the operation of the deblocking filter, there can be information (disable_deblocking_filter_idc) indicating the operational method of the deblocking filter, offset information (slice_alpha_c0_offset_div2, slice_beta_offset_div2) required for deblocking filtering, and the like.

First of all, the information indicating the operational method of the deblocking filter can be obtained based on information for controlling a characteristic of the deblocking filter. In this case, as mentioned in the description of FIG. 2, the information for controlling the characteristic of the deblocking filter can be obtained from an extension area of a sequence parameter set. For instance, as the information for controlling the characteristic of the deblocking filter, there can be flag information (inter_layer_deblocking_filter_control_present_flag) indicating whether information controlling the characteristic of the deblocking filter used for inter-layer prediction exists (820). Hence, the information indicating the operational method of the deblocking filter can be obtained according to the above flag information (830).
In particular, if disable_deblocking_filter_idc equals 0, filtering can be performed on all block edges of the luma and chroma information of a current picture. If disable_deblocking_filter_idc equals 1, filtering is performed on no block edge of the current picture. If disable_deblocking_filter_idc equals 2, filtering is performed on all block edges except the block edges that overlap slice boundaries. If disable_deblocking_filter_idc equals 3, filtering is performed first on the block edges that do not overlap slice boundaries and then on the block edges that overlap slice boundaries. If disable_deblocking_filter_idc equals 4, filtering is performed on the block edges of a luma signal only, and no filtering is performed on the block edges of a chroma signal. If disable_deblocking_filter_idc equals 5, filtering is performed on all block edges of the luma signal except the block edges that overlap slice boundaries, and no filtering is performed on the block edges of the chroma signal. If disable_deblocking_filter_idc equals 6, no filtering is performed on the block edges of the chroma signal, and filtering is performed on the block edges of the luma signal only; after the luma block edges that do not overlap slice boundaries have been filtered, the luma block edges that overlap slice boundaries can be filtered.
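The seven disable_deblocking_filter_idc values can be condensed into a lookup over (signal, edge class). The table below paraphrases the prose above; it records only whether an edge is filtered at all, not the two-pass ordering used by values 3 and 6:

```python
# For each disable_deblocking_filter_idc value:
# (luma interior edges, luma slice-boundary edges,
#  chroma interior edges, chroma slice-boundary edges)
DEBLOCK_BEHAVIOR = {
    0: (True,  True,  True,  True),   # filter all edges, luma and chroma
    1: (False, False, False, False),  # no filtering at all
    2: (True,  False, True,  False),  # skip edges on slice boundaries
    3: (True,  True,  True,  True),   # all edges; boundary edges filtered second
    4: (True,  True,  False, False),  # luma only
    5: (True,  False, False, False),  # luma only, skip slice-boundary edges
    6: (True,  True,  False, False),  # luma only; boundary edges filtered second
}

def filters_edge(idc, is_chroma, on_slice_boundary):
    """Report whether a given block edge is deblocking-filtered for this idc."""
    luma_in, luma_bd, chroma_in, chroma_bd = DEBLOCK_BEHAVIOR[idc]
    if is_chroma:
        return chroma_bd if on_slice_boundary else chroma_in
    return luma_bd if on_slice_boundary else luma_in
```

Note that values 3 and 6 filter the same edge sets as 0 and 4 respectively; they differ only in processing order, which matters for slice-parallel decoding.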
Based on the information indicating the operational method of the deblocking filter, the offset information required for deblocking filtering can be obtained. For instance, if disable_deblocking_filter_idc equals 1, deblocking filtering is not performed on any block edge. Therefore, the offset information required for deblocking filtering can be obtained only when the value of disable_deblocking_filter_idc is not set to 1 (840). For instance, inter_layer_slice_alpha_c0_offset_div2 and inter_layer_slice_beta_offset_div2 can mean offset information used in accessing a deblocking filter table in the macroblock in case of inter-layer prediction (850). Hence, deblocking filtering can be performed using the obtained offset information.
FIG. 9 is a diagram of a syntax structure for obtaining offset information indicating a position difference between an up-sampled reference picture and a current picture according to whether inter-layer prediction exists, according to one embodiment of the present invention.

According to one embodiment of the present invention, the encoder is able not to transmit the information related to inter-layer prediction by checking the configuration information of the scalable-video-coded bitstream. The configuration information of the scalable-video-coded bitstream can be obtained from the extension area of the NAL header. For instance, information related to parameters for an up-sampling process can be obtained according to the information (no_inter_layer_pred_flag) indicating whether inter-layer prediction is used and the quality identification information (quality_id) (910). As examples of the information related to the parameters for the up-sampling process, there can be information (930) on the phase shift of the chroma signal, offset information (940) indicating a position difference between pictures, and the like. And, the information related to the parameters for the up-sampling process can be obtained from the extension area of the sequence parameter set and the slice header.
As examples of the information (930) on the phase shift of the chroma signal, there can be information (ref_layer_chroma_phase_x_plus1) on a horizontal phase shift of a chroma component of a picture used for inter-layer prediction and information (ref_layer_chroma_phase_y_plus1) on a vertical phase shift thereof. As an example of the offset information (940) indicating the position difference between pictures, there can be offset information (scaled_ref_layer_left_offset, scaled_ref_layer_top_offset, scaled_ref_layer_right_offset, scaled_ref_layer_bottom_offset) indicating left, top, right and bottom position differences between the up-sampled picture used for inter-layer prediction and the current picture.
The information related to the parameters for the up-sampling process can be obtained based on information (extended_spatial_scalability) indicating the position of the information related to the parameters for the up-sampling process. For instance, if the above extended_spatial_scalability is set to 0, it can indicate that the information related to the parameters for the up-sampling process exists neither in the sequence parameter set nor in the slice header. If extended_spatial_scalability is set to 1, it can indicate that this information does not exist in the slice header but exists in the sequence parameter set. If extended_spatial_scalability is set to 2, it can indicate that this information does not exist in the sequence parameter set but exists in the slice header. Therefore, if extended_spatial_scalability is set to 2, the information related to the parameters for the up-sampling process can be handled in the slice header (920). And, if extended_spatial_scalability is set to 1, the information related to the parameters for the up-sampling process can be handled in the sequence parameter set.
The information (930) on the phase shift of the chroma signal and the offset information (940) indicating the position difference between the reference picture and the current picture are used for the up-sampling process.
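The three extended_spatial_scalability values map directly to where the up-sampling parameters are carried. A one-line sketch of that mapping (the return strings are illustrative labels, not spec terms):

```python
def upsampling_param_location(extended_spatial_scalability: int) -> str:
    """Where the up-sampling parameters (chroma phase shifts, scaled
    reference layer offsets) are carried, per the three values above."""
    return {0: 'absent',                  # in neither SPS nor slice header
            1: 'sequence_parameter_set',  # present in the SPS only
            2: 'slice_header'}[extended_spatial_scalability]
```

A parser would consult this once per slice to decide whether to read the scaled_ref_layer offsets from the slice header or fall back to the sequence-level values.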
FIG. 10 is a diagram of a syntax structure for obtaining flag information indicating whether the use of an intra-block in the base layer is restricted, according to whether inter-layer prediction exists, according to one embodiment of the present invention.

According to one embodiment of the present invention, by checking the configuration information of the scalable-video-coded bitstream, the encoder is able not to transmit the information related to inter-layer prediction. The configuration information of the above scalable-video-coded bitstream can be obtained from the extension area of the NAL header. For instance, information (1010) on processing for a slice boundary in the up-sampling process can be obtained according to the information (no_inter_layer_pred_flag) indicating whether inter-layer prediction is used and the quality identification information (quality_id). As an example of the information on processing for the slice boundary, there exists information (constrained_intra_resampling_flag) indicating whether the use of an intra-block of the base layer is restricted for the current block of the enhancement layer. By defining the information indicating whether the use of the intra-block is restricted, a decoding speed in performing parallel processing can be improved. The information indicating whether the use of the intra-block is restricted can be obtained from the slice header.
Since the information indicating whether the use of the intra-block is restricted is obtained from the slice header, even if its value is set to 1, it is still necessary to check whether a base-layer reference block corresponding to the current block is included in a specific slice of the base layer. Hence, when constrained_intra_resampling_flag is set to 1, it can be confirmed whether the reference block of the base layer corresponding to the current block is included in a specific slice of the base layer. For instance, when the base-layer reference block overlaps at least two slices of the base layer, the current block is marked as not using the intra-block in the base layer. In particular, the current block cannot be coded using an intra-base prediction mode. The intra-base prediction mode can mean a mode of predicting the current block of the enhancement layer based on the corresponding area of the base layer; in this case, the corresponding area of the base layer means a block coded in an intra mode. When the corresponding area of the base layer is included in a specific slice of the base layer, the current block can be decoded using the intra-block of the base layer, and in this case the current block can be marked as using the intra-base prediction mode.
If the above constrained_intra_resampling_flag is set to 1, the information (disable_deblocking_filter_idc) indicating the operational method of the deblocking filter described with reference to FIG. 8 is restricted. For instance, disable_deblocking_filter_idc can be set to 1, 2 or 4 only.

If the above constrained_intra_resampling_flag is set to 0, the current block of the enhancement layer can be decoded using an intra-block of the base layer even if the corresponding block in the base layer overlaps at least two slices in the base layer.

The above-described embodiments are applicable to a chroma signal and are equally applicable to a luma signal in the same manner.
FIG. 11 is a diagram of syntax for obtaining adaptive prediction information according to whether inter-layer prediction exists, according to one embodiment of the present invention.

According to one embodiment of the present invention, by confirming the configuration information of the scalable-video-coded bitstream, the encoder is able not to transmit the information related to inter-layer prediction. The configuration information of the scalable-video-coded bitstream can be obtained from the extension area of the NAL header. For instance, adaptive prediction information can be obtained based on the information no_inter_layer_pred_flag indicating whether inter-layer prediction is used (1110). The adaptive prediction information can indicate whether syntax related to prediction exists at a corresponding position. For instance, there can be information adaptive_prediction_flag indicating whether syntax related to prediction exists in the slice header and the macroblock layer, information adaptive_motion_prediction_flag indicating whether syntax related to motion prediction exists in the macroblock layer, information adaptive_residual_prediction_flag indicating whether syntax related to residual prediction exists in the macroblock layer, and the like.
When inter-layer prediction is performed according to the information indicating whether inter-layer prediction is used, flag information slice_skip_flag indicating whether slice data exists can be obtained first (1120). By confirming the information indicating whether the slice data exists, it can be decided whether the information in the macroblock shall be derived for inter-layer prediction. According to the information indicating the presence of the above slice data, if the slice data exists in the slice (1130), the adaptive prediction flag adaptive_prediction_flag can be obtained (1140). And, the information adaptive_residual_prediction_flag indicating whether syntax related to residual prediction exists in the macroblock layer can be obtained (1180). According to the above adaptive prediction flag, the information default_base_mode_flag indicating how to derive the information indicating whether motion information and the like shall be predicted from the corresponding block of the base layer can be obtained (1150). When the motion information and the like is not predicted from the corresponding block of the base layer (1155), the information adaptive_motion_prediction_flag indicating whether syntax related to motion prediction exists in the macroblock layer can be obtained (1160). If the syntax related to motion prediction does not exist in the macroblock layer (1165), the information default_motion_prediction_flag indicating how to infer the motion prediction flag information can be obtained (1170).
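The parsing order just described can be sketched as nested conditionals. The Flags reader and the dictionary result are assumptions of this sketch, and gating each default_* flag on the corresponding adaptive_* flag being 0 follows the usual SVC convention rather than an explicit statement in the text:

```python
class Flags:
    """Minimal stand-in for a bitstream reader (illustrative only)."""
    def __init__(self, bits):
        self.bits = list(bits)

    def read_flag(self):
        return self.bits.pop(0)

def parse_adaptive_prediction(bs, no_inter_layer_pred_flag):
    """Sketch of the FIG. 11 parsing order for adaptive prediction info."""
    syn = {}
    if not no_inter_layer_pred_flag:                          # inter-layer prediction in use (1110)
        syn['slice_skip_flag'] = bs.read_flag()               # (1120)
        if not syn['slice_skip_flag']:                        # slice data present (1130)
            syn['adaptive_prediction_flag'] = bs.read_flag()  # (1140)
            if not syn['adaptive_prediction_flag']:
                syn['default_base_mode_flag'] = bs.read_flag()               # (1150)
                if not syn['default_base_mode_flag']:                        # (1155)
                    syn['adaptive_motion_prediction_flag'] = bs.read_flag()  # (1160)
                    if not syn['adaptive_motion_prediction_flag']:           # (1165)
                        syn['default_motion_prediction_flag'] = bs.read_flag()  # (1170)
            syn['adaptive_residual_prediction_flag'] = bs.read_flag()        # (1180)
    return syn
```

With the bit sequence [0, 0, 0, 0, 0, 1], all adaptive flags read as 0, so both default flags are read, and adaptive_residual_prediction_flag comes out as 1.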
The information adaptive_motion_prediction_flag, indicating whether syntax related to motion prediction exists in the macroblock layer, and the information adaptive_residual_prediction_flag, indicating whether syntax related to residual prediction exists in the macroblock layer, can be used in the macroblock layer. For instance, the flag motion_prediction_flag_lx indicating whether a motion vector of the base layer is used can be obtained based on the above adaptive_motion_prediction_flag. And, the flag residual_prediction_flag indicating whether the residual signal of the base layer is used can be obtained based on the above adaptive_residual_prediction_flag.
As described above, a decoder/encoder to which the present invention is applicable can be provided in a broadcast transmitter/receiver for multimedia broadcasting such as DMB (Digital Multimedia Broadcasting), to decode video signals, data signals, and the like. The multimedia broadcast transmitter/receiver can include a mobile communication terminal.
A decoding/encoding method to which the present invention is applied can be implemented as a program to be executed by a computer and stored in a computer-readable recording medium. Multimedia data having a data structure according to the present invention can also be stored in a computer-readable recording medium. Computer-readable recording media include all kinds of storage devices in which data readable by a computer system is stored. Computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy discs, optical data storage devices, and the like, and also include media realized as carrier waves (for example, transmission over the Internet). A bitstream generated by the encoding method can be stored in a computer-readable recording medium or transmitted over a wired/wireless communication network.
Industrial applicability
While the present invention has been described and illustrated with reference to its preferred embodiments, it will be apparent to those skilled in the art that various modifications and variations can be made therein without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover the modifications and variations that come within the scope of the appended claims and their equivalents.

Claims (8)

1. A method of decoding a video signal, the method comprising:
obtaining first flag information, the first flag information indicating whether inter-layer prediction is used in decoding a current slice in an enhancement layer;
obtaining quality identification information, the quality identification information identifying a picture quality of the current slice;
obtaining second flag information based on the first flag information and the quality identification information, the second flag information indicating whether use of an intra base prediction mode is restricted for a current block in the enhancement layer, the intra base prediction mode being a mode in which the current block is predicted using a reference block coded in intra mode, the reference block being referred to by the current block; and
decoding the current block based on the second flag information.
2. The method according to claim 1, wherein a base layer differs from the enhancement layer in picture quality or spatial resolution.
3. The method according to claim 1, further comprising:
when the second flag information indicates that use of the intra base prediction mode is restricted, checking whether the reference block is included within a slice of a base layer,
wherein, when the reference block is included within the slice of the base layer, the current block is decoded using the intra base prediction mode.
4. The method according to claim 1, further comprising:
when the second flag information indicates that use of the intra base prediction mode is restricted, checking whether the reference block is included within a slice of a base layer,
wherein, when the reference block overlaps at least two slices of the base layer, the current block is marked as not using the intra base prediction mode.
5. The method according to claim 1, wherein the video signal is received as a broadcast signal.
6. The method according to claim 1, wherein the video signal is received via a digital medium.
7. An apparatus for decoding a video signal, the apparatus comprising:
a first enhancement layer decoding unit configured to obtain first flag information, the first flag information indicating whether inter-layer prediction is used in decoding a current slice in an enhancement layer, and to obtain quality identification information, the quality identification information identifying a picture quality of the current slice; and
a second enhancement layer decoding unit configured to obtain second flag information based on the first flag information and the quality identification information, and to decode a current block in the enhancement layer based on the second flag information,
wherein the second flag information indicates whether use of an intra base prediction mode is restricted for the current block in the enhancement layer, the intra base prediction mode being a mode in which the current block is predicted using a reference block coded in intra mode, the reference block being referred to by the current block.
8. The apparatus according to claim 7, wherein, when the second flag information indicates that use of the intra base prediction mode is restricted, the second enhancement layer decoding unit checks whether the reference block is included within a slice of a base layer,
and wherein, when the reference block is included within the slice of the base layer, the current block is decoded using the intra base prediction mode.
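The slice-containment condition recited in claims 3, 4, and 8 can be illustrated with a short, non-normative sketch: intra base prediction is usable only when the reference block falls entirely within a single base-layer slice, and a block whose reference overlaps two or more slices is marked as not using the mode. The rectangle and slice-map representation below is an assumption made for illustration, not the codec's actual data structures.

```python
# Illustrative check of the claimed condition: the reference block must lie
# entirely within one base-layer slice for intra base prediction to be used.

def intra_base_mode_allowed(ref_block, slice_map) -> bool:
    """ref_block: (x0, y0, x1, y1) in base-layer macroblock coordinates,
    inclusive. slice_map: 2D list giving the slice id of each macroblock."""
    x0, y0, x1, y1 = ref_block
    # Collect the slice ids covered by the reference block.
    slice_ids = {slice_map[y][x]
                 for y in range(y0, y1 + 1)
                 for x in range(x0, x1 + 1)}
    # Allowed only if the whole reference block falls in a single slice;
    # overlapping two or more slices means the mode is marked unusable.
    return len(slice_ids) == 1
```

For example, with a base-layer picture split into three slices, a reference block contained in slice 0 passes the check, while one straddling slices 0 and 1 fails it.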

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US85953206P 2006-11-17 2006-11-17
US60/859,532 2006-11-17
KR10-2006-0132282 2006-12-22
KR1020060132282 2006-12-22
KR20060132282 2006-12-22
US89705107P 2007-01-24 2007-01-24
US60/897,051 2007-01-24
PCT/KR2007/005808 WO2008060125A1 (en) 2006-11-17 2007-11-19 Method and apparatus for decoding/encoding a video signal

Publications (2)

Publication Number Publication Date
CN101395921A CN101395921A (en) 2009-03-25
CN101395921B true CN101395921B (en) 2012-08-22

Family

ID=40494907

Family Applications (3)

Application Number Title Priority Date Filing Date
CN200780008152.4A Expired - Fee Related CN101395921B (en) 2006-11-17 2007-11-19 Method and apparatus for decoding/encoding a video signal
CN 200780008172 Pending CN101395922A (en) 2006-11-17 2007-11-19 Method and apparatus for decoding/encoding a video signal
CN200780008342.6A Active CN101401430B (en) 2006-11-17 2007-11-19 Method and apparatus for decoding/encoding a video signal

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN 200780008172 Pending CN101395922A (en) 2006-11-17 2007-11-19 Method and apparatus for decoding/encoding a video signal
CN200780008342.6A Active CN101401430B (en) 2006-11-17 2007-11-19 Method and apparatus for decoding/encoding a video signal

Country Status (1)

Country Link
CN (3) CN101395921B (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2718447C (en) * 2009-04-28 2014-10-21 Panasonic Corporation Image decoding method, image coding method, image decoding apparatus, and image coding apparatus
JP5233897B2 (en) * 2009-07-31 2013-07-10 ソニー株式会社 Image processing apparatus and method
JP5344238B2 (en) * 2009-07-31 2013-11-20 ソニー株式会社 Image encoding apparatus and method, recording medium, and program
BR112012026391B1 (en) 2010-04-13 2020-12-15 Ge Video Compression, Llc HERITAGE IN ARRANGEMENT SAMPLE IN MULTITREE SUBDIVISION
CN106358045B (en) 2010-04-13 2019-07-19 Ge视频压缩有限责任公司 Decoder, coding/decoding method, encoder and coding method
TWI815295B (en) 2010-04-13 2023-09-11 美商Ge影像壓縮有限公司 Sample region merging
CN106412608B (en) 2010-04-13 2019-10-08 Ge视频压缩有限责任公司 For decoding, generating the method and decoder of data flow
CN106028045B (en) * 2010-04-13 2019-06-25 Ge视频压缩有限责任公司 The method of decoding data stream, the method and its decoder for generating data flow
CN101977316B (en) * 2010-10-27 2012-07-25 无锡中星微电子有限公司 Telescopic coding method
EP3057326A1 (en) * 2011-06-10 2016-08-17 MediaTek, Inc Method and apparatus of scalable video coding
KR20130034566A (en) * 2011-09-28 2013-04-05 한국전자통신연구원 Method and apparatus for video encoding and decoding based on constrained offset compensation and loop filter
CN102723104A (en) * 2012-07-04 2012-10-10 深圳锐取信息技术股份有限公司 Multimedia recorded broadcast system based on moving picture experts group 4 (MP4) file packaging format
US9432664B2 (en) * 2012-09-28 2016-08-30 Qualcomm Incorporated Signaling layer identifiers for operation points in video coding
KR20140087971A (en) * 2012-12-26 2014-07-09 한국전자통신연구원 Method and apparatus for image encoding and decoding using inter-prediction with multiple reference layers
WO2014107066A1 (en) * 2013-01-04 2014-07-10 삼성전자 주식회사 Scalable video encoding method and apparatus using image up-sampling in consideration of phase-shift and scalable video decoding method and apparatus
US9270991B2 (en) * 2013-01-07 2016-02-23 Qualcomm Incorporated Inter-layer reference picture generation for HLS-only scalable video coding
EP2941873A1 (en) * 2013-01-07 2015-11-11 VID SCALE, Inc. Motion information signaling for scalable video coding
EP2804375A1 (en) 2013-02-22 2014-11-19 Thomson Licensing Coding and decoding methods of a picture block, corresponding devices and data stream
US9584808B2 (en) * 2013-02-22 2017-02-28 Qualcomm Incorporated Device and method for scalable coding of video information
US9578339B2 (en) * 2013-03-05 2017-02-21 Qualcomm Incorporated Parallel processing for video coding
US9998735B2 (en) * 2013-04-01 2018-06-12 Qualcomm Incorporated Inter-layer reference picture restriction for high level syntax-only scalable video coding
US9813723B2 (en) * 2013-05-03 2017-11-07 Qualcomm Incorporated Conditionally invoking a resampling process in SHVC
WO2015008477A1 (en) * 2013-07-14 2015-01-22 Sharp Kabushiki Kaisha Tile alignment signaling and conformance constraints
US9813736B2 (en) * 2013-09-27 2017-11-07 Qualcomm Incorporated Inter-view dependency type in MV-HEVC
KR20150046742A (en) 2013-10-22 2015-04-30 주식회사 케이티 A method and an apparatus for encoding and decoding a multi-layer video signal
KR20150050409A (en) 2013-10-29 2015-05-08 주식회사 케이티 A method and an apparatus for encoding and decoding a multi-layer video signal
CN105981386B (en) * 2013-12-06 2019-02-26 华为技术有限公司 Picture decoding apparatus, picture coding device and coded data converting device
CN106105208B (en) * 2014-01-09 2020-04-07 三星电子株式会社 Scalable video encoding/decoding method and apparatus
WO2017063168A1 (en) * 2015-10-15 2017-04-20 富士通株式会社 Image coding method and apparatus, and image processing device
CN107181953B (en) * 2017-03-31 2019-09-17 北京奇艺世纪科技有限公司 A kind of determination method and device of boundary filtering strength

Citations (1)

Publication number Priority date Publication date Assignee Title
CN1650348A (en) * 2002-04-26 2005-08-03 松下电器产业株式会社 Device and method for encoding, device and method for decoding

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US6907079B2 (en) * 2002-05-01 2005-06-14 Thomson Licensing S.A. Deblocking filter conditioned on pixel brightness
KR100679035B1 (en) * 2005-01-04 2007-02-06 삼성전자주식회사 Deblocking filtering method considering intra BL mode, and video encoder/decoder based on multi-layer using the method
CN100345450C (en) * 2005-01-31 2007-10-24 浙江大学 Deblocking filtering method and apparatus of video frequency or image

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN1650348A (en) * 2002-04-26 2005-08-03 松下电器产业株式会社 Device and method for encoding, device and method for decoding

Non-Patent Citations (1)

Title
Julien Reichel, "Joint Scalable Video Model JSVM-7", Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG, 2006. *

Also Published As

Publication number Publication date
CN101401430B (en) 2012-02-29
CN101401430A (en) 2009-04-01
CN101395921A (en) 2009-03-25
CN101395922A (en) 2009-03-25

Similar Documents

Publication Publication Date Title
CN101395921B (en) Method and apparatus for decoding/encoding a video signal
CN101888559B (en) Method and apparatus for decoding/encoding a video signal
CN101888555B (en) Method and apparatus for decoding/encoding a video signal
JP5063684B2 (en) Video signal decoding / encoding method and apparatus
EP2984839B1 (en) Coding concept allowing efficient multi-view/layer coding
CN105052144B (en) Inter-layer prediction method for scalable video
CN101422046A (en) Method and apparatus for decoding/encoding a video signal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120822

Termination date: 20181119