CN101473655A - Method and apparatus for processing a vedeo signal - Google Patents

Method and apparatus for processing a vedeo signal

Info

Publication number
CN101473655A
Authority
CN
China
Prior art keywords
image
information
current block
motion vector
anchor
Prior art date
Legal status
Granted
Application number
CN200780023130.5A
Other languages
Chinese (zh)
Other versions
CN101473655B (en)
Inventor
具汉书
全炳文
朴胜煜
全勇俊
Current Assignee
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date
Filing date
Publication date
Application filed by LG Electronics Inc
Priority claimed from PCT/KR2007/002964 (WO2007148906A1)
Publication of CN101473655A
Application granted
Publication of CN101473655B
Status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An apparatus for processing a video signal and method thereof are disclosed, by which duplication of inter-view pictures is eliminated to decode the video signal, by which a global motion vector of a current picture is generated based on relevance between inter-view pictures to decode the video signal, and by which motion information for a current picture is obtained based on relevance between inter-view pictures to perform motion compensation. The present invention includes extracting attribute information for a current block or attribute information for a current picture from the video signal, extracting motion skip information for the current block, and generating motion information for the current block using motion information for a reference block according to the attribute information and the motion skip information.

Description

Method and apparatus for processing a video signal
Technical field
The present invention relates to video signal processing, and more particularly, to a method and apparatus for processing a video signal. Although the present invention is suitable for a wide range of applications, it is particularly suitable for decoding video signals.
Background Art
Generally, compression coding means a series of signal processing techniques for transmitting digitized information over a communication line or storing digitized information in a form suitable for a storage medium. Targets of compression coding include audio, video, characters, and the like. In particular, the compression coding technique performed on video is called video compression. Video is generally characterized by having spatial redundancy and temporal redundancy.
Summary of the invention
Technical problem
However, if spatial redundancy and temporal redundancy are not sufficiently eliminated, the compression ratio in coding the video signal is lowered.
If spatial redundancy and temporal redundancy are excessively eliminated, the information required to decode the video signal cannot be generated, which degrades the reconstruction rate.
In a multi-view video signal, since inter-view pictures mostly differ only because of the positions of the cameras, they tend to have very high correlation and redundancy. If the redundancy between inter-view pictures is not sufficiently eliminated, or is excessively eliminated, the compression ratio or the reconstruction rate is lowered.
Accordingly, the present invention is directed to a method and apparatus for processing a video signal that substantially obviate one or more problems due to limitations and disadvantages of the related art.
An object of the present invention is to provide a method and apparatus for processing a video signal by which redundancy between inter-view pictures is eliminated to decode the video signal.
Another object of the present invention is to provide a method and apparatus for processing a video signal by which a global motion vector of a current picture is generated based on the correlation between inter-view pictures to decode the video signal.
A further object of the present invention is to provide a method and apparatus for processing a video signal by which motion information for a current picture is obtained based on the correlation between inter-view pictures to perform motion compensation.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Advantageous Effects
The present invention provides the following effects or advantages.
First, in coding a video signal, the present invention is able to partially omit motion vectors having high inter-view correlation, thereby improving the compression ratio.
Second, in coding a video signal, the present invention is able to omit motion information having high redundancy, thereby improving the compression ratio.
Third, even if motion information for a current block is not transmitted, the present invention is able to calculate other motion information very similar to the motion information for the current block, thereby improving the reconstruction rate.
Brief Description of the Drawings
FIG. 1 is a diagram of an example of a global motion vector corresponding to a slice;
FIG. 2 is a diagram of an example of global motion vectors corresponding to objects or background within a picture;
FIGs. 3 to 5 are diagrams of various methods of transmitting a global motion vector;
FIG. 6 is a diagram of an example of syntax according to the method shown in FIG. 4;
FIG. 7 is a diagram of an example of syntax according to the method shown in FIG. 5;
FIG. 8 is a conceptual diagram for explaining a video signal processing method according to an embodiment of the present invention;
FIG. 9 is a block diagram of a video signal processing apparatus according to an embodiment of the present invention;
FIG. 10 is a flowchart of a video signal processing method according to an embodiment of the present invention;
FIG. 11 is a conceptual diagram for explaining a video signal processing method according to another embodiment of the present invention;
FIG. 12 is a block diagram of a video signal processing apparatus according to another embodiment of the present invention;
FIG. 13 is a detailed block diagram of the motion information obtaining unit shown in FIG. 12;
FIG. 14 is a flowchart of a video signal processing method according to another embodiment of the present invention;
FIG. 15 is a detailed flowchart of the motion information generating step (S300) shown in FIG. 14;
FIGs. 16 and 17 are diagrams of examples of syntax for a motion skip mode; and
FIGs. 18 and 19 are diagrams of examples of the meaning of motion skip flag information.
Best Mode
To achieve these objects and other advantages, and in accordance with the purpose of the present invention, as embodied and broadly described, a method for processing a video signal according to the present invention includes the steps of: extracting attribute information for a current block or attribute information for a current picture from the video signal; extracting motion skip information for the current block; and generating motion information for the current block using motion information for a reference block according to the attribute information and the motion skip information.
To achieve these objects and other advantages, a method for processing a video signal according to the present invention includes the steps of: extracting at least one global motion vector corresponding to a picture on a first domain that includes a current block from the video signal; obtaining time information for the picture on the first domain; and generating a global motion vector of a current picture using the at least one global motion vector of the picture on the first domain and the time information, wherein the global motion vector corresponds to at least one block within the picture on the first domain.
To further achieve these objects and other advantages, a method for processing a video signal according to the present invention includes the steps of: extracting at least one global motion vector corresponding to a picture on a first domain that includes a current block, and generating a global motion vector of a current picture using the at least one most recently extracted global motion vector, wherein the global motion vector corresponds to at least one block within the picture on the first domain.
To further achieve these objects and other advantages, a method for processing a video signal according to the present invention includes the steps of: extracting priority information for a bitstream corresponding to a current block; if the bitstream is parsed according to the priority information, extracting at least one global motion vector corresponding to a picture on a first domain that includes the current block; and generating a global motion vector of a current picture using the at least one global motion vector, wherein the global motion vector corresponds to at least one block within the picture on the first domain.
To further achieve these objects and other advantages, a method for processing a video signal according to the present invention includes the steps of: searching for a reference block corresponding to a current block; obtaining motion information for the reference block from the video signal; generating motion information for the current block using the motion information for the reference block; calculating a predicted value of the current block using the motion information for the current block; and reconstructing the current block using the predicted value of the current block, wherein the reference block is located at a position shifted from the same position as the current block by a global motion vector of a current picture.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the claims of the present invention.
Mode for Carrying Out the Invention
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings.
First of all, motion information in the present invention should be interpreted as a concept that includes motion information in an inter-view direction as well as motion information in a temporal direction. Furthermore, a motion vector should be interpreted as a concept that includes a disparity offset in the inter-view direction as well as a motion offset in the temporal direction.
A first domain is not limited to the temporal direction. Pictures on the first domain are not limited to a group of pictures having the same view. A second domain is not limited to the inter-view direction (or spatial direction). Pictures on the second domain are not limited to a group of pictures having the same temporal instance.
The following description first explains the concept of a global motion vector or global disparity vector (hereinafter abbreviated GDV) and a method of deriving the global motion vector of a current picture (or slice) from the global motion vector of another picture when the global motion vector of the current picture is not transmitted. Subsequently, a method of generating motion information for a current block using motion information of a neighboring view is explained, for the case in which motion information (macroblock type, reference picture index, motion vector, etc.) for the current block is not transmitted, i.e., the motion skip mode.
1. Derivation of a global motion vector (GDV)
1.1 Concept, types and transmission methods of a global motion vector
Compared with a motion vector corresponding to a local area (e.g., macroblock, block, pixel, etc.), a global motion vector or global disparity vector (hereinafter abbreviated GDV) is a motion vector corresponding to an entire area that includes the local area. In this case, the entire area may correspond to a single slice, a single picture, or a whole sequence. In some cases, the entire area may correspond to at least one object area or background.
Meanwhile, a motion vector may have a value of a pixel unit or a 1/4-pixel unit, whereas a global motion vector (GDV) may have a value of a pixel unit or a 1/4-pixel unit, or a value of a 4x4 unit, an 8x8 unit, or a macroblock unit.
A global motion vector can be transmitted in various ways. It can be transmitted for each slice within a picture or for each picture. It can be transmitted for each slice only in the case of an anchor picture. It can also be transmitted for each slice of an anchor picture only when view dependency of a non-anchor picture exists.
In the following description, the concept of the global motion vector is explained with reference to FIG. 1 and FIG. 2, and various methods of transmitting the global motion vector are explained with reference to FIGs. 3 to 5.
FIG. 1 shows an example of a global motion vector corresponding to a slice, and FIG. 2 shows an example of global motion vectors corresponding to objects or background within a picture.
Referring to FIG. 1, motion vectors (mv) correspond to macroblocks (a1, a2, a3, ...), whereas the global motion vector (GDV) is a motion vector corresponding to the slice (A).
Referring to FIG. 2, object areas (A1, A2, ...) exist within a single slice (A). The object areas (A1, A2, ...) can be specified by a top-left position top_left[n] and a bottom-right position bottom_right[n]. Global motion vectors GDV[1] and GDV[n] correspond to the object areas (A1, A2, ...), and a global motion vector GDV[0] corresponds to the background excluding the object areas.
FIGs. 3 to 5 are diagrams of various methods of transmitting a global motion vector.
Referring to FIG. 3(a), a global motion vector GDV[i] (where i indicates a picture index) is transmitted for each picture (two global motion vectors in the case of bi-prediction).
Referring to FIG. 3(b), if at least one slice is included in a picture, a global motion vector GDV[i][j] (where j indicates a slice index) is transmitted for each slice (two global motion vectors in the case of bi-prediction).
Referring to FIG. 4, a global motion vector GDV[a_n][j] (where a_n indicates an anchor picture index) is transmitted only for the anchor pictures among the pictures (two global motion vectors in the case of bi-prediction). For non-anchor pictures, no global motion vector is transmitted. FIG. 6 shows an example of syntax for transmitting the global motion vector by the method shown in FIG. 4.
Referring to FIG. 6, slice type information slice_type is included in the slice header mvc_header(). If the picture to which the current slice belongs is an anchor picture and the slice type is P, a single global motion vector global_disparity_mb_l0[compIdx] is transmitted. Under the same condition, if the slice type is B, two global motion vectors including global_disparity_mb_l1[compIdx] are transmitted.
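As a non-normative illustration only, the parsing rule just described for FIG. 6 could be sketched roughly as follows; the Bitstream type, the read_se() helper and the numeric slice-type values are assumptions for illustration and are not taken from the patent:

```c
/* Illustrative sketch of the FIG. 6 rule described above: global disparity
 * vectors are parsed only for anchor pictures, one list for P slices and a
 * second list additionally for B slices. read_se() is a hypothetical reader
 * of one signed Exp-Golomb value. */
typedef struct Bitstream Bitstream;          /* hypothetical bitstream reader state */
extern int read_se(Bitstream *bs);

enum { SLICE_P = 0, SLICE_B = 1 };           /* assumed slice_type values */

void parse_global_disparity(Bitstream *bs, int anchor_pic_flag, int slice_type,
                            int gdv_l0[2], int gdv_l1[2])
{
    if (!anchor_pic_flag)
        return;                              /* non-anchor pictures carry no GDV (FIG. 4) */

    if (slice_type == SLICE_P || slice_type == SLICE_B)
        for (int compIdx = 0; compIdx < 2; compIdx++)
            gdv_l0[compIdx] = read_se(bs);   /* global_disparity_mb_l0[compIdx] */

    if (slice_type == SLICE_B)
        for (int compIdx = 0; compIdx < 2; compIdx++)
            gdv_l1[compIdx] = read_se(bs);   /* global_disparity_mb_l1[compIdx] */
}
```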
Referring to FIG. 5, as in the method shown in FIG. 4, a global motion vector GDV[a_n][j] is transmitted only for anchor pictures. However, it is not transmitted for all of the views S0, S1, S2 and S3; the global motion vector is transmitted only when view dependency of a non-anchor picture (view_dependency_non_anchor) exists at the view (S1, S3) to which the anchor picture belongs, i.e., when a non-anchor picture refers to a picture of a different view. For the views (S0, S2) at which view dependency of a non-anchor picture does not exist, no global motion vector is transmitted. The method shown in FIG. 5 can be effective when the global motion vector has no use other than the motion skip mode, which will be explained in the description of the motion skip mode below. FIG. 7 shows an example of syntax for transmitting the global motion vector by the method shown in FIG. 5.
Referring to FIG. 7, the syntax is almost the same as the previous syntax shown in FIG. 6. The difference is that the view dependency of a non-anchor picture (view_dependency_non_anchor) is added as a condition for transmitting the global motion vector. In addition, the view dependency of an anchor picture (view_dependency_anchor), which indicates whether the anchor picture refers to a picture of a different view, is added as a condition; this can be a necessary condition for extracting the global motion vector of the anchor picture. Details of the view dependency will be explained in section '2.1 Motion skip mode determining process'.
For the case in which the global motion vector is transmitted by the method shown in FIG. 4 or FIG. 5, i.e., in which the global motion vector is not transmitted for non-anchor pictures, the process of deriving the global motion vector of a non-anchor picture using the global motion vector of an anchor picture is explained in the following description.
1.2 Extraction and derivation of a global motion vector
FIG. 8 is a conceptual diagram of a video signal processing method according to an embodiment of the present invention, FIG. 9 is a block diagram of a video signal processing apparatus according to an embodiment of the present invention, and FIG. 10 is a flowchart of a video signal processing method according to an embodiment of the present invention.
Referring to FIG. 8, since the current block (current macroblock) belongs to a non-anchor picture, the global motion vector of the current picture is not transmitted. Instead, the global motion vector GDV_A of the temporally preceding neighboring anchor picture and the global motion vector GDV_B of the temporally following neighboring anchor picture are transmitted. The process of generating the global motion vector GDVcur of the current picture using at least one of the global motion vectors GDV_A and GDV_B of the neighboring anchor pictures is explained in detail with reference to FIG. 9 and FIG. 10 below.
Referring to FIG. 9 and FIG. 10, the selective receiving unit 105 of a video signal processing apparatus 100 according to an embodiment of the present invention receives a bitstream if the priority of the bitstream or of the receiving unit (not shown) is equal to or higher than a preset priority, based on priority information of the bitstream or the receiving unit. In this case, a lower value of the priority information can be set to indicate a higher priority.
For example, priority information of 5 indicates a lower priority than priority information of 4. In this case, if the preset priority information is '5', the selective receiving unit 105 receives only bitstreams whose priority information is equal to or smaller than '5' (e.g., 0, 1, 2, 3 or 4), and does not receive bitstreams whose priority information is equal to or greater than '6'.
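Purely as an illustration of the filtering rule just described (the struct, field and function names below are assumptions, not taken from the patent), a minimal sketch of the selective reception might look like this:

```c
#include <stdbool.h>

/* Illustrative sketch of the selective receiving unit 105 described above:
 * a lower priority value means a higher priority, and only units whose
 * priority information does not exceed the preset value are accepted. */
typedef struct {
    int priority_id;        /* priority information carried with the bitstream unit */
    /* ... payload ... */
} BitstreamUnit;

bool selective_receive(const BitstreamUnit *unit, int preset_priority)
{
    /* e.g. preset_priority == 5: units with priority 0..5 are accepted,
     * units with priority >= 6 are discarded. */
    return unit->priority_id <= preset_priority;
}
```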
The bitstream received by the selective receiving unit 105 is then input to the picture information extracting unit 110.
The picture information extracting unit 110 extracts anchor picture flag information (anchor_pic_flag) (step S110).
The picture information extracting unit 110 then determines whether the current picture is an anchor picture according to the anchor picture flag information (step S120).
If the current picture is an anchor picture ('No' in step S120), the global motion vector extracting unit 122 extracts the global motion vector of the current picture (anchor picture), and the process then ends (step S130).
If the current picture is not an anchor picture ('Yes' in step S120), the global motion vector generating unit 124 searches for the neighboring anchor pictures adjacent to the current picture and then extracts the global motion vectors GDV_A and GDV_B of the found neighboring anchor pictures (step S140).
If necessary, the global motion vector generating unit 124 calculates the picture order counts (POC) of the current picture and the neighboring anchor pictures (POCcur, POC_A and POC_B) (step S150).
Next, using the global motion vectors GDV_A and GDV_B extracted in step S140 (and the picture order counts POCcur, POC_A and POC_B calculated in step S150), the global motion vector generating unit 124 generates the global motion vector GDVcur of the current picture (step S160). In step S160, the following various methods can be used to generate the global motion vector GDVcur of the current picture.
First, referring to Formula 1, the global motion vector GDVcur of the current picture is generated by multiplying the global motion vector GDVprev of an anchor picture by a constant (c).
The global motion vector GDVprev of the anchor picture can be: 1) the most recently extracted global motion vector of an anchor picture; or 2) the global motion vector (GDV_B or GDV_A) of the temporally following or temporally preceding neighboring anchor picture. In this case, the picture order counts (POCcur, POC_A and POC_B) can be considered in order to decide 'preceding' or 'following', by which the present invention is not limited.
The constant (c) can be a preset value or a value calculated using the picture order counts (e.g., POCcur and POC_B).
Hence, the method of Formula 1 is advantageous in terms of computation amount and data storage capacity.
[Formula 1]
GDVcur = c * GDVprev
Second, referring to Formula 2, the proximity between the current picture and the neighboring anchor pictures (POCcur - POC_A or POC_B - POC_A) is calculated using the picture order counts (POCcur, POC_A and POC_B) calculated in step S150. The global motion vector GDVcur of the current picture is then generated using this proximity and the global motion vectors GDV_A and GDV_B of the neighboring anchor pictures extracted in step S140.
If the difference between the value of the global motion vector GDV_A of the temporally preceding neighboring anchor picture and the value of the global motion vector GDV_B of the temporally following neighboring anchor picture is considerable, this method can calculate the global motion vector of the current picture more accurately.
[Formula 2]
GDVcur = GDV_A + [(POCcur - POC_A) / (POC_B - POC_A)] * (GDV_B - GDV_A)
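As a rough, non-normative illustration of how the two derivations above could be computed (the function names, the integer vector type and the rounding behavior of the interpolation are assumptions, not taken from the patent):

```c
/* Illustrative sketch of Formula 1 and Formula 2 described above.
 * GDVs are treated as 2-component integer vectors; integer division is used
 * for the interpolation, which is an assumption rather than a requirement. */
typedef struct { int x, y; } Gdv;

/* Formula 1: GDVcur = c * GDVprev */
Gdv derive_gdv_scaled(Gdv gdv_prev, int c)
{
    Gdv cur = { c * gdv_prev.x, c * gdv_prev.y };
    return cur;
}

/* Formula 2: GDVcur = GDV_A + ((POCcur - POC_A) / (POC_B - POC_A)) * (GDV_B - GDV_A) */
Gdv derive_gdv_interpolated(Gdv gdv_a, Gdv gdv_b, int poc_cur, int poc_a, int poc_b)
{
    Gdv cur = gdv_a;
    if (poc_b != poc_a) {   /* guard against identical POCs */
        cur.x = gdv_a.x + (poc_cur - poc_a) * (gdv_b.x - gdv_a.x) / (poc_b - poc_a);
        cur.y = gdv_a.y + (poc_cur - poc_a) * (gdv_b.y - gdv_a.y) / (poc_b - poc_a);
    }
    return cur;
}
```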
So far, the concept of the global motion vector, its various transmission methods and its derivation have been explained in the above description.
In the following description, the motion skip mode that makes use of the global motion vector is explained in detail.
2. Motion skip mode
First of all, if the motion information required for inter-prediction (e.g., block type, reference picture information, motion vector, etc.) is not transmitted, the motion skip mode according to the present invention enables the processor itself to generate the motion information for the current picture by using motion information of a different picture. In particular, since inter-view pictures are generated from the same objects with only the camera positions changed, they are highly similar to one another, so the motion information of the current picture is very similar to the motion information of another view. Hence, in the case of a multi-view video signal, the motion skip mode of the present invention is advantageous.
The motion skip mode of the present invention is explained in detail with reference to the accompanying drawings below.
FIG. 11 is a conceptual diagram for explaining a video signal processing method according to another embodiment of the present invention, FIG. 12 is a block diagram of a video signal processing apparatus according to another embodiment of the present invention, FIG. 13 is a detailed block diagram of the motion information obtaining unit shown in FIG. 12, FIG. 14 is a flowchart of a video signal processing method according to another embodiment of the present invention, and FIG. 15 is a detailed flowchart of the motion information generating step (step S300) shown in FIG. 14.
First of all, the concept of the motion skip mode of the present invention is explained with reference to FIG. 11 below.
Referring to FIG. 11, if the current block (current macroblock) is in motion skip mode, a corresponding block (or reference block) of a neighboring view is searched for using the global motion vector GDVcur of the current picture, and the motion information of the corresponding block is then obtained.
Referring to FIG. 12 and FIG. 13, a video signal processing apparatus 200 according to another embodiment of the present invention includes a selective receiving unit 205, a motion skip determining unit 210, a motion information obtaining unit 220, a motion compensating unit 230 and a block reconstructing unit (not shown). The video signal processing apparatus 200 can be connected to a global motion vector obtaining unit 320. In this case, the motion information obtaining unit 220 includes a motion information extracting unit 222 and a motion information generating unit 224. The selective receiving unit 205 has the same configuration as the former selective receiving unit 105. The bitstream received by the selective receiving unit 205 is input to the motion skip determining unit 210.
The motion skip determining unit 210 extracts motion skip information (motion_skip_flag) and the like to determine whether the current block is in motion skip mode. Details of this will be explained in section '2.1 Motion skip mode determining process' with reference to FIG. 14, FIG. 16 and FIG. 17.
If the current block is not in motion skip mode, the motion information extracting unit 222 of the motion information obtaining unit 220 extracts the motion information for the current block.
If the current block is in motion skip mode, the motion information generating unit 224 skips the extraction of the motion information, searches for the corresponding block, and then obtains the motion information of the corresponding block. Details of this will be explained in section '2.2 Motion information generating process in motion skip mode' with reference to FIG. 15.
Using the motion information obtained by the motion information obtaining unit 220 (e.g., motion vector (mvLX), reference picture index (refIdxLX)), the motion compensating unit 230 generates a predicted value of the current block by performing motion compensation.
The block reconstructing unit reconstructs the current block using the predicted value of the current block.
The global motion vector obtaining unit 320 obtains the global motion vector GDVcur of the current picture to which the current block belongs. The global motion vector obtaining unit 320 has the same configuration as the former global motion vector obtaining unit 120 shown in FIG. 9, so a description of the global motion vector obtaining unit 320 is omitted in the following description.
2.1 Motion skip mode determining process
The process by which the motion skip determining unit 210 determines whether the current block is in motion skip mode is explained with reference to FIG. 12 and FIG. 14.
First of all, motion skip index information (motion_skip_idc) is extracted from the slice layer (S210). In this case, the motion skip index information (motion_skip_idc) indicates whether any of the blocks belonging to the slice (picture or sequence) uses the motion skip mode. If the motion skip index information (motion_skip_idc) is '0', none of the blocks belonging to the slice uses the motion skip mode, and there is therefore no block corresponding to the motion skip mode. In that case it is unnecessary to extract the motion skip flag information (motion_skip_flag), which indicates in the macroblock layer whether the motion skip mode is used in each block. Hence, if slices consisting of blocks that do not use the motion skip mode are frequently generated, including the motion skip index information in the slice layer (or in the picture parameter set (PPS) or the sequence parameter set (SPS)) is helpful for the compression ratio or the computation amount.
Next, attribute information for the current block including anchor picture flag information (anchor_pic_flag), or attribute information for the current picture, is extracted (S220). In this case, the anchor picture flag information (anchor_pic_flag) indicates whether the current picture is an anchor picture. If the current picture is an anchor picture, it is difficult to use the motion skip mode; therefore, if the current picture is an anchor picture, it is also unnecessary to extract the motion skip flag information.
The view dependency of a non-anchor picture (view_dependency_non_anchor) at the view to which the current block belongs is determined (step S230). The view dependency (view_dependency) of a non-anchor picture indicates whether the current picture depends on a different view. Since the current picture can refer to pictures of other views, this means that the current picture cannot be decoded before the other views are decoded.
The view dependency can be determined according to sequence parameter set extension information. The process of determining the view dependency using the sequence parameter set extension information can be performed before the parsing process of the macroblock layer.
Meanwhile, the view dependency of a non-anchor picture (view_dependency_non_anchor) can be determined according to the number information (num_non_anchor_refs_lX) and the view ID information (non_anchor_ref_lX) of the inter-view references. If no view dependency exists for the non-anchor picture, the motion information of a neighboring view has not been decoded before the current picture (non-anchor picture) is decoded (excluding, however, the multi-loop case). Therefore, since the motion information of the neighboring view cannot be used to decode the current picture, it can be agreed not to use the motion skip mode if the view dependency of the non-anchor picture does not exist. Accordingly, if such an agreement is made, it is unnecessary to extract the motion skip flag information.
Next, the view dependency of an anchor picture (view_dependency_anchor) at the view to which the current block belongs is determined (S240). The view dependency (view_dependency) of an anchor picture indicates whether a neighboring view exists for the current picture, which means that a disparity vector (e.g., a global motion vector GDV_A or GDV_B) representing the disparity between the current view and the neighboring view exists.
The view dependency of an anchor picture (view_dependency_anchor) can be determined according to the number information (num_anchor_refs_lX) and the view ID information (anchor_ref_lX) of the inter-view references. If no view dependency exists for the anchor picture, the global motion vector (GDV_A or GDV_B) of the anchor picture cannot exist. In particular, since the global motion vector required to search for the corresponding block of the current block is not transmitted, it can be agreed not to use the motion skip mode if the view dependency of the anchor picture does not exist. Accordingly, if such an agreement is made, it is of course unnecessary to extract the motion skip flag information.
Meanwhile, steps S210 to S240 need not all be executed; at least one of the above steps can be executed according to the agreed scheme. For example, if it is agreed that the motion skip mode is used for non-anchor pictures, only step S220 is executed. If the motion skip mode is set to be usable only when the view dependency of a non-anchor picture exists, steps S210 and S230 are executed. In addition, if the motion skip mode is set to be applicable only when the view dependency of an anchor picture exists, steps S220 to S240 are executed. If it is agreed to use the motion skip index information, only step S210 is executed.
After the various types of information for extracting the motion skip flag information have been obtained through steps S210 to S240, if the information meets the prescribed conditions ('Yes' in step S250), the motion skip flag information (motion_skip_flag) is extracted (step S260).
FIGs. 16 and 17 are examples of syntax for the motion skip mode. FIG. 16 shows syntax in which the precondition for the motion skip mode is a non-anchor picture (if(!anchor_pic_flag)). FIG. 17 shows syntax in which the preconditions for the motion skip mode include the view dependency of a non-anchor picture (&& view_dependency_non_anchor_pic), the view dependency of an anchor picture (&& view_dependency_anchor_pic), and a non-anchor picture.
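Purely as a non-normative illustration of the two parsing conditions just described for FIG. 16 and FIG. 17 (the helper function, the Bitstream type and the variant-selector argument are assumptions for illustration):

```c
#include <stdbool.h>

/* Illustrative sketch of the condition under which motion_skip_flag is parsed,
 * following the FIG. 16 / FIG. 17 variants described above. read_flag() is a
 * hypothetical one-bit bitstream reader. */
typedef struct Bitstream Bitstream;
extern bool read_flag(Bitstream *bs);

bool parse_motion_skip_flag(Bitstream *bs,
                            bool anchor_pic_flag,
                            bool view_dependency_non_anchor_pic,
                            bool view_dependency_anchor_pic,
                            bool use_fig17_conditions)
{
    bool motion_skip_flag = false;

    if (!use_fig17_conditions) {
        /* FIG. 16 variant: only non-anchor pictures may use the motion skip mode. */
        if (!anchor_pic_flag)
            motion_skip_flag = read_flag(bs);
    } else {
        /* FIG. 17 variant: additionally require both view dependencies to exist. */
        if (!anchor_pic_flag && view_dependency_non_anchor_pic && view_dependency_anchor_pic)
            motion_skip_flag = read_flag(bs);
    }
    return motion_skip_flag;   /* false: flag absent or motion skip mode not used */
}
```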
Then, whether the current block is in motion skip mode is determined using the motion skip flag information (motion_skip_flag) extracted in step S260 (step S270). FIGs. 18 and 19 are diagrams of examples of the meaning of the motion skip flag information. If the agreement is as shown in FIG. 18, step S270 only determines whether the current block is in motion skip mode. If the agreement is as shown in FIG. 19, whether the direction of the corresponding block is spatially forward or spatially backward is further determined. The present invention is not limited to the forms shown in FIG. 18 and FIG. 19.
As a result of the determination in step S270, if the current block is determined not to be in motion skip mode ('No' in step S270), the motion information for the current block is extracted (step S280). In this case, the motion information includes a macroblock type (mb_type), a reference picture index (refIdxLX), a motion vector (mvLX) and the like, by which the present invention is not limited. If the current block is in motion skip mode ('Yes' in step S270), the motion information generating process is executed (step S300). Details of step S300 are explained in the following section with reference to FIG. 15.
2.2 Motion information generating process in motion skip mode
If the current block is determined to be in motion skip mode, the motion information generating process begins.
Referring to FIG. 13 and FIG. 15, if the current block is in motion skip mode, the motion information skipping unit 224a of the motion information generating unit 224 skips the extraction of the motion information (step S310).
Next, the corresponding block searching unit 224b searches for a corresponding block among neighboring blocks. To search for the corresponding block, the corresponding block searching unit 224b first determines the neighboring view in which the corresponding block exists (step S320). In this case, the neighboring view is a view different from the view to which the current block belongs, and the motion information contained in a picture of this view is suitable to be used as the motion information for the current block.
The view identifier (view_id) of the neighboring view of the current block can be explicitly transmitted via a specific variable (motion_skip_view_id or interview_base_view) included in the slice or macroblock layer. Alternatively, instead of being explicitly transmitted, the view identifier of the neighboring view of the current block can be estimated according to the view dependency of the current picture (view_dependency_non_anchor) described above. For example, the identifier can be determined as the view closest to the current view (anchor_refL0[view_id][0]) among the views (anchor_refL0[view_id][i]) of the pictures referred to for inter-view prediction of the anchor pictures of the current view (view_id), by which the present invention is not limited.
Meanwhile, if the motion skip flag information extracted in step S260 is configured as shown in FIG. 19, the motion skip flag information can be used to decide whether the forward view or the backward view is selected from among the views close to the current view.
The corresponding block searching unit 224b obtains the global motion vector (GDVcur) of the current block to search for the corresponding block (step S330). The global motion vector (GDVcur) of the current block obtained in step S330 can be calculated by the global motion vector obtaining unit 120 of the video signal processing apparatus 100 according to an embodiment of the present invention from the motion vectors (GDV_A, GDV_B) of the neighboring anchor pictures, or can correspond to a preset value (GDV_predetermined) or a value calculated by a prescribed method, by which the present invention is not limited.
The corresponding block searching unit 224b determines the corresponding block using the neighboring view determined in step S320 and the global motion vector obtained in step S330 (step S340). In particular, the corresponding block (mbAddrNeighbor) is located at a position shifted by the global motion vector (globalDisparityMbLX) from the same position (mbAddr) as the current block. The address of the corresponding block can be calculated by Formula 3.
[Formula 3]
mbAddrNeighbor = mbAddr + globalDisparityMbLX[1] * PicWidthInMbs + globalDisparityMbLX[0]
Referring to Formula 3, the address of the corresponding block (mbAddrNeighbor) is the result of adding the address value of the global motion vector (globalDisparityMbLX) to the address (mbAddr) of the current block. In this case, the address value of the global motion vector (globalDisparityMbLX[1] * PicWidthInMbs + globalDisparityMbLX[0]) is calculated by multiplying the y-axis component (globalDisparityMbLX[1]) by the number of blocks in the horizontal direction within the picture (PicWidthInMbs) and then adding the x-axis component (globalDisparityMbLX[0]) to the multiplication result, thereby converting the 2-dimensional vector into a 1-dimensional index. In this case, the corresponding block is preferably determined as the block located in the picture of the neighboring view having the same temporal instance as the current block.
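As a simple illustration of the address calculation in Formula 3 (a sketch only; the function name and parameter layout are assumptions):

```c
/* Illustrative sketch of Formula 3 described above: the current macroblock
 * address plus a macroblock-unit global disparity vector, flattened to the
 * raster-scan address of the corresponding block in the neighboring view. */
int corresponding_mb_addr(int mbAddr,                       /* address of the current macroblock */
                          const int globalDisparityMbLX[2], /* [0] = x component, [1] = y component */
                          int PicWidthInMbs)                /* picture width in macroblocks */
{
    /* The 2-D disparity is converted to a 1-D offset: y * width + x. */
    int offset = globalDisparityMbLX[1] * PicWidthInMbs + globalDisparityMbLX[0];
    return mbAddr + offset;
}
```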
The motion information obtaining unit 224c extracts the motion information of the corresponding block (S350). The motion information of the corresponding block can be the information extracted by the motion information extracting unit 222. If the motion information of the corresponding block cannot be extracted, for example, if the corresponding block performs only intra-prediction without performing inter-prediction, the neighboring view determined in step S320 is changed and the motion information of the changed corresponding block is extracted.
For example, if the forward view closest to the current view was determined as the neighboring view in step S320, the backward view closest to the current view is determined as the neighboring view instead.
Finally, the motion information obtaining unit 224c obtains the motion information for the current block using the motion information extracted in step S350 (S360). In this case, the motion information of the corresponding block can be used as the motion information for the current block, by which the present invention is not limited.
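Putting steps S310 to S360 together, a rough, non-normative sketch of the motion skip derivation might look as follows; all type and function names are hypothetical, and copying the corresponding block's motion information verbatim is only one of the options mentioned above:

```c
#include <stdbool.h>

/* Illustrative sketch of steps S310-S360 described above. */
typedef struct { int mb_type; int refIdxLX; int mvLX[2]; bool is_intra; } MotionInfo;

/* Hypothetical helpers standing in for the decoder's own routines. */
extern int  corresponding_mb_addr(int mbAddr, const int gdv[2], int PicWidthInMbs);
extern bool get_motion_info(int view_id, int mbAddr, MotionInfo *out);   /* false if unavailable */

bool derive_motion_skip_info(int cur_mbAddr, int PicWidthInMbs,
                             const int gdv_cur[2],
                             int forward_view_id, int backward_view_id,
                             MotionInfo *cur_info)
{
    /* S310: extraction of the current block's own motion information is skipped. */

    /* S320-S340: locate the corresponding block in the neighboring view. */
    int corr_addr = corresponding_mb_addr(cur_mbAddr, gdv_cur, PicWidthInMbs);

    /* S350: try the forward neighboring view first; fall back to the backward view
     * if the corresponding block carries no inter-prediction motion information. */
    MotionInfo corr;
    if (!get_motion_info(forward_view_id, corr_addr, &corr) || corr.is_intra) {
        if (!get_motion_info(backward_view_id, corr_addr, &corr) || corr.is_intra)
            return false;   /* no usable motion information found */
    }

    /* S360: reuse the corresponding block's motion information for the current block. */
    *cur_info = corr;
    return true;
}
```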
Industrial Applicability
Accordingly, the present invention is applicable to multi-view video encoding and decoding. While the present invention has been described and illustrated herein with reference to the preferred embodiments thereof, it is not intended that the invention be limited thereto. It will be apparent to those skilled in the art that various modifications and variations can be made herein without departing from the spirit and scope of the invention as set forth in the appended claims, and such modifications and variations fall within the scope of protection of the present invention. The scope of protection of the present invention is defined by the appended claims.

Claims (20)

1. A method for processing a video signal, comprising:
extracting attribute information for a current block or attribute information for a current picture from the video signal;
extracting motion skip information for the current block; and
generating motion information for the current block using motion information for a reference block according to the attribute information and the motion skip information.
2. the method for claim 1, wherein, the attribute information of described current block or the attribute information of described present image further comprise anchor logos information, it indicates described current block whether to belong to the anchor image or whether described present image is the anchor image, and if described current block belongs to non-anchor image or if described present image is non-anchor image, then carry out the extraction of described motion skip information.
3. the method for claim 1, further comprise: use the image information on first territory that described current block is positioned at, estimate image on first territory and the viewpoint correlation information between the image on second territory, wherein, carry out the generation of described movable information according to described viewpoint correlation information.
4. method as claimed in claim 3, wherein, described first territory is on time orientation, and described second territory is on direction between viewpoint.
5. method as claimed in claim 3, wherein, described viewpoint correlation information comprises at least one in the viewpoint correlation information between viewpoint correlation information between the anchor image and non-anchor image.
6. method as claimed in claim 5 wherein, according to correlation information between described non-anchor image, exists under the situation of viewpoint correlation between non-anchor image on first territory that described current block was positioned at, and carries out the extraction of described motion skip information.
7. method as claimed in claim 5 wherein, according to correlation information between described anchor image, exists under the situation of viewpoint correlation between the anchor image on first territory that described current block was positioned at, and carries out the extraction of described motion skip information.
8. method as claimed in claim 3 wherein, uses the information of number of the reference field that comprises in the sequence parameter set extension information and the identifying information of reference field to estimate described viewpoint correlation information.
9. method as claimed in claim 3 wherein, according to image on first territory that described current block was positioned at and the viewpoint correlation between the image on second territory, is determined the position of described reference block on second territory.
10. the adjacent image that the method for claim 1, wherein includes described reference block is positioned at the identical time with described present image on first territory.
Search for described reference block 11. the method for claim 1, wherein use the global motion vector of described present image.
12. method as claimed in claim 11 wherein, uses the global motion vector of described anchor image to generate the global motion vector of described present image.
13. method as claimed in claim 11 wherein, generates the global motion vector of described present image according to the adjacent degree between described present image and the described anchor image.
14. method as claimed in claim 13 wherein, number is determined described adjacent degree according to the image sequence of the image sequence of described present image number and described anchor image.
15. method as claimed in claim 11, wherein, the global motion vector of described present image is the global motion vector that extracts recently.
16. the method for claim 1, wherein described movable information comprises in block type information, motion vector and the reference picture index at least one.
17. the method for claim 1, wherein described vision signal is used as broadcast singal and receives.
18. the method for claim 1, wherein described vision signal is received by Digital Media.
19. a computer-readable medium, it comprises the program that wherein records execution the method for claim 1.
20. a device that is used to handle vision signal comprises:
The attribute information extraction unit is used to extract the attribute information of current block or the attribute information of present image;
The motion skip judging unit is used to extract the motion skip information of described current block; And
The movable information generation unit is used for according to described attribute information and described motion skip information, and the movable information of use reference block generates the movable information of described current block.
CN200780023130.5A 2006-06-19 2007-06-19 Method and apparatus for processing a vedeo signal Expired - Fee Related CN101473655B (en)

Applications Claiming Priority (22)

Application Number Priority Date Filing Date Title
US81456106P 2006-06-19 2006-06-19
US60/814,561 2006-06-19
US83059906P 2006-07-14 2006-07-14
US83068506P 2006-07-14 2006-07-14
US60/830,685 2006-07-14
US60/830,599 2006-07-14
US83215306P 2006-07-21 2006-07-21
US60/832,153 2006-07-21
US84215106P 2006-09-05 2006-09-05
US60/842,151 2006-09-05
US85270006P 2006-10-19 2006-10-19
US60/852,700 2006-10-19
US85377506P 2006-10-24 2006-10-24
US60/853,775 2006-10-24
US86831006P 2006-12-01 2006-12-01
US60/868,310 2006-12-01
US90780707P 2007-04-18 2007-04-18
US60/907,807 2007-04-18
US92460207P 2007-05-22 2007-05-22
US60/924,602 2007-05-22
US60/929,046 2007-06-08
PCT/KR2007/002964 WO2007148906A1 (en) 2006-06-19 2007-06-19 Method and apparatus for processing a vedeo signal

Publications (2)

Publication Number Publication Date
CN101473655A true CN101473655A (en) 2009-07-01
CN101473655B CN101473655B (en) 2011-06-08

Family

ID=40829601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200780023130.5A Expired - Fee Related CN101473655B (en) 2006-06-19 2007-06-19 Method and apparatus for processing a vedeo signal

Country Status (1)

Country Link
CN (1) CN101473655B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103329525A (en) * 2011-01-22 2013-09-25 高通股份有限公司 Combined reference picture list construction for video coding
CN104541510A (en) * 2012-04-15 2015-04-22 三星电子株式会社 Inter prediction method in which reference picture lists can be changed and apparatus for same
CN104782123A (en) * 2012-10-22 2015-07-15 数码士控股有限公司 Method for predicting inter-view motion and method for determining inter-view merge candidates in 3d video
CN104969551A (en) * 2012-12-07 2015-10-07 高通股份有限公司 Advanced residual prediction in scalable and multi-view video coding
CN105122796A (en) * 2013-04-12 2015-12-02 联发科技(新加坡)私人有限公司 Method of error-resilient illumination compensation for three-dimensional video coding
CN105264894A (en) * 2013-04-05 2016-01-20 三星电子株式会社 Method for determining inter-prediction candidate for interlayer decoding and encoding method and apparatus
CN106210737A (en) * 2010-10-06 2016-12-07 株式会社Ntt都科摩 Image prediction/decoding device, image prediction decoding method
CN106464898A (en) * 2014-03-31 2017-02-22 英迪股份有限公司 Method and device for deriving inter-view motion merging candidate
CN106658225A (en) * 2016-10-31 2017-05-10 广州日滨科技发展有限公司 Video extension code setting and video playing method and system

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106210737A (en) * 2010-10-06 2016-12-07 株式会社Ntt都科摩 Image prediction/decoding device, image prediction decoding method
CN106210737B (en) * 2010-10-06 2019-05-21 株式会社Ntt都科摩 Image prediction/decoding device, image prediction decoding method
CN103329525A (en) * 2011-01-22 2013-09-25 高通股份有限公司 Combined reference picture list construction for video coding
CN103329525B (en) * 2011-01-22 2016-08-24 高通股份有限公司 Combined reference just list construction for video coding
CN104541510A (en) * 2012-04-15 2015-04-22 三星电子株式会社 Inter prediction method in which reference picture lists can be changed and apparatus for same
CN104541510B (en) * 2012-04-15 2018-08-24 三星电子株式会社 The inter-frame prediction method and its equipment that reference picture list can be changed
CN104782123A (en) * 2012-10-22 2015-07-15 数码士控股有限公司 Method for predicting inter-view motion and method for determining inter-view merge candidates in 3d video
CN104969551B (en) * 2012-12-07 2018-11-23 高通股份有限公司 Advanced residual prediction in the decoding of scalable and multi-angle video
CN104969551A (en) * 2012-12-07 2015-10-07 高通股份有限公司 Advanced residual prediction in scalable and multi-view video coding
US10334259B2 (en) 2012-12-07 2019-06-25 Qualcomm Incorporated Advanced residual prediction in scalable and multi-view video coding
US9948939B2 (en) 2012-12-07 2018-04-17 Qualcomm Incorporated Advanced residual prediction in scalable and multi-view video coding
CN105264894A (en) * 2013-04-05 2016-01-20 三星电子株式会社 Method for determining inter-prediction candidate for interlayer decoding and encoding method and apparatus
CN105122796A (en) * 2013-04-12 2015-12-02 联发科技(新加坡)私人有限公司 Method of error-resilient illumination compensation for three-dimensional video coding
CN105122796B (en) * 2013-04-12 2019-03-29 寰发股份有限公司 The method of illuminance compensation in three-dimensional or multi-view coded and decoding system
CN106464898A (en) * 2014-03-31 2017-02-22 英迪股份有限公司 Method and device for deriving inter-view motion merging candidate
CN106464898B (en) * 2014-03-31 2020-02-11 英迪股份有限公司 Method and apparatus for deriving inter-view motion merge candidates
US10616602B2 (en) 2014-03-31 2020-04-07 Intellectual Discovery Co., Ltd. Method and device for deriving inter-view motion merging candidate
US11729421B2 (en) 2014-03-31 2023-08-15 Dolby Laboratories Licensing Corporation Method and device for deriving inter-view motion merging candidate
CN106658225A (en) * 2016-10-31 2017-05-10 广州日滨科技发展有限公司 Video extension code setting and video playing method and system

Also Published As

Publication number Publication date
CN101473655B (en) 2011-06-08

Similar Documents

Publication Publication Date Title
CN101473655B (en) Method and apparatus for processing a vedeo signal
EP2030450B1 (en) Method and apparatus for processing a video signal
US11831898B2 (en) Moving picture coding device, moving picture coding method, moving picture coding program, moving picture decoding device, moving picture decoding method, and moving picture decoding program
CN101248671B (en) Method of estimating disparity vector, apparatus for encoding and decoding multi-view picture
CN101491096B (en) Signal processing method and apparatus thereof
EP3348060B1 (en) Method and device for encoding a light field based image, and corresponding computer program product
CN113545081B (en) Method and apparatus for processing video data in video codec system
US20170150167A1 (en) Moving picture decoding device, moving picture decoding method, and moving picture decoding program
KR100287211B1 (en) Bidirectional motion estimation method and system
CN101617537A (en) Method and apparatus for processing a video signal
US20100201870A1 (en) System and method for frame interpolation for a compressed video bitstream
KR20150114988A (en) Method and apparatus of inter-view candidate derivation for three-dimensional video coding
CN110662077B (en) Symmetric bi-directional prediction modes for video coding and decoding
US10110923B2 (en) Method of reference view selection for 3D video coding
CN104798375B (en) For multiple view video coding or decoded method and device
CN105453561A (en) Method of deriving default disparity vector in 3D and multiview video coding
WO2014166063A1 (en) Default vector for disparity vector derivation for 3d video coding
CN104429078A (en) Method and device for processing video signal
WO2007037645A1 (en) Method of estimating disparity vector using camera parameters, apparatus for encoding and decoding multi-view picture using the disparity vectors estimation method, and computer-readable recording medium storing a program for executing the method
CN105122792A (en) Method of inter-view residual prediction with reduced complexity in three-dimensional video coding
CN105432084A (en) Method of reference view selection for 3d video coding

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110608

Termination date: 20210619
