MXPA00008676A - Method and apparatus for padding interlaced macroblock texture information - Google Patents

Method and apparatus for padding interlaced macroblock texture information

Info

Publication number
MXPA00008676A
MXPA00008676A MXPA/A/2000/008676A
Authority
MX
Mexico
Prior art keywords
block
field
texture
undefined
pixels
Prior art date
Application number
MXPA/A/2000/008676A
Other languages
Spanish (es)
Inventor
Sang Hoon Lee
Original Assignee
Daewoo Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Daewoo Electronics Co Ltd filed Critical Daewoo Electronics Co Ltd
Publication of MXPA00008676A publication Critical patent/MXPA00008676A/en

Abstract

A method for padding interlaced texture information of a previous frame to perform a motion estimation detects whether each texture macroblock of the previous frame is a boundary block or not. After the undefined texture pixels of the boundary block are extrapolated from the defined texture pixels thereof by sequentially using a horizontal repetitive padding, a transparent row padding and a transparent field padding, an undefined adjacent block is expanded based on the extrapolated boundary block.

Description

METHOD AND APPARATUS FOR PADDING INTERLACED MACROBLOCK TEXTURE INFORMATION

TECHNICAL FIELD OF THE INVENTION

The present invention relates to a method and apparatus for padding interlaced macroblock texture information; and, more particularly, to a method and apparatus capable of padding texture information in a previous frame on a macroblock-by-macroblock basis in order to perform a motion estimation in an interlaced coding technique.

BACKGROUND ART

In digital television systems such as videotelephone, teleconference and high-definition television systems, a large amount of digital data is needed to define each video frame signal, since a video line signal in the video frame signal comprises a sequence of digital data referred to as pixel values. Since, however, the available frequency bandwidth of a conventional transmission channel is limited, in order to transmit the large amount of digital data therethrough, it is necessary to compress or reduce the volume of data through the use of various data compression techniques, especially in the case of such low bit-rate video signal encoders as videotelephone and teleconference systems.

One of such techniques for encoding video signals for a low bit-rate encoding system is the so-called object-oriented analysis-synthesis coding technique, wherein an input video image is divided into objects and three sets of parameters for defining the motion, contour and pixel data of each object are processed through different encoding channels.

One example of such object-oriented coding schemes is the so-called MPEG (Moving Picture Experts Group) phase 4 (MPEG-4), which is designed to provide an audio-visual coding standard for allowing content-based interactivity, improved coding efficiency and/or universal accessibility in such applications as low bit-rate communication, interactive multimedia (e.g., games, interactive TV, etc.) and area surveillance (see, for instance, MPEG-4 Video Verification Model Version 7.0, International Organization for Standardization, ISO/IEC JTC1/SC29/WG11 MPEG97/N1642, April 1997).

According to MPEG-4, an input video image is divided into a plurality of video object planes (VOP's), which correspond to entities in a bitstream that a user can access and manipulate. A VOP can be referred to as an object and is represented by a bounding rectangle whose width and height may be the smallest multiples of 16 pixels (a macroblock size) surrounding each object, so that the encoder may process the input video image on a VOP-by-VOP basis, i.e., an object-by-object basis.

Each VOP in MPEG-4 is described by three sets of parameters defining shape information, motion information and texture information, i.e., color information, wherein the shape information, represented by, e.g., a binary mask, corresponds to the contour of each VOP, i.e., the boundary of an object; the motion information represents spatial correlation between a current frame and a corresponding previous frame; and the texture information consists of luminance and chrominance data.

Meanwhile, since the texture information of two sequentially input video images has temporal redundancy by nature, it is desirable to get rid of the temporal redundancy by using a motion estimation and compensation technique in order to efficiently encode the texture information in MPEG-4.
A progressive image padding technique, i.e., a conventional repetitive padding technique, is applied to the VOP on a frame-by-frame basis prior to the motion estimation and compensation. In principle, the repetitive padding technique fills the transparent area outside the object of the VOP by repeating boundary pixels of the object, wherein the boundary pixels are located on the contour of the object. It is preferable to perform the repetitive padding with respect to reconstructed shape information, which is generated by encoding the shape information and then decoding the encoded shape information in a reverse order to the coding scheme. Each boundary pixel is repeated towards the outside of the object. If a transparent pixel in the transparent area outside the object can be filled by the repetition of more than one boundary pixel, the average of the repeated values is preferably taken as a padding value. This progressive padding process is generally divided into three steps: a horizontal repetitive padding; a vertical repetitive padding; and an exterior padding (see MPEG-4 Video Verification Model Version 7.0).

While the progressive padding technique is employed to perform the motion estimation and compensation on the basis of each frame generated every 1/30 second as described above, an interlaced padding technique is needed to perform the motion estimation and compensation on a field-by-field basis, wherein two fields, i.e., a top field and a bottom field, are combined to be reconstructed as a frame, i.e., an interlaced texture image. The interlaced coding technique, performing the motion estimation and compensation on a field-by-field basis, may be suitably employed to accurately encode interlaced texture information containing either fast motion such as sports, horse racing or car racing, or a low field correlation, i.e., a low temporal correlation between the top and bottom fields. However, if the padding for each field block is carried out independently, without considering the field correlation, as in the progressive padding technique, there may be pixels which would not be padded based only on the object pixels of their own field but which would be padded if the field correlation were taken into account, the field correlation involving two consecutive fields, i.e., the top and the bottom fields, contained in the interlaced texture information.

DESCRIPTION OF THE INVENTION

It is, therefore, an object of the invention to provide a method and apparatus for padding interlaced texture information, with its field correlation taken into account, on a texture macroblock-by-texture macroblock basis, to thereby perform a motion estimation and compensation.
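As a concrete illustration of the horizontal repetitive padding step described above, the following Python/NumPy sketch pads one row of texture pixels from its boundary pixels. The function name and the boolean-mask convention (True for defined object pixels) are assumptions for illustration, not part of the MPEG-4 specification; a fully transparent row is deliberately left untouched, since such rows are handled by the later padding stages.

```python
import numpy as np

def horizontal_repetitive_pad(row, mask):
    """Pad the undefined pixels of one row by repeating the nearest
    boundary pixel leftward/rightward; where a pixel can be reached
    from both sides, the average of the two repeated values is used."""
    out = row.astype(np.float64).copy()
    if not mask.any():                      # fully transparent row:
        return out, False                   # left for later padding stages
    fill_l = np.full(len(row), np.nan)      # value propagated from the left
    fill_r = np.full(len(row), np.nan)      # value propagated from the right
    last = np.nan
    for i in range(len(row)):               # left-to-right sweep
        if mask[i]:
            last = out[i]
        else:
            fill_l[i] = last
    last = np.nan
    for i in range(len(row) - 1, -1, -1):   # right-to-left sweep
        if mask[i]:
            last = out[i]
        else:
            fill_r[i] = last
    for i in np.flatnonzero(~mask):
        vals = [v for v in (fill_l[i], fill_r[i]) if not np.isnan(v)]
        out[i] = sum(vals) / len(vals)      # average if reachable from both sides
    return out, True
```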
According to the invention, there is provided a method for padding interlaced texture information of a previous frame on a texture macroblock-by-texture macroblock basis to perform a motion estimation, wherein each texture macroblock of the previous frame has MxN texture pixels, M and N being positive even integers, the method comprising the steps of: (a) detecting whether each texture macroblock of the previous frame is a boundary block or not, wherein the boundary block has one or more defined texture pixels and one or more undefined texture pixels; (b) extrapolating the undefined texture pixels of the boundary block from the defined texture pixels thereof to generate an extrapolated boundary block, and padding, if either of the two field blocks has no defined texture pixel, the undefined field block based on the other field block of the boundary block, wherein the undefined field block represents a field block having no defined texture pixel; and (c) expanding an undefined adjacent block based on the extrapolated boundary block, wherein the undefined adjacent block is adjacent to the extrapolated boundary block and has only undefined texture pixels.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other objects and features of the present invention will become apparent from the following description of preferred embodiments given in conjunction with the accompanying drawings, in which: Figure 1 shows a schematic block diagram of a conventional apparatus for encoding interlaced texture information of an object in a video signal; Figure 2 presents a flow chart illustrating the operation of the previous frame processing circuit shown in Figure 1 in accordance with the present invention; Figures 3A and 3B describe an exemplary boundary macroblock and the top and bottom boundary field blocks of the boundary macroblock, respectively; Figures 3C to 3E represent a padding procedure of the top and bottom boundary field blocks sequentially performed in accordance with the present invention; and Figure 4 illustrates a plurality of undefined adjacent blocks for an exemplary VOP and the padding directions for each undefined adjacent block.

MODE FOR CARRYING OUT THE INVENTION

Referring to Figure 1, there is illustrated a schematic block diagram of an apparatus for encoding interlaced texture information of a current frame, wherein the interlaced texture information may be referred to as a current VOP in MPEG-4. The interlaced texture information is divided into a plurality of texture macroblocks to be applied to a frame division circuit 102, wherein each texture macroblock has MxN texture pixels, M and N being positive even integers typically ranging between 4 and 16. The frame division circuit 102 divides each texture macroblock into top and bottom field blocks, wherein the top field block, having M/2xN texture pixels, contains every odd row of each texture macroblock and the bottom field block, having the other M/2xN texture pixels, contains every even row of each texture macroblock. The top and bottom field blocks of each texture macroblock are sequentially provided, as a current top and a current bottom field block, respectively, to a subtractor 104 and a motion estimator 116.
At substantially the same time, previous interlaced texture information is read out from a previous frame processing circuit 114 and provided to the motion estimator 116 and a motion compensator 118, wherein the previous interlaced texture information of a previous frame has previous texture pixels and precedes the interlaced texture information of the current frame by one frame period. The previous frame is also divided into a plurality of search regions and each search region is divided into a top and a bottom search region, wherein the top search region, having P x (M/2 x N) previous texture pixels, contains every odd row of each search region and the bottom search region, having the other P x (M/2 x N) previous texture pixels, contains every even row of each search region, P being a positive integer, typically 2.

The motion estimator 116 determines motion vectors for the current VOP on a field-by-field basis. First, the motion estimator 116 detects a previous texture macroblock in the previous frame for each current top or bottom field block, wherein the previous texture macroblock is located at the same position as each current top or bottom field block. Then, the previous texture macroblock is divided into a previous top and a previous bottom field block; and errors between each current top or bottom field block and the two previous top and bottom field blocks are calculated. Since the top and bottom search regions have a plurality of candidate top and candidate bottom field blocks, including the previous top and previous bottom field blocks, respectively, each current top or bottom field block is displaced on a pixel-by-pixel basis within the top and bottom search regions according to each of the candidate top and candidate bottom field blocks, respectively; and, at each displacement, the two errors between the current top or bottom field block and each of the candidate top and candidate bottom field blocks are compared with each other. As described above, the motion estimator 116 performs the motion estimation of each current top or bottom field block with respect to its previous top and bottom field blocks and selects, as an optimum candidate field block or a most similar field block, the candidate top or bottom field block which yields a minimum error.

Outputs of the motion estimator 116 are a motion vector and a field indication flag, which are provided to the motion compensator 118 and a variable length coding (VLC) circuit 108, wherein the motion vector denotes a displacement between each current top or bottom field block and the optimum candidate field block, and the field indication flag represents whether or not the optimum candidate field block belongs to the top search region. The motion compensator 118 performs motion compensation by displacing the optimum candidate field block by the motion vector toward each current top or bottom field block, based on the field indication flag, to thereby provide the motion-compensated optimum candidate field block, as a predicted top or bottom field block for each current top or bottom field block, to the subtractor 104 and an adder 112.
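The field splitting and the field-by-field search can be sketched as follows. This is a hypothetical rendering in which SAD (sum of absolute differences) stands in for the unspecified "error" measure of the patent, and all function and variable names are illustrative:

```python
import numpy as np

def split_into_fields(macroblock):
    """Split an MxN texture macroblock into its top field block
    (odd-numbered rows: array rows 0, 2, ...) and bottom field block
    (even-numbered rows: array rows 1, 3, ...), each M/2 x N."""
    return macroblock[0::2, :], macroblock[1::2, :]

def find_optimum_field_block(cur_field, top_region, bottom_region):
    """Full search over both field search regions; returns the minimum-SAD
    candidate as (sad, (dy, dx), field_flag)."""
    h, w = cur_field.shape
    best = (np.inf, (0, 0), 'top')
    for flag, region in (('top', top_region), ('bottom', bottom_region)):
        rows, cols = region.shape
        for dy in range(rows - h + 1):          # pixel-by-pixel displacement
            for dx in range(cols - w + 1):
                cand = region[dy:dy + h, dx:dx + w].astype(np.int64)
                sad = np.abs(cur_field.astype(np.int64) - cand).sum()
                if sad < best[0]:
                    best = (sad, (dy, dx), flag)
    return best                                  # motion vector + field flag
```

The returned `flag` plays the role of the field indication flag and `(dy, dx)` that of the motion vector described above.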
The subtractor 104 obtains an error field block by subtracting the predicted top or bottom field block from each current top or bottom field block on a corresponding pixel-by-pixel basis, to provide the error field block to a texture encoding circuit 106. In the texture encoding circuit 106, the error field block may be subjected to an orthogonal transform for removing spatial redundancy thereof and the transform coefficients may then be quantized, to thereby provide the quantized transform coefficients to the VLC circuit 108 and a texture reconstruction circuit 110. Since a conventional orthogonal transform such as the discrete cosine transform (DCT) is performed on a DCT block-by-DCT block basis, each DCT block typically having 8 x 8, i.e., M/2 x N/2, texture pixels, the error field block having 8 x 16 error texture pixels may preferably be divided into two DCT blocks in the texture encoding circuit 106. If necessary, before performing the DCT, each error field block may be DCT-padded based on the shape information or reconstructed shape information of each VOP in order to reduce the higher frequency components which may be generated in the DCT processing. For example, a predetermined value, e.g., '0', may be assigned to the error texture pixels located outside the contour of each VOP.

The VLC circuit 108 performs statistical coding on the quantized transform coefficients fed from the texture encoding circuit 106 and on the field indication flag and the motion vector for each current top or bottom field block fed from the motion estimator 116, by using, e.g., a conventional variable length coding technique, to thereby provide statistically coded data to a transmitter (not shown) for transmission. Meanwhile, the texture reconstruction circuit 110 performs inverse quantization and inverse transform on the quantized transform coefficients to provide a reconstructed error field block, corresponding to the error field block, to the adder 112. The adder 112 combines the reconstructed error field block from the texture reconstruction circuit 110 and the predicted top or bottom field block from the motion compensator 118 on a pixel-by-pixel basis, to thereby provide the combined result, as a reconstructed top or bottom field block for each current top or bottom field block, to the previous frame processing circuit 114. The previous frame processing circuit 114 sequentially modifies the reconstructed top or bottom field blocks based on the shape information or the reconstructed shape information of the current VOP, to thereby provide all the modified top and bottom field blocks, as another previous interlaced texture information for a subsequent current VOP, to the motion estimator 116 and the motion compensator 118.

Referring to Figure 2, there is shown a flow chart illustrating an important aspect of the invention related to the operation of the previous frame processing circuit 114 shown in Figure 1, i.e., explaining a procedure for padding the texture information of a previous frame on a macroblock-by-macroblock basis. At step S201, the reconstructed top or bottom field block for each current top or bottom field block is sequentially received; and, at step S203, exterior pixels in the reconstructed top or bottom field block are removed based on the shape information of the current VOP, wherein the exterior pixels are located outside the contour of the current VOP.
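A minimal sketch of how an 8 x 16 error field block might be split into two 8 x 8 DCT blocks, as described above; it assumes SciPy's type-II DCT, and omits the quantization and the optional DCT padding steps:

```python
import numpy as np
from scipy.fftpack import dct

def encode_error_field_block(err_field):
    """Split an 8x16 error field block into two 8x8 blocks and apply a
    2-D type-II DCT (orthonormal) to each, as a conventional codec would."""
    assert err_field.shape == (8, 16)
    def dct2(b):
        # separable 2-D DCT: transform columns, then rows
        return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
    return dct2(err_field[:, :8]), dct2(err_field[:, 8:])
```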
The reconstructed shape information may be used in place of the shape information in cases where the shape information is encoded according to a coding scheme to generate encoded shape information, the encoded shape information being reconstructed as the reconstructed shape information by being decoded in a reverse order to the coding scheme. While the exterior pixels are removed so as to be set as transparent pixels, i.e., undefined texture pixels, the remaining interior pixels in the reconstructed top or bottom field block are provided as defined texture pixels on a field block-by-field block basis.

At step S204, it is determined whether or not each reconstructed block is located on the contour of the current VOP, wherein each reconstructed block has a reconstructed top field block and its corresponding reconstructed bottom field block. In other words, each reconstructed block is determined to be an interior block, a boundary block or an exterior block, wherein the interior block has only defined texture pixels, the exterior block has only undefined texture pixels, and the boundary block has both defined and undefined texture pixels. If the reconstructed block is determined to be an interior block, at step S210 no padding is performed and the process proceeds to step S211.

If the reconstructed block is a boundary block as illustrated in Figure 3A, at steps S221 to S224 the undefined texture pixels of the boundary block are extrapolated from the defined texture pixels thereof to generate an extrapolated boundary block. First, at step S221, each boundary block is divided into a top and a bottom boundary field block as illustrated in Figure 3B, each boundary field block having M/2 x N texture pixels. At step S222, the undefined texture pixels are padded on a row-by-row basis by using a horizontal repetitive padding technique as illustrated in Figure 3C, to thereby generate a padded row for each row. In other words, the undefined texture pixels are padded by repeating the boundary pixels in the directions of the arrows as illustrated in Figure 3C, wherein each boundary pixel among the defined texture pixels is located on the boundary, i.e., the contour, of the current VOP. If any undefined texture pixel can be padded by the repetition of more than one boundary pixel, the average of the repeated values is used.

If there are one or more transparent rows in each top or bottom boundary field block, at step S223 each transparent row is padded by using one or more nearest padded or defined rows within the corresponding top or bottom boundary field block, wherein a defined row has all of its texture pixels defined. For example, as illustrated in Figure 3D, each undefined texture pixel of the transparent row, shown in the bottom field block, is replaced with an average of two defined or padded texture pixels taken from the nearest padded rows below and above it, i.e., the second and the fourth padded rows in the bottom field block. If the transparent row is located at the highest or the lowest position, i.e., corresponds to the first or the eighth row, each undefined texture pixel thereof is replaced with a defined or padded texture pixel of the nearest defined or padded row.

If only one of the two boundary field blocks is transparent as illustrated in Figure 3B, at step S224 the transparent boundary field block is padded based on the other boundary field block of the boundary block, wherein the transparent boundary field block, i.e., an undefined field block, has no defined texture pixel therein.
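The transparent row padding of step S223 can be sketched as follows, assuming the horizontal repetitive padding of step S222 has already run. The parameter `row_has_defined` (a hypothetical name) marks the rows that had at least one defined texture pixel and were therefore padded at step S222:

```python
import numpy as np

def pad_transparent_rows(field_block, row_has_defined):
    """Fill fully transparent rows of an M/2 x N field block: average the
    nearest padded rows above and below, or copy the single nearest padded
    row when the transparent row sits at the top or bottom of the block."""
    out = field_block.astype(np.float64).copy()
    padded = np.flatnonzero(row_has_defined)   # rows padded at step S222
    for r in np.flatnonzero(~row_has_defined):
        above = padded[padded < r]
        below = padded[padded > r]
        if above.size and below.size:
            out[r] = (out[above[-1]] + out[below[0]]) / 2.0
        elif above.size:                        # transparent row at the bottom edge
            out[r] = out[above[-1]]
        elif below.size:                        # transparent row at the top edge
            out[r] = out[below[0]]
    return out
```

If `padded` is empty, the field block is entirely transparent and the loop leaves it unchanged; that case is handled by step S224 below.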
In other words, if the top field block is transparent, all the undefined texture pixels of the transparent boundary field block, i.e., the top field block, may be replaced with a constant value P as illustrated in Figure 3E, e.g., an average value of the defined texture pixels within the bottom field block. An average value of both the defined and the padded texture pixels within the bottom field block may also be used to fill the transparent field block. If necessary, a middle value 2^(L-1) of all the possible values for a texture pixel may be used based on the channel characteristics, wherein L is the number of bits assigned to each pixel. For example, if L equals 8, there are 256 texture pixel values, 0 to 255, and the middle value is determined to be 128.

To cope with a fast-moving VOP, the padding must be further extended to blocks which lie completely outside the VOP but are immediately adjacent to a boundary block. If the reconstructed block is determined to be an exterior block at step S204, at step S207 it is determined whether or not the exterior block is adjacent to any extrapolated boundary block. If the exterior block is not adjacent to any extrapolated boundary block, at step S209 no padding is performed and the process proceeds to step S211. If the exterior block is adjacent to one or more extrapolated boundary blocks, so that the exterior block corresponds to an undefined adjacent block, at step S208 the undefined texture pixels of the undefined adjacent block are padded based on its extrapolated boundary block to generate an extrapolated adjacent block for the undefined adjacent block, wherein each extrapolated boundary block has a part of the contour A or B of the current VOP and each undefined adjacent block is illustrated as a shaded region as shown in Figure 4.

If a plurality of extrapolated boundary blocks surround the undefined adjacent block, an extrapolated boundary block is selected in a priority sequence of the left, the upper, the right and the lower ones with respect to the undefined adjacent block; and then a vertical or horizontal border of the selected extrapolated boundary block is repeated rightward, downward, leftward or upward, wherein the vertical or horizontal border adjoins the undefined adjacent block. As illustrated in Figure 4, the undefined adjacent blocks JB4, JB20, JB9 and JB2 select the extrapolated boundary blocks located on their left, upper, right and lower sides, respectively; the rightmost vertical border of the extrapolated boundary block a2 is expanded rightward to fill the undefined adjacent block JB4, the lowermost horizontal border of the extrapolated boundary block a10 is expanded downward to fill the undefined adjacent block JB20, and so on. Also, undefined diagonal blocks such as M1 and M2 may be filled with a constant value, e.g., '128', to make an extrapolated adjacent block for each undefined diagonal block, wherein each undefined diagonal block is diagonally adjacent to an extrapolated boundary block and has only undefined texture pixels.

As described above, at step S211 the modified top and bottom field blocks, i.e., the extrapolated boundary blocks and the extrapolated adjacent blocks as well as the interior blocks, are provided as another previous interlaced texture information for the subsequent current VOP.

While the present invention has been described with respect to particular embodiments, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.
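The transparent field padding of step S224 and the adjacent block expansion of step S208 might look as follows. The left/upper/right/lower priority order is reconstructed from the Figure 4 example rather than quoted verbatim, and all function and parameter names are illustrative:

```python
import numpy as np

def pad_transparent_field(shape, other_field, other_mask, L=8):
    """Fill an entirely undefined field block: preferably the mean of the
    defined pixels of the other field block, otherwise the middle value
    2**(L-1) of the pixel range (128 for L = 8)."""
    if other_mask.any():
        value = other_field[other_mask].mean()
    else:
        value = float(2 ** (L - 1))
    return np.full(shape, value)

def expand_adjacent_block(shape, left=None, top=None, right=None, bottom=None):
    """Expand one undefined exterior block from a neighbouring extrapolated
    boundary block, trying left, upper, right, lower in that order; the
    border row/column adjoining the exterior block is replicated across it.
    A diagonal-only neighbourhood falls back to the constant 128."""
    M, N = shape
    if left is not None:
        return np.tile(left[:, -1:], (1, N))    # rightmost column, pushed right
    if top is not None:
        return np.tile(top[-1:, :], (M, 1))     # lowermost row, pushed down
    if right is not None:
        return np.tile(right[:, :1], (1, N))    # leftmost column, pushed left
    if bottom is not None:
        return np.tile(bottom[:1, :], (M, 1))   # uppermost row, pushed up
    return np.full(shape, 128.0)                # diagonal blocks such as M1, M2
```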

Claims (18)

1. A method for padding interlaced texture information of a previous frame on a texture macroblock-by-texture macroblock basis to perform a motion estimation, wherein each texture macroblock of the previous frame has MxN texture pixels, M and N being positive even integers, respectively, the method characterized in that it comprises the steps of: (a) detecting whether each texture macroblock of the previous frame is a boundary block or not, wherein the boundary block has at least one defined texture pixel and at least one undefined texture pixel; (b) dividing the boundary block into two field blocks, each field block having M/2xN texture pixels; extrapolating the undefined texture pixels of the boundary block from the defined texture pixels thereof to generate an extrapolated boundary block; and padding, if either of the two field blocks has no defined texture pixel, the undefined field block based on the other field block of the boundary block, wherein the undefined field block represents a field block having no defined texture pixel; and (c) expanding an undefined adjacent block based on the extrapolated boundary block, wherein the undefined adjacent block is adjacent to the extrapolated boundary block and has only undefined texture pixels.
2. The method according to claim 1, characterized in that step (b) further includes a step (b1) of field-padding the undefined texture pixels in a field block from the defined texture pixels thereof, to thereby generate a field-padded block for the field block.
3. The method according to claim 2, characterized in that step (b1) has the steps of: (b11) row-padding the undefined texture pixels on a row-by-row basis to generate a padded row; and (b12) padding, if there is a transparent row, the transparent row from at least one nearest padded row, wherein the transparent row represents a row having no defined texture pixel.
4. The method according to claim 3, characterized in that step (c) includes the steps of: (c1) selecting, if the undefined adjacent block is surrounded by a plurality of extrapolated boundary blocks, an extrapolated boundary block in a priority sequence of the left, the upper, the right and the lower ones with respect to the undefined adjacent block; and (c2) replicating a vertical or horizontal border of the selected extrapolated boundary block rightward, downward, leftward or upward, to thereby expand the undefined adjacent block, wherein the vertical or horizontal border adjoins the undefined adjacent block.
5. The method according to claim 3, characterized in that all of the undefined texture pixels of the undefined field block are replaced with a constant value.
6. The method according to claim 5, characterized in that all of the undefined texture pixels of the undefined field block are replaced with an average value of both the defined texture pixels and the field-padded texture pixels within the field-padded block for the other field block, wherein the texture pixels are field-padded through step (b1).
7. The method according to claim 5, characterized in that all of the undefined texture pixels of the undefined field block are replaced with an average value of the defined texture pixels within the other field block.
8. The method according to claim 5, characterized in that the constant value is 2^(L-1), wherein L is the number of bits assigned to each pixel.
9. The method according to claim 8, characterized in that L is 8.
10. An apparatus for padding interlaced texture information of a previous frame on a texture macroblock-by-texture macroblock basis to perform a motion estimation, wherein each texture macroblock of the previous frame has MxN texture pixels, M and N being positive even integers, respectively, the apparatus characterized in that it comprises: a boundary block detector for detecting whether each texture macroblock of the previous frame is a boundary block or not, wherein the boundary block has at least one defined texture pixel and at least one undefined texture pixel; a field divider for dividing the boundary block into two field blocks, each field block having M/2xN texture pixels; a transparent field padding circuit for padding an undefined field block based on the other field block of the boundary block, wherein the undefined field block represents a field block having no defined texture pixel; and an adjacent block padding circuit for expanding an undefined adjacent block based on the extrapolated boundary block, wherein the undefined adjacent block is adjacent to the extrapolated boundary block and has only undefined texture pixels.
11. The apparatus according to claim 10, characterized in that the apparatus further comprises a field padding circuit for field-padding the undefined texture pixels in a field block from the defined texture pixels thereof, to thereby generate a field-padded block for the field block.
12. The apparatus according to claim 11, characterized in that the field padding circuit includes: a horizontal padding circuit for padding the undefined texture pixels on a row-by-row basis to generate a padded row; and a transparent row padding circuit for padding a transparent row from at least one nearest padded row, wherein the transparent row represents a row having no defined texture pixel.
13. The apparatus according to claim 12, characterized in that the adjacent block padding circuit includes: a selector for selecting an extrapolated boundary block in a priority sequence of the left, the upper, the right and the lower ones with respect to the undefined adjacent block; and means for replicating a vertical or horizontal border of the selected extrapolated boundary block rightward, downward, leftward or upward, to thereby expand the undefined adjacent block, wherein the vertical or horizontal border adjoins the undefined adjacent block.
14. The apparatus according to claim 12, characterized in that all of the undefined texture pixels of the undefined field block are replaced with a constant value.
15. The apparatus according to claim 14, characterized in that all of the undefined texture pixels of the undefined field block are replaced with an average value of both the defined texture pixels and the field-padded texture pixels within the field-padded block for the other field block, wherein the texture pixels are field-padded through the field padding circuit.
16. The apparatus according to claim 14, characterized in that all of the undefined texture pixels of the undefined field block are replaced with an average value of the defined texture pixels within the other field block.
17. The apparatus according to claim 14, characterized in that the constant value is 2^(L-1), wherein L is the number of bits assigned to each pixel.
18. The apparatus according to claim 17, characterized in that L is 8.
MXPA/A/2000/008676A 1998-03-14 2000-09-05 Method and apparatus for padding interlaced macroblock texture information MXPA00008676A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1019980008637 1998-03-14

Publications (1)

Publication Number Publication Date
MXPA00008676A true MXPA00008676A (en) 2001-07-09


Similar Documents

Publication Publication Date Title
USRE41951E1 (en) Method and apparatus for encoding interlaced macroblock texture information
EP1528813B1 (en) Improved video coding using adaptive coding of block parameters for coded/uncoded blocks
EP0818930B1 (en) Video coding method
US6404814B1 (en) Transcoding method and transcoder for transcoding a predictively-coded object-based picture signal to a predictively-coded block-based picture signal
JPH114441A (en) Estimation and compensation of moving of video object plane for interlaced digital video device
JPH09224254A (en) Device and method for estimating motion
US6133955A (en) Method for encoding a binary shape signal
US7426311B1 (en) Object-based coding and decoding apparatuses and methods for image signals
Packwood et al. Variable size block matching motion compensation for object-based video coding
GB2330472A (en) Mode coding in binary shape encoding
KR100186980B1 (en) Information hierarchical encoding method for an object of image data
MXPA00008676A (en) Method and apparatus for padding interlaced macroblock texture information
US7899112B1 (en) Method and apparatus for extracting chrominance shape information for interlaced scan type image
JPH11196415A (en) Device and method for encoding/decoding shape information by adaptive bordering
JP2004112842A (en) Coding device for coding block pattern and coding method for coding block pattern
KR100632105B1 (en) Digital interlaced intra picture encoding / decoding method
KR100549926B1 (en) Motion vector estimator for each area of image and motion compensation prediction encoding / decoding method for each area of image
KR19990027349A (en) How to convert video information
KR19990013990A (en) Image compression
GB2341030A (en) Video motion estimation