WO1999059342A1 - Method and system for MPEG encoding with frame partitioning - Google Patents

Method and system for MPEG encoding with frame partitioning

Info

Publication number
WO1999059342A1
Authority
WO
WIPO (PCT)
Prior art keywords
macroblock
macroblocks
frame
difference
insignificant
Prior art date
Application number
PCT/CA1999/000417
Other languages
English (en)
Inventor
Rabab K. Ward
Panos Nasiopoulos
Ekaterina Barzykina
Original Assignee
The University Of British Columbia
Priority date
Filing date
Publication date
Application filed by The University Of British Columbia filed Critical The University Of British Columbia
Priority to AU38049/99A priority Critical patent/AU3804999A/en
Publication of WO1999059342A1 publication Critical patent/WO1999059342A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/107Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • the invention relates to coding of moving pictures (video) for digital storage media and, more particularly, to MPEG methods and systems for coding video data.
  • MPEG Motion Pictures Expert Group
  • ISO International Standardization Organization
  • MPEG Video Compression Standard J.L. Mitchell et al., Chapman and Hall, 1996.
  • compression of video data for storage in digital format is desirable because a sequence of pictures can otherwise occupy a vast amount of digital storage space.
  • a typical digital video picture with 360x288 pixels may occupy 311 kilobytes. If such pictures are sent at 24 pictures/second, the raw data rate is about 60 Mbit/s.
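  • Assuming 24-bit color (3 bytes per pixel), which is consistent with the figures quoted above, the arithmetic works out as:

    360 × 288 pixels × 3 bytes/pixel = 311,040 bytes ≈ 311 kilobytes
    311,040 bytes × 8 bits/byte × 24 pictures/s ≈ 59.7 Mbit/s ≈ 60 Mbit/s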
  • One of the concepts of the MPEG standards is that, rather than sending each new picture in its entirety, sending only the differences between the pictures allows the bit-rate to be substantially reduced. Other various techniques are also used to reduce the bit rate.
  • MPEG-1, the first version of MPEG, standardized the coding of the combined audio-visual signal at bit-rates around 1.5 Mbit/s.
  • the MPEG-1 standard was primarily intended to process video at SIF (Source Input Format) resolution (images of 352x240 pixels at 30 frames per second) and to play back video off a CD-ROM or over telephone lines at a low bit-rate.
  • SIF Source Input Format
  • Although MPEG-1 syntax had been shown to operate successfully on higher-resolution pictures and at higher bit-rates, some of its limitations, such as accommodating only progressively scanned and not interlaced pictures, prompted the development of a second stage of the MPEG standard: MPEG-2.
  • MPEG-2 a standard designed for broadcast-quality video and associated audio.
  • MPEG-2 allowed compression of audiovisual signals at higher bit-rates (around 6 Mbit/s and higher) for full-resolution images (720x480 pixels and higher).
  • MPEG-2 supported interlaced video, multi-resolution scalability, as well as a number of other new technical features.
  • the MPEG committee standardized five profiles to be used in different communications scenarios: Simple Profile, Main Profile, Spatially Scalable Profile, and others.
  • the MPEG-2 standard enjoys a worldwide acceptance. It is exclusively used in many applications, including High Definition Television (HDTV), Digital Television, and Digital Versatile Disc (DVD).
  • HDTV High Definition Television
  • DVD Digital Versatile Disc
  • MPEG-4 The newest phase of the MPEG standard, MPEG-4, is expected to reach its final stage in November 1998 and is expected to provide a new level of interactivity for next generation multimedia applications.
  • MPEG compression works by exploiting two properties of video streams: the presence of considerable similarities between consecutive frames, which constitute temporal redundancies; and the fact that some details within a single image may be unnoticeable to the human eye, which constitute spatial redundancies.
  • the main features supported by both MPEG-1 and MPEG-2 standards include block based motion compensation, Discrete Cosine Transform (DCT), quantization of the DCT coefficients, and variable length encoding of the quantized DCT coefficients.
  • Motion compensation takes advantage of the temporal redundancies in the video stream and reduces the initial amount of information to be encoded. Quantization exploits the spatial redundancies and matches the output to a given bit rate, allowing for most of the compression and determining the quality of the reconstructed images.
  • Motion Compensation in MPEG: Motion compensation (MC) is the first stage in the MPEG compression method. It takes advantage of the similarities between consecutive frames (temporal redundancies) and performs the initial compression through differential encoding of some frames in the input video sequence.
  • the idea of motion compensation is that a current frame (and its constituent parts) can be modeled as a translation of another frame from some previous time, i.e., a predicted frame.
  • the method processes the information on the mismatch between parts of the current frame and their corresponding parts from the predicted frame.
  • the output of motion compensation is the prediction error frame, defined as the pixel-by-pixel difference between the current frame and its predicted frame, and motion information, which explains how the predicted parts of the frame can be found.
  • I frames are coded with no reference to any previous frame (no motion compensation).
  • P and B frames are coded with respect to a preceding frame (P frames) or with respect to both preceding and following frames (B frames).
  • the MPEG standard divides the video sequence into groups of pictures (GOP).
  • a GOP always starts with an I frame, with the rest of the frames being of type P or of types P and B.
  • the following discussion is simplified by assuming that a GOP is composed of I and P frames only.
  • each frame is divided into blocks of 16x16 pixels, called a macroblock (MB).
  • MB macroblock
  • Each macroblock is composed of four 8x8 luminance blocks and their corresponding chrominance blocks.
  • a macroblock serves as a unit for motion compensation and quantization, with a block being a unit for the discrete cosine transform (DCT).
  • the DCT is used by the MPEG standard to form a mathematical representation of the relatively complex variations in signal amplitude across a video picture.
  • During motion estimation (ME), each macroblock in the current frame is predicted from its reference frame, which can be a previous frame, a future frame, or both.
  • the results of motion estimation are the Predicted Frame, composed of all of the predicted macroblocks, and a set of parameters called motion information consisting mainly of motion vectors, which indicate the location of each predicted macroblock in the reference frame.
  • the method finds the pixel-by-pixel difference between the predicted and the current frame, the Prediction Error Frame.
  • the MPEG method of motion compensation is illustrated in FIGURES 1A-1D.
  • FIGURES 1A, 1B, 1C, and 1D each illustrate a frame that is four macroblocks high and five macroblocks wide. As previously described, each macroblock contains 16x16 pixels. Therefore, the frames of FIGURES 1A-1D are 64 pixels high by 80 pixels wide. In the following description, references will be made to particular pixel coordinates.
  • FIGURE 1A illustrates a current frame 50. As illustrated in FIGURE 1A, the current frame 50 includes a shape SHP1 in the lower right hand corner.
  • a macroblock BL1 includes part of the shape, and may be defined by the pixel coordinates of its upper left corner (16,16). As described in more detail below, as part of the motion compensation process, a reference frame will be searched for a shape similar to that found in macroblock BL1.
  • FIGURE 1B illustrates a reference frame 52.
  • a shape SHP2 is located in the upper right-hand corner of the reference frame 52.
  • the reference frame 52 of FIGURE 1B is searched for a 16x16 pixel block that contains a shape similar to the shape that is found in macroblock BL1 of the current frame 50 of FIGURE 1A.
  • the closest shape to that of macroblock BL1 is found in a block BL2 defined by the pixel coordinates of its upper left-hand corner (40,32).
  • a motion vector MV is then defined as the displacement between the position of the matching block BL2 in the reference frame (40,32) and the position of macroblock BL1 in the current frame (16,16).
  • the motion vector MV consists of an X component MV-X and a Y component MV-Y.
  • FIGURE 1C illustrates a predicted frame 54 that has been formed to look similar to the current frame 50 of FIGURE 1A.
  • the predicted frame 54 is essentially the best match for the current frame 50 that can be formed by taking pieces from the reference frame 52 and moving them around.
  • the motion compensation process as described above has provided a motion vector to tell the MPEG encoder to take the part of the shape in block BL2 of FIGURE 1B, and copy it to the pixel coordinates (16,16) in the predicted frame 54.
  • FIGURE 1D illustrates a prediction error frame 56.
  • the prediction error frame 56 is essentially the difference between the current frame 50 of FIGURE 1A and the predicted frame 54 of FIGURE 1C. Therefore, the block defined by the pixel coordinates (16,16) shows the difference between the block BL2 of FIGURE 1C and the block BL1 of FIGURE 1A.
  • the motion compensation method substantially reduces the amount of data that must be transmitted in order to reconstruct the image of the current frame 50. Rather than having to retransmit the entire image of the current frame 50, all that must be transmitted are the motion vectors and the prediction error frame 56 of FIGURE ID.
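  • As a rough illustration of this reconstruction (a sketch, not code from the patent; the array names and the block-source bookkeeping are assumptions for illustration only), the following Python fragment rebuilds a current frame from a reference frame, per-macroblock source positions, and a prediction error frame:

```python
import numpy as np

MB = 16  # macroblock size in pixels

def reconstruct_frame(reference, block_sources, prediction_error):
    """Rebuild the current frame from the transmitted data: for each
    macroblock, copy the matching block out of the reference frame,
    then add the prediction error. block_sources[(row, col)] holds the
    reference-frame pixel coordinates (x, y) of the matching block; a
    motion vector is just this position expressed relative to the
    macroblock's own position."""
    predicted = np.zeros_like(reference)
    rows, cols = reference.shape[0] // MB, reference.shape[1] // MB
    for r in range(rows):
        for c in range(cols):
            sx, sy = block_sources[(r, c)]
            predicted[r*MB:(r+1)*MB, c*MB:(c+1)*MB] = \
                reference[sy:sy+MB, sx:sx+MB]
    return predicted + prediction_error

# The example of FIGURES 1A-1D: the block at (16,16) is predicted from
# the block at (40,32) in the reference frame.
reference = np.random.randint(0, 256, (64, 80)).astype(np.int32)
sources = {(r, c): (c*MB, r*MB) for r in range(4) for c in range(5)}
sources[(1, 1)] = (40, 32)  # macroblock BL1 predicted from BL2
error = np.zeros((64, 80), dtype=np.int32)
current = reconstruct_frame(reference, sources, error)
```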
  • MPEG standards are generic, they do not specify a particular implementation of any part of the compression method.
  • the standardization is limited to only the syntax of the encoded bit stream.
  • motion compensation the MPEG standard specifies how to represent the motion information for each macroblock. It does not, however, specify how this information is to be found.
  • the motion estimation method uses a block-matching technique to find the motion vectors.
  • the vectors are obtained by minimizing a cost function measuring the mismatch between the reference and the current macroblock.
  • ME methods use a macroblock pixel-by-pixel absolute difference to measure the mismatch.
  • the absolute difference or absolute error (AE) for each macroblock is defined as follows:

    AE(v_x, v_y) = Σ_{i=0..15} Σ_{j=0..15} | cur(i, j) - ref(i - v_x, j - v_y) |   (1)

    in which v_x and v_y are the motion vector components in the horizontal and vertical directions, respectively; cur(i, j) is a pixel value in a macroblock from the current frame; and ref(i - v_x, j - v_y) is the pixel value of the reference macroblock displaced by the vector (v_x, v_y). The minimum absolute error within a search window gives the values of the motion vectors.
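  • A minimal sketch of the block-matching search implied by equation (1), written in Python with NumPy; the +/-7 search window and the frame-boundary handling are assumptions, not details taken from the patent:

```python
import numpy as np

MB = 16  # macroblock size in pixels

def absolute_error(cur_mb, ref, x, y, vx, vy):
    """AE(vx, vy) of equation (1): the sum of pixel-by-pixel absolute
    differences between the current macroblock and the reference
    macroblock displaced by the candidate vector (vx, vy)."""
    ref_mb = ref[y - vy : y - vy + MB, x - vx : x - vx + MB]
    return int(np.abs(cur_mb.astype(np.int32) - ref_mb.astype(np.int32)).sum())

def full_pixel_search(cur, ref, x, y, window=7):
    """Exhaustive full-pixel search: the (vx, vy) minimizing AE within a
    +/-window range gives the motion vector for the macroblock whose
    upper-left corner is at (x, y)."""
    cur_mb = cur[y : y + MB, x : x + MB]
    best = None
    for vy in range(-window, window + 1):
        for vx in range(-window, window + 1):
            # Skip candidate vectors that would read outside the frame.
            if not (0 <= y - vy and y - vy + MB <= ref.shape[0] and
                    0 <= x - vx and x - vx + MB <= ref.shape[1]):
                continue
            ae = absolute_error(cur_mb, ref, x, y, vx, vy)
            if best is None or ae < best[0]:
                best = (ae, vx, vy)
    return best  # (minimum AE, vx, vy)
```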
  • motion vectors for each macroblock are found in two stages.
  • FIGURE 2 is a flow chart illustrating the steps of the traditional MPEG motion compensation method.
  • the rough motion vectors are found by doing a full pixel search comparison of the original current frame to the original reference frame.
  • the reference frame is quantized and then inversely quantized, so as to simulate the decoding process, and thereby form a reconstructed reference frame.
  • the motion vectors are further refined by doing a half pixel search comparison of the original current frame to the reconstructed reference frame.
  • the refined motion vectors are combined with selected blocks of the reconstructed reference frame to determine the predicted frame.
  • the predicted frame is compared with the current original frame to determine the predicted error frame.
  • each macroblock in a frame is identified as being of either Intra or Inter type.
  • the pixel values of an Intra macroblock are just its original values in the current frame, taken with respect to the average intensity value of 128.
  • the pixel values of an Inter macroblock are equal to its prediction error, i.e., the pixel-by-pixel difference between the predicted and the current macroblock.
  • I frames are composed of Intra macroblocks only.
  • P and B frames can have both Intra- and Inter-type macroblocks, where an Inter macroblock can be either of two types: predicted with motion vectors or predicted with no motion vectors.
  • An Inter macroblock of the first type has motion vectors whose values are not equal to zero, while a macroblock of the second type has a zero motion vector.
  • each P and B frame is represented by the prediction error frame and its corresponding motion information.
  • the traditional MPEG motion compensation is lossless, i.e., no error is introduced in this step of the encoding.
  • the lossy part of the method comes later, in the quantization of the DCT coefficients.
  • the typical motion compensation method treats all areas of the image equally, regardless of the visual importance of the temporal changes in different parts of the frames. In many cases, even changes that cannot be detected by the human eye are still encoded. As a result, a significant percentage of the number of bits allocated for a frame can be wasted on these areas and, because of the limited bit budget, other parts of the image may suffer from visual artifacts.
  • the following sections examine in detail how the traditional MPEG motion compensation method works when it encounters different kinds of temporal changes that occur in a video sequence.
  • Suppose an input video stream contains several consecutive frames, each composed of two regions: a "no-change" area that remains exactly the same for all of these frames and a "change" area that is continuously different from one frame to the next.
  • the motion compensation method calculates the pixel values of the prediction error frame, it finds the mismatch between the current original frame and the reconstructed reference frame (see block 108 of FIGURE 2). Due to the quantization error, the reconstructed reference frame can significantly differ from its original version.
  • a good example of a video stream containing areas that are exactly the same is a computer-generated sequence with an object moving on top of a still background. If the background of such a sequence is reasonably complex, then, provided that the bit-rate is moderate, the encoding method will need the time interval of several consecutive frames to exhaust the background data. As a result, while the quality of the background continuously improves, areas with changes may not have enough bits to prevent artifacts.

Consecutive Frames Containing Areas with "Non-Visible" Changes
  • the previous example examined a video stream containing picture areas that remain exactly the same during a period of several consecutive frames.
  • Such pixel-by-pixel similarities can be encountered mostly in artificially generated images (computer graphics, cartoons, etc.). Due to the nature of real-life video, areas that seem "the same" to the human eye are most of the time different on a pixel-by-pixel comparison level. Many times, conditions such as subtle changes in lights and shadows or changes in details that are too intricate to recognize can account for the pixel-level differences between images that are perceived as being the same.
  • the encoding method does not make any distinction between “visible” and “nonvisible” changes, it will give equal attention to macroblocks carrying information about nonvisible background fluctuations as well as macroblocks that correspond to visible motion. As a result, a significant part of the allowed bit budget can be spent on encoding the nonperceived differences between consecutive frames, while other areas of the image can suffer from visual artifacts.
  • An example of a sequence that contains non-visible temporal changes would be a panoramic view, where most of the changes between the frame macroblocks are caused by motion, and not by absolute changes in their visual content.
  • a video object can be defined in a scene context, where a scene (video information of several consecutive frames) is composed of a number of objects. For example, a person moving across a background would represent a scene with the person classified as VO1 and the background as VO2. Each VO can be of arbitrary shape and is encoded separately.
  • the introduction of video objects opens great possibilities in many new applications such as interactive video. For simple video encoding, one of the advantages of video objects is that different objects can be encoded at different temporal/spatial scalability levels.
  • Video objects require additional processing and overhead information due to the contradiction between the arbitrary shapes of video objects and their block-based encoding.
  • MPEG-4 handles this problem by first defining an arbitrary boundary for a VO and then using zero padding to fill out a shape composed of 8x8 blocks.
  • This method is relatively complex and still has the overhead problem.
  • the present invention is directed to a novel MPEG video compression method that incorporates the characteristics of the human visual system into the process of motion compensation, so as to increase the overall quality and/or compression performance of the encoding method.
  • the present invention provides a way to exploit temporal redundancies in the input video stream in a perceptually adaptive fashion that does not introduce any additional overhead information and improves the encoding efficiency of the compression method. More specifically, as will be better understood from the following summary, the present invention modifies the existing MPEG motion compensation procedure in a way that achieves different temporal scalability encoding for different parts of a frame.
  • a new frame partitioning method is provided.
  • the frame partitioning method separates the temporal changes between consecutive video frames into two categories: changes that are detectable by the human eye and changes that are unnoticeable to the human eye.
  • the frame partitioning method is used as a new initial stage added to the traditional MPEG motion compensation procedure. This introduces a degree of lossiness into the traditionally lossless motion compensation procedure, and allows a significant reduction of the bit-rate for the same quality of reconstructed images or, conversely, an increase in quality within the same bit budget.
  • the objective is that the quality reach its highest level for a given bit-rate/buffer size and keep this level consistent throughout each frame and the entire video stream.
  • the frame partitioning method specifically separates macroblocks into two categories: significant and insignificant macroblocks.
  • a macroblock is considered to be insignificant if it is not "visibly" different from its predicted macroblock.
  • a significant macroblock has acquired visibly new content in comparison with its predicted counterpart.
  • Prediction error for significant macroblocks is calculated using the standard motion compensation procedure.
  • the prediction error of insignificant macroblocks is made equal to zero.
  • Motion information (motion vectors) is transmitted for both significant and insignificant macroblocks.
  • the frame partitioning method is block based, it works within the standardized MPEG-2 syntax, and thus does not require any extra overhead information.
  • significant macroblocks are encoded at a resolution equal to the frame rate, whereas the resolution for insignificant macroblocks is half of the frame rate or less.
  • the frame partitioning method thus results in different temporal resolution for different regions of a single frame.
  • the resulting flexibility can be very beneficial for encoding, in that the reduction of the temporal resolution for some regions of the frame can allow more bits to be used for other regions. This improvement in bit allocation can prevent visible artifacts from appearing and, hence, improve picture quality.
  • the determination by the frame partitioning method of whether or not a macroblock is labeled significant or insignificant is based on mathematical formulas that relate to the way that the human vision system perceives images.
  • the mathematical formulas are used to calculate certain difference measurements, such as those that relate to the differences between a current original frame and the corresponding predicted frame. If any of these difference measurements are found to be above a determined threshold, it is assumed that a difference is present that could be detected by the human vision system. If none of the difference measurements are above the selected threshold, then it is assumed that the changes would not be detectable by the human vision system, and as a result the macroblock is labeled insignificant and the values of the prediction error frame are set to zero.
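  • In outline, this decision can be sketched as follows (a Python illustration only; the measurement names and numeric values are placeholders for the formulas given later in the text):

```python
def classify_macroblock(differences, thresholds):
    """Label a macroblock significant if any difference measurement
    exceeds its visibility threshold, otherwise insignificant."""
    for name, value in differences.items():
        if value > thresholds[name]:
            return "significant"
    return "insignificant"

# Illustrative usage with the five measurements named in the text
# (the numeric values here are made up):
diffs = {"dc_luma": 3.2, "dc_chroma": 1.0, "pel_luma": 2.5,
         "pel_chroma": 0.8, "variance": 4.1}
thrs = {"dc_luma": 5.0, "dc_chroma": 4.0, "pel_luma": 6.0,
        "pel_chroma": 4.0, "variance": 8.0}
print(classify_macroblock(diffs, thrs))  # -> insignificant
```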
  • difference measurements include the DC luminance difference, the DC chrominance difference, the pixel luminance difference, the pixel chrominance difference, and the variance difference.
  • the difference measurements for the macroblocks may be found in different ways.
  • the DC luminance difference may be defined as the difference between the DC components of the current original and the corresponding predicted macroblock. The differences are found for the four 8x8 blocks of the luminance component of each macroblock. For a macroblock with a luminance or chrominance edge, the maximum of these differences is chosen to be the DC luminance difference of the macroblock. For a non-edge macroblock, the DC luminance difference is the average of the four block difference measurements. Similarly, the DC chrominance difference is the difference between the DC components of the chrominance of the current original macroblock and its corresponding predicted macroblock. For both edge and non-edge macroblocks, the DC chrominance difference is found as the maximum of the difference measurements of the two chrominance components, chrominance R and chrominance B.
  • the pixel luminance difference and the pixel chrominance difference are sometimes called the absolute difference.
  • the absolute difference is defined as the average pixel-by-pixel difference between the original and the predicted macroblocks.
  • the pixel luminance difference for a non-edge macroblock equals the average of the four absolute block difference measurements and, for an edge macroblock, the maximum block difference measurement.
  • the pixel chrominance difference is found for both edge and non-edge macroblocks as the maximum of the absolute pixel-by-pixel differences of the R and B chrominance components.
  • the variance difference is the difference in the variance of the luminance component between the predicted macroblock and the current original macroblock.
  • the variance difference for the non-edge macroblock is calculated as the absolute difference between the average variance of the predicted macroblock and that of the current original macroblock. For an edge macroblock, the variance difference is calculated as the maximum of its four block variance difference measurements.
  • the disclosed frame partitioning method has significant advantages over traditional MPEG encoding.
  • In traditional MPEG encoding, all kinds of temporal changes are treated in exactly the same way: the "non-visible," the "almost non-visible," and the "clearly visible" changes.
  • In the frame partitioning method of the present invention, it is established beforehand that areas whose changes cannot be detected by the human eye in consecutive frames are to be regarded as exactly the same. This allows these changes to be completely omitted.
  • This approach results in either a decrease in the bit-rate and/or in an improvement in quality, as it will enable the encoding method to spend all or most of the available bit budget on the areas that correspond to the region with detectable changes, thus avoiding artifacts and maintaining an even level of quality for all parts of the images.
  • the frame partitioning method can distinguish the "almost non-visible” fluctuations from the visually important information, it is able to provide a better encoding solution as a compromise between the overall visual quality of the video stream and the accuracy of the representation of the "almost non-visible” changes.
  • FIGURES 1A, 1B, 1C, and 1D are diagrams of video frames illustrating the MPEG motion compensation method of the prior art
  • FIGURE 2 is a flowchart of a traditional MPEG motion compensation method of the prior art
  • FIGURES 3A-3F are a series of video macroblocks illustrating frame partitioning analysis in accordance with the method of the present invention
  • FIGURES 4A and 4B are flowcharts of an MPEG motion compensation with frame partitioning method according to the present invention.
  • FIGURES 5A and 5B are block diagrams illustrating the luminance and chrominance component structure of a macroblock in MPEG-2 Video Main Profile as utilized in the present invention
  • FIGURE 6 is a flowchart illustrating a routine for identifying significant and insignificant macroblocks according to the present invention
  • FIGURES 7A to 7C are flowcharts illustrating a routine for determining difference measurements according to the present invention.
  • FIGURE 8 is a flowchart illustrating a routine for setting the values of a prediction error frame to zero.
  • the present invention uses a concept of frame partitioning, a novel technique that is designed to separate temporal changes that occur between the frames into different categories based on the degree of the perceived importance of these changes.
  • the following description describes only two categories of temporal changes — changes that are detected by the human eye and changes that are invisible to human vision.
  • the technique can be extended to allow for the detection of "almost non-visible" changes as well.
  • frame partitioning is designed as a block-based technique. For each macroblock or each block in a frame, a frame partitioning method makes a decision as to whether or not the changes contained within that MB are detected by the human eye.
  • the macroblocks (or blocks) that carry the "non-visible” changes shall be referred to as insignificant, while all other macroblocks are labeled significant.
  • a collection of all insignificant macroblocks (or blocks) in a frame constitute a frame partitioning mask, an area whose spatial information is not detected by human vision and, thus, can be disregarded during encoding.
  • Frame partitioning carries an analogy to MPEG-4 video objects, with only two instances of an object allowed within a single frame: one "object,” composed of all significant macroblocks, and one "background,” composed of all insignificant macroblocks.
  • frame partitioning identifies significant and insignificant areas in a frame based only on the degree of similarities between the original macroblocks and their predicted counterparts. Information about motion is not considered in the decision as to whether or not a macroblock is significant or insignificant. In other words, a macroblock whose location in the consecutive frames changes but whose visual content remains the same is considered insignificant by the frame partitioning method.
  • Local temporal resolution is defined as a virtual frame/macroblock rate at which the corresponding frame/macroblock video information can be encoded without introducing visual discontinuity into the decoded stream.
  • In the prior art, this type of analysis was performed on a frame-to-frame comparison level, according to which, if two (or more) consecutive frames were visually the same, then one of the frames would be designated as having a temporal resolution twice (or more) lower than the frame rate.
  • In the present invention, a similar analysis is still used to determine temporal resolution, only it is done on a macroblock level rather than a frame level.
  • the term "visually the same” is used to emphasize that the macroblocks do not have to be exactly the same, but just have to be identified as visually unchanged.
  • Frame partitioning identifies significant and insignificant macroblocks based on the minimum temporal resolution that is necessary for their encoding. Within a single frame, all macroblocks that must be encoded with a local temporal resolution equal to the frame rate will be identified as significant macroblocks, while all macroblocks whose local temporal resolution can be half that of the frame rate or less will be identified as insignificant macroblocks.
  • FIGURES 3A-3F are a series of video macroblocks illustrating how macroblocks are labeled significant or insignificant in accordance with the frame partitioning method of the present invention.
  • In this example, the temporal resolution of insignificant macroblocks is reduced to one-third of the frame rate.
  • FIGURE 3A shows a frame 150 with a macroblock MB1.
  • FIGURE 3B shows a frame 152 with a macroblock MB2.
  • When macroblock MB2 is compared to macroblock MB1, they can be seen to be visually different, for which reason macroblock MB2 is labeled significant, and therefore provided with a temporal resolution equal to the full frame rate.
  • FIGURE 3C shows a frame 154 with a macroblock MB3.
  • When macroblock MB3 is compared to macroblock MB2, they can be seen to be visually identical, for which reason macroblock MB3 is labeled insignificant, and therefore provided with a temporal resolution equal to one-third the frame rate.
  • FIGURE 3D shows a frame 156 with a macroblock MB4.
  • When macroblock MB4 is compared to macroblock MB3, they can be seen to be visually identical, for which reason macroblock MB4 is labeled insignificant, and therefore provided with a temporal resolution equal to one-third the frame rate.
  • FIGURE 3E shows a frame 158 with a macroblock MB5.
  • When macroblock MB5 is compared to macroblock MB4, they can be seen to be visually different, for which reason macroblock MB5 is labeled significant, and therefore provided with a temporal resolution equal to the full frame rate.
  • FIGURE 3F shows a frame 160 with a macroblock MB6.
  • When macroblock MB6 is compared to macroblock MB5, they can be seen to be visually different, for which reason macroblock MB6 is labeled significant, and therefore provided with a temporal resolution equal to the full frame rate.
  • Frame partitioning constitutes the first stage of the method, followed by the second stage, which consists of modified MPEG motion compensation.
  • the objective of the first stage of the method is to separate each frame into significant and insignificant regions.
  • the method looks for the similarities between the two original consecutive frames through the use of MPEG motion estimation.
  • the new method does not use the reconstructed reference frame to find the predicted frame in the first part of the method.
  • the original reference frame is used in calculating the motion vectors (in both full-pixel and half-pixel search) and the predicted frame.
  • the method eliminates the influence of the quantization error on the process of distinguishing between the visible and the non-visible changes in the input video stream.
  • the method proceeds with its second stage, the motion compensation stage, which results in a prediction error frame.
  • the prediction error for significant macroblocks is calculated using the standard MPEG motion compensation procedure, which finds the pixel-by-pixel difference between the current and the predicted macroblocks.
  • the prediction error of insignificant macroblocks is made equal to zero.
  • Motion information (constituted mainly of motion vectors) is transmitted for both significant and insignificant macroblocks.
  • FIGURES 4A and 4B are flowcharts illustrating the motion compensation with frame partitioning method of the present invention.
  • rough motion vectors are found by doing a full pixel search comparison of the original current frame to the original reference frame.
  • the results of the full pixel motion vector search are saved for future use at a block 220, as will be described in more detail below.
  • the motion vectors are further refined by doing a half pixel search comparison of the original current frame to the original reference frame.
  • the refined motion vectors are combined with the original current frame and the original reference frame to determine the predicted frame for frame partitioning.
  • the significant and insignificant macroblocks are determined.
  • the method proceeds to a block 220.
  • the results of the full pixel motion vector search that were saved at block 202 are recalled.
  • the reference frame is quantized and then inversely quantized, so as to simulate the decoding process, and thereby form a reconstructed reference frame.
  • motion vectors are further refined by doing a half pixel search comparison of the original current frame to the reconstructed reference frame.
  • the refined motion vectors are combined with selected blocks of the reconstructed reference frame to determine the predicted frame.
  • the predicted frame is compared with the current original frame to determine the predicted error frame.
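  • The order of operations in FIGURES 4A and 4B can be summarized in the following Python sketch, built entirely from placeholder callables; it shows only the structure (original reference frame for partitioning, reconstructed reference frame for the final motion compensation), not an actual encoder:

```python
def motion_compensation_with_partitioning(cur, ref_original,
                                          full_search, half_refine,
                                          predict, partition,
                                          quantize, dequantize):
    """cur and ref_original are NumPy luminance frames; all other
    arguments are placeholder callables for the standard MPEG steps."""
    # Stage 1: frame partitioning against the ORIGINAL reference frame,
    # so quantization error cannot masquerade as a visible change.
    mv_full = full_search(cur, ref_original)          # saved for stage 2
    mv_fp = half_refine(cur, ref_original, mv_full)
    predicted_fp = predict(ref_original, mv_fp)
    significant = partition(cur, predicted_fp)        # {(row, col): bool}

    # Stage 2: standard MPEG motion compensation, reusing the saved
    # full-pixel vectors but refining against the RECONSTRUCTED reference.
    ref_recon = dequantize(quantize(ref_original))    # simulate the decoder
    mv = half_refine(cur, ref_recon, mv_full)
    predicted = predict(ref_recon, mv)
    error = cur - predicted                           # prediction error frame

    # Zero the prediction error of insignificant macroblocks; motion
    # vectors are still transmitted for every macroblock.
    for (row, col), is_significant in significant.items():
        if not is_significant:
            error[row*16:(row+1)*16, col*16:(col+1)*16] = 0
    return mv, error
```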
  • The frame partitioning decision takes into account properties of the human visual system:
  • the human eye is less sensitive to changes that occur in very dark or very light parts of the image and is most sensitive to changes occurring in parts of the image with average luminance; and
  • motion has the ability to mask changes: it is more difficult to recognize changes in texture, luminance, or color if there is motion associated with those changes; the changes are more obvious in parts of the picture with no motion.
  • the decision as to whether or not a macroblock belongs to the frame partitioning mask is based on the following factors:
  • ΔDC: the DC difference of the luminance component;
  • Δσ²: the variance difference;
  • ΔY: the absolute difference of the luminance component;
  • ΔC: the absolute difference of the chrominance component of the image.
  • FIGURES 5A and 5B illustrate the luminance and chrominance component blocks of a macroblock.
  • FIGURE 5A illustrates the luminance and chrominance component blocks of a macroblock 300A.
  • a 16x16 macroblock is composed of four 8x8 luminance blocks.
  • Each macroblock also comprises two chrominance blocks that relate to the two chrominance components, chrominance R (designated Cr) and chrominance B (designated Cb).
  • macroblock 300A comprises four luminance blocks AY0 to AY3, and two chrominance blocks AC1 and AC2.
  • Each of the luminance blocks AY0 to AY3 includes its own function for the DC difference of the luminance component, the variance difference, and the absolute difference of the luminance component.
  • the chrominance blocks AC1 and AC2 each include a function for the absolute difference of the chrominance component of the image.
  • FIGURE 5B shows a macroblock 300B that includes four luminance blocks BY0 to BY3, and two chrominance blocks BC1 and BC2.
  • Macroblock 300B is a predicted macroblock.
  • Each of the luminance blocks BY0 to BY3 and chrominance blocks BC1 and BC2 has the same difference functions as the corresponding blocks of FIGURE 5A, except with a subscript p, which designates them as belonging to a predicted macroblock.
  • the comparison between a predicted and a current macroblock is performed either on a block level or on a macroblock level.
  • the level of comparison is chosen based on whether or not a macroblock contains a luminance or a chrominance edge.
  • For a non-edge macroblock, the comparison is performed on a macroblock level, where the ΔDC, Δσ², and ΔY differences are found as the average of their corresponding four block values Δ^bl DC, Δ^bl σ², and Δ^bl Y, and the ΔC difference is found as the maximum of Δ^bl Cr and Δ^bl Cb, the chrominance R and chrominance B differences.
  • Edge macroblocks are compared on a level of blocks, with ΔDC, Δσ², and ΔY identified as the maximum of the four corresponding block differences and ΔC being the same as for a non-edge macroblock.
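  • A small Python sketch of this aggregation rule, assuming the four block-level differences are already available as a sequence:

```python
def aggregate_luma_diff(block_diffs, has_edge):
    """Combine the four 8x8 block values of a luminance measurement
    (DC, variance, or absolute difference) into one macroblock value:
    the maximum for an edge macroblock, the average otherwise."""
    return max(block_diffs) if has_edge else sum(block_diffs) / 4.0

def chroma_diff(d_cr, d_cb):
    """ΔC is the maximum of the Cr and Cb differences for both edge
    and non-edge macroblocks."""
    return max(d_cr, d_cb)
```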
  • the DC difference Δ^bl DC(n) of block n is found as the difference between the DC components of the corresponding original and predicted blocks in a macroblock:

    Δ^bl DC(n) = | DC(n) - DC_p(n) |,  n = 0, 1, 2, 3   (2)

  • For a non-edge macroblock, the final DC difference ΔDC is the average of the four block differences:

    ΔDC = AVG{ Δ^bl DC(0), Δ^bl DC(1), Δ^bl DC(2), Δ^bl DC(3) }   (3)

  • For an edge macroblock, the final DC difference ΔDC is found as the maximum of the four block differences:

    ΔDC = MAX{ Δ^bl DC(0), Δ^bl DC(1), Δ^bl DC(2), Δ^bl DC(3) }   (4)
  • the block absolute luminance difference Δ^bl Y is defined as the average pixel-by-pixel absolute difference between the original and the predicted block before the DCT. The final luminance difference ΔY is the average of the four block values for a non-edge macroblock (6) and their maximum for an edge macroblock (7).
  • For a non-edge macroblock, the variance difference Δσ² is calculated as the absolute difference between the average variances of the predicted macroblock and the current original macroblock:

    Δσ² = | AVG{ σ²(n) } - AVG{ σ_p²(n) } |,  n = 0, 1, 2, 3   (8)

  • For an edge macroblock, the variance difference Δσ² is calculated as the maximum of the four block variance differences Δ^bl σ²:

    Δσ² = MAX{ Δ^bl σ²(n) },  Δ^bl σ²(n) = | σ²(n) - σ_p²(n) |,  n = 0, 1, 2, 3   (10)
  • the absolute difference of the chrominance component, the chrominance difference ΔC, is found for both edge and non-edge macroblocks as the maximum of the absolute differences of the two chrominance components (see FIGURES 5A and 5B):

    ΔC = MAX{ Δ^bl Cr, Δ^bl Cb }   (11)

    where

    Δ^bl Cr = (1/64) Σ_{i=0..7} Σ_{j=0..7} | Cr(i, j) - Cr_p(i, j) |   (12)

    Δ^bl Cb = (1/64) Σ_{i=0..7} Σ_{j=0..7} | Cb(i, j) - Cb_p(i, j) |   (13)
  • a macroblock is classified as insignificant if all of its differences (ΔDC, Δσ², ΔY, and ΔC) do not exceed their corresponding threshold values.
  • K and KK are coefficients whose values depend on the specific difference for which a threshold is used and on the extent to which we wish to increase the use of temporal redundancies beyond standard motion compensation.
  • HVS human visual system
  • the human eye is less sensitive to changes that occur in very dark or very light areas of the image and is most sensitive to changes occurring in areas with average luminance (i.e., brightness or luminance masking); and
  • motion has the ability to mask changes, i.e., it is more difficult to recognize changes in texture, luminance, or color if there is motion associated with those changes; the changes are more obvious in parts of the picture with slow or no motion (i.e., motion masking).
  • five difference measurements are used: ΔDCY, Δσ², ΔY, ΔDCC, and ΔC.
  • the difference measurements ΔDCY and ΔDCC have been substituted for the simple ΔDC difference term used before.
  • the definitions of the various difference measurements under this method are as follows:
  • ΔDCY: the difference in the DC of the luminance component between the current and its corresponding predicted macroblock;
  • Δσ²: the difference in the variance of the luminance component between the current and its corresponding predicted macroblock;
  • ΔY: the absolute pixel-by-pixel difference of the luminance component between the current and its corresponding predicted macroblock;
  • ΔDCC: the difference in the DC of the chrominance component between the current and its corresponding predicted macroblock;
  • ΔC: the absolute pixel-by-pixel difference of the chrominance component between the current and its corresponding predicted macroblock.
  • an individual set of measurements is found for each macroblock in the frame by comparing characteristics of the current macroblock and its corresponding predicted macroblock. Depending on whether or not a macroblock contains a luminance or a chrominance edge, the difference measurements are found based on the average parameters of a macroblock or based on parameters of each of the four blocks that make up a macroblock.
  • the DC luminance difference of a block n, Δ^bl DCY(n), is found as the difference between the DC component of the original block and the DC component of its corresponding predicted block.
  • For a non-edge macroblock, the final DC luminance difference ΔDCY is found as the average of the four block difference measurements:

    ΔDCY = AVG{ Δ^bl DCY(n) },  n = 0, 1, 2, 3   (19)

  • For an edge macroblock, the final ΔDCY difference is found as the maximum of the four block difference measurements:

    ΔDCY = MAX{ Δ^bl DCY(n) },  n = 0, 1, 2, 3   (20)
  • the pixel luminance difference of a block, Δ^bl Y, is defined as the average pixel-by-pixel absolute difference between the original and the predicted block before the DCT.
  • the final value of the pixel luminance difference ΔY for a non-edge macroblock equals the average of the four absolute block difference measurements (22), and for an edge macroblock the maximum of the four (23).
  • the variance difference Δσ² for a non-edge macroblock is calculated as the absolute difference between the average variance of the predicted macroblock and that of the current original macroblock:

    Δσ² = | AVG{ σ²(n) } - AVG{ σ_p²(n) } |   (24)

  • For an edge macroblock, the variance difference Δσ² is calculated as the maximum of its four block variance difference measurements Δ^bl σ²:

    Δσ² = MAX{ Δ^bl σ²(n) },  n = 0, 1, 2, 3   (25)
  • the DC chrominance difference ΔDCC is found for both edge and non-edge macroblocks as the maximum of the DC difference measurements of the two chrominance components, chrominance R and chrominance B:

    ΔDCC = MAX{ Δ^bl DC_Cr, Δ^bl DC_Cb }   (27)

    where Δ^bl DC_Cr and Δ^bl DC_Cb are the differences between the DC components of the original and predicted Cr and Cb blocks.

  • the pixel chrominance difference ΔC is found for both edge and non-edge macroblocks as the maximum of the absolute pixel-by-pixel differences of the R and B chrominance components:

    ΔC = MAX{ Δ^bl Cr, Δ^bl Cb }   (30)

    where

    Δ^bl Cr = (1/64) Σ_{i=0..7} Σ_{j=0..7} | Cr(i, j) - Cr_p(i, j) |   (31)

    Δ^bl Cb = (1/64) Σ_{i=0..7} Σ_{j=0..7} | Cb(i, j) - Cb_p(i, j) |   (32)
  • a macroblock is said to contain visible changes if one or more of its difference measurements exceed their corresponding thresholds of visibility. Only if all five of the macroblock difference measurements (ΔDCY, Δσ², ΔY, ΔDCC, and ΔC) are found to be below their thresholds does the method classify the macroblock as insignificant, i.e., containing changes that are not perceived by the human eye.
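  • The per-block measurements behind these formulas can be sketched in Python with NumPy as follows; taking the DC as the block mean (proportional to the DCT DC coefficient) is an assumption of this sketch:

```python
import numpy as np

def block_dc_diff(blk, blk_p):
    """Per-block DC difference |DC - DC_p|; the DC is taken here as the
    block mean, which is proportional to the DCT DC coefficient."""
    return abs(blk.mean() - blk_p.mean())

def block_abs_diff(blk, blk_p):
    """Average pixel-by-pixel absolute difference: the 1/64-normalized
    sums of equations (31) and (32) for an 8x8 block."""
    return np.abs(blk.astype(float) - blk_p.astype(float)).mean()

def block_var_diff(blk, blk_p):
    """Difference in variance between the current and predicted block."""
    return abs(blk.var() - blk_p.var())
```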
  • T_mask is the term that takes into account the texture-masking property of the HVS and is usually calculated empirically as a function of the macroblock variance.
  • the relationship between T_mask and macroblock variance has previously been estimated as a logarithmic dependency. It has been found experimentally that a single logarithmic model does not work well for both macroblocks with low variances and macroblocks with high variances. In this method, the influence of texture masking is modeled with a function that rises slower than a logarithm at lower values of variance and saturates less at higher variances:

    T_mask_j = exp[ KT_j · log( σ²_mb + 1.1 ) ]   (34)

    where σ²_mb is the macroblock variance and KT_j is an empirical coefficient for threshold j.
  • L_mask is the term that accommodates the change in sensitivity of the human visual system to macroblocks of different illumination, i.e., luminance masking.
  • a well-known assessment of the brightness (luminance) sensitivity of the HVS is used, in which the sensitivity is a function of the mean luminance I, corresponding in our case to DC_mb, the average DC component of a macroblock.
  • M_mask is a term that takes into account the masking effect of motion. It is known that the human eye has a diminished sensitivity to changes that are accompanied by high rates of motion; this masking can raise the sensitivity thresholds by up to 10 percent, depending on the rate of motion, and the influence of motion is determined by the M_mask term accordingly.
  • K_i is a coefficient estimated empirically for each threshold.
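  • A Python sketch of the threshold computation: the texture term follows equation (34), while the rule for combining the three masking terms (shown here as a product scaled by K_i) is an assumption, since the combining formula is not legible in this text:

```python
import math

def texture_mask(mb_variance, kt):
    """Texture-masking term of equation (34); algebraically equivalent
    to (mb_variance + 1.1) ** kt."""
    return math.exp(kt * math.log(mb_variance + 1.1))

def threshold(k_i, t_mask, l_mask, m_mask):
    """ASSUMED combining rule: the per-measurement threshold as the
    product of the texture, luminance, and motion masking terms, scaled
    by the empirical coefficient K_i for that measurement."""
    return k_i * t_mask * l_mask * m_mask
```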
  • FIGURE 6 illustrates the method of the present invention for identifying significant and insignificant macroblocks, such as is performed at block 208 of FIGURE 4A.
  • At a decision block 300, it is determined whether the current macroblock is an intra macroblock. If it is, the routine proceeds to a block 310, where the macroblock is labeled as significant. If it is not, the routine proceeds to a decision block 312, where it is determined whether the macroblock contains a luminance or a chrominance edge. If the macroblock does contain an edge, the routine proceeds to a block 314, where the difference measurement routine is performed using edge-based calculations, as described above. If the macroblock does not contain an edge, the routine proceeds to a block 316, where the difference measurement routine is performed using non-edge-based calculations.
  • FIGURES 7A-7C illustrate the difference measurement routine as performed from either block 314 or block 316 of FIGURE 6. Depending on whether the routine is entered from block 314 or 316, different calculations are used for the various steps, as described above.
  • the DC luminance difference between the current and the predicted macroblocks is calculated as illustrated above in equations 3, 4, 19, and 20.
  • the threshold for the DC luminance difference is calculated as illustrated above in equations 16 and 33.
  • the luminance variance difference between the current and predicted macroblocks is calculated as illustrated above in equations 8, 10, 24, and 25.
  • the threshold for the luminance variance difference is calculated as illustrated above in equations 16 and 33.
  • the significance routine proceeds to a block 370.
  • the absolute pixel-by-pixel luminance difference between the current and the predicted macroblocks is calculated as illustrated above in equations 6, 7, 22, and 23.
  • the threshold for the pixel-by-pixel luminance difference is calculated as demonstrated above in equations 17 and 33.
  • the DC chrominance difference between the current and the predicted macroblocks is calculated as illustrated above in equation 27.
  • the threshold for the DC chrominance difference is calculated as illustrated above in equation 33.
  • the pixel-by-pixel chrominance difference between the current and the predicted macroblocks is calculated as illustrated above in equations 11 and 30.
  • the threshold for the pixel-by-pixel chrominance difference is calculated as illustrated above in equations 17 and 33.
  • FIGURE 8 illustrates a routine for setting the values of the prediction error of an insignificant macroblock to zero.
  • Such a routine may be used at block 212 of FIGURE 4A.
  • the calculation of the prediction error is bypassed.
  • the DCT transform that is normally done for inter macroblocks is bypassed.
  • the VLC coding of the DCT coefficients is bypassed.
  • the information on the motion vectors is written into the output file, as is normally done in the MPEG motion compensation procedure.
  • a symbol indicating that all of the coefficients in the macroblock are zero is written into the output file.
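  • The bypass of FIGURE 8 can be sketched as follows (Python; the bitstream-writer interface and the dct/quantize_vlc callables are hypothetical stand-ins for the standard MPEG steps):

```python
def encode_inter_macroblock(significant, motion_vector, prediction_error,
                            dct, quantize_vlc, writer):
    """One inter macroblock through the modified pipeline. dct and
    quantize_vlc stand in for the standard MPEG transform and coding
    steps; writer is a hypothetical bitstream-writer object."""
    writer.write_motion_vector(motion_vector)  # written for every macroblock
    if significant:
        writer.write_coefficients(quantize_vlc(dct(prediction_error)))
    else:
        # Bypass: no prediction error, no DCT, no VLC coding; only the
        # symbol telling the decoder that all coefficients are zero.
        writer.write_all_zero_symbol()
```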
  • a compression method can have the option not to encode the changes that have been identified as non-visible and to use different temporal resolutions for different parts of a frame, thus increasing the encoding efficiency and improving bandwidth utilization.
  • the invention provides a way of combining the foregoing frame partitioning technique with a traditional MPEG motion compensation approach to create a new perceptually adaptive motion compensation method — motion compensation with frame partitioning.
  • This new method significantly improves the use of temporal redundancies in encoding and results in a higher efficiency and quality of encoding without impacting the perceived content of the video stream.
  • the method introduces a degree of lossiness into the motion compensation procedure and, thus, allows for its perceptual adaptability.
  • Motion compensation with frame partitioning significantly improves the use of temporal redundancies in video encoding and introduces perceptual adaptability into this traditionally lossless stage of video compression.
  • the technique of the present invention can achieve a significant improvement in either the quality of the reconstructed images or in the bit-rate reduction.
  • frame partitioning has been found to improve the encoding efficiency by up to 50 percent, with the average improvement at 20 to 30 percent.
  • the motion compensation with frame partitioning method provides better encoding results than the traditional MPEG encoding even with very complex images.
  • the greatest degree of improvement will be achieved with images that have large areas of background, either moving or still, that contain some changes that are not perceived by human vision.
  • frame partitioning provides a significant improvement in the encoding efficiency, which in turn can be translated into quality improvement if the bandwidth is not constrained.
  • frame partitioning carries an analogy to MPEG-4 video objects.
  • the present technique is block based and, thus, unlike the MPEG-4 video encoding, it does not require any additional overhead.
  • one way to employ the idea of frame partitioning is to use it as an extra stage prior to the traditional MPEG motion compensation. This results in a new two-stage motion compensation method.
  • This novel approach changes the traditional MPEG motion compensation from being a lossless stage in the compression to a stage that can be either lossless or lossy, depending on the complexity of the video stream and the input encoding parameters.
  • the motion compensation with frame partitioning method has the ability to use different temporal resolutions for different parts of a single frame, which can be highly beneficial for encoding.
  • the possibility of reducing the temporal resolution for some part of the frame can prevent visible artifacts from appearing elsewhere and hence can result in a substantial improvement in picture quality.
  • the motion compensation with frame partitioning method represents just one application of the frame partitioning technique of the invention.
  • frame partitioning can also be used as a preprocessing step prior to the traditional MPEG encoding. Used in this fashion, frame partitioning provides the same benefits as in motion compensation with frame partitioning, but to a lesser extent, since it cannot directly impact the content of the compressed bit-stream.
  • another application of the technique can be found in the area of noise reduction for TV and video broadcasting, where frame partitioning can help diminish the appearance of artifacts caused by noise.


Abstract

The invention relates to an encoding method with frame partitioning for MPEG video compression. The encoding method uses a novel frame partitioning procedure as an initial stage in order to discard information corresponding to temporal changes that are not perceptible to the human eye. The frame partitioning method is block based and therefore requires no additional overhead to work within the MPEG syntax. According to the frame partitioning method, if a macroblock or block contains no perceptible change, it is labeled insignificant. The prediction error of insignificant macroblocks (blocks) is set to zero before the transform and quantization. Motion vectors are still transmitted for both types of macroblocks. The determination of whether a macroblock is significant or insignificant is based on multiple factors, such as the DC difference, the variance difference, and the absolute difference. A macroblock is classified as insignificant if none of the selected differences exceeds its corresponding threshold value. Significant macroblocks are encoded at a resolution equal to the frame rate, while the temporal resolution for insignificant macroblocks is half that frame rate or less.
PCT/CA1999/000417 1998-05-11 1999-05-11 Method and system for MPEG encoding with frame partitioning WO1999059342A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU38049/99A AU3804999A (en) 1998-05-11 1999-05-11 Method and system for mpeg-2 encoding with frame partitioning

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US8501098P 1998-05-11 1998-05-11
US60/085,010 1998-05-11

Publications (1)

Publication Number Publication Date
WO1999059342A1 true WO1999059342A1 (fr) 1999-11-18

Family

ID=22188720

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA1999/000417 WO1999059342A1 (fr) Method and system for MPEG encoding with frame partitioning

Country Status (2)

Country Link
AU (1) AU3804999A (fr)
WO (1) WO1999059342A1 (fr)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0259562A1 (fr) * 1986-08-29 1988-03-16 Licentia Patent-Verwaltungs-GmbH Procédé de codage à prédiction intertrame avec compensation du mouvement
EP0535963A2 (fr) * 1991-10-02 1993-04-07 Matsushita Electric Industrial Co., Ltd. Codeur à transformation orthogonale
US5539468A (en) * 1992-05-14 1996-07-23 Fuji Xerox Co., Ltd. Coding device and decoding device adaptive to local characteristics of an image signal
EP0582819A2 (fr) * 1992-06-30 1994-02-16 Sony Corporation Appareil pour le traitement digital de signal d'image
EP0613299A2 (fr) * 1993-02-25 1994-08-31 Industrial Technology Research Institute Architecture à bus dual pour la compensation de mouvement
US5742289A (en) * 1994-04-01 1998-04-21 Lucent Technologies Inc. System and method of generating compressed video graphics images
GB2317525A (en) * 1996-09-20 1998-03-25 Nokia Mobile Phones Ltd Motion estimation system for a video coder

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"ACTIVITY DETECTION", IBM TECHNICAL DISCLOSURE BULLETIN, vol. 34, no. 7B, 1 December 1991 (1991-12-01), pages 217 - 219, XP000282558, ISSN: 0018-8689 *
GHANBARI M: "MOTION VECTOR REPLENISHMENT FOR LOW BIT-RATE VIDEO CODING", SIGNAL PROCESSING: IMAGE COMMUNICATION, vol. 2, no. 4, 1 December 1990 (1990-12-01), pages 397-407, XP000234774, ISSN: 0923-5965 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7889791B2 (en) 2000-12-21 2011-02-15 David Taubman Method and apparatus for scalable compression of video
WO2003107683A1 (fr) * 2002-06-12 2003-12-24 Unisearch Limited Method and apparatus for scalable compression of video signals

Also Published As

Publication number Publication date
AU3804999A (en) 1999-11-29

Similar Documents

Publication Publication Date Title
US20220312021A1 (en) Analytics-modulated coding of surveillance video
US6404814B1 (en) Transcoding method and transcoder for transcoding a predictively-coded object-based picture signal to a predictively-coded block-based picture signal
US8665960B2 (en) Real-time video coding/decoding
US6862372B2 (en) System for and method of sharpness enhancement using coding information and local spatial features
US6466624B1 (en) Video decoder with bit stream based enhancements
EP2278815B1 (fr) Procédé et dispositif de commande d'un filtre de boucle ou de post-filtrage pour un codage vidéo à base de blocs et à compensation de mouvement
US9247250B2 (en) Method and system for motion compensated picture rate up-conversion of digital video using picture boundary processing
JP2006519565A (ja) ビデオ符号化
JP2006519564A (ja) ビデオ符号化
US20150312575A1 (en) Advanced video coding method, system, apparatus, and storage medium
US11743475B2 (en) Advanced video coding method, system, apparatus, and storage medium
EP1506525B1 (fr) Systeme et procede d'amelioration de la nettete d'une video numerique codee
US20080247466A1 (en) Method and system for skip mode detection
KR20110042321A (ko) 관련 시각적 디테일의 선택적인 보류를 이용하는 고 효율 비디오 압축을 위한 시스템들 및 방법들
JP2002369209A (ja) Mpeg4標準を用いたビデオ符号化の方法及び装置
US8472523B2 (en) Method and apparatus for detecting high level white noise in a sequence of video frames
WO1999059342A1 (fr) 1999-11-18 Method and system for MPEG encoding with frame partitioning
Pronina et al. Improving MPEG performance using frame partitioning
JP2002010268A (ja) 画像符号化装置および画像符号化方法

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

NENP Non-entry into the national phase

Ref country code: KR

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase