TW201105144A - Image processing apparatus and method - Google Patents

Image processing apparatus and method

Info

Publication number
TW201105144A
TW201105144A TW99108540A
Authority
TW
Taiwan
Prior art keywords
prediction
mode
offset
unit
pixel
Prior art date
Application number
TW99108540A
Other languages
Chinese (zh)
Other versions
TWI400960B (en)
Inventor
Kazushi Sato
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of TW201105144A publication Critical patent/TW201105144A/en
Application granted granted Critical
Publication of TWI400960B publication Critical patent/TWI400960B/en

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
                    • H04N19/10 … using adaptive coding
                        • H04N19/102 … characterised by the element, parameter or selection affected or controlled by the adaptive coding
                            • H04N19/103 Selection of coding mode or of prediction mode
                                • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
                                • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
                            • H04N19/117 Filters, e.g. for pre-processing or post-processing
                        • H04N19/134 … characterised by the element, parameter or criterion affecting or controlling the adaptive coding
                            • H04N19/154 Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
                            • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
                                • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
                        • H04N19/169 … characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
                            • H04N19/17 … the unit being an image region, e.g. an object
                                • H04N19/176 … the region being a block, e.g. a macroblock
                            • H04N19/182 … the unit being a pixel
                    • H04N19/40 … using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
                    • H04N19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
                    • H04N19/50 … using predictive coding
                        • H04N19/503 … involving temporal prediction
                            • H04N19/51 Motion estimation or motion compensation
                                • H04N19/513 Processing of motion vectors
                                    • H04N19/517 Processing of motion vectors by encoding
                                        • H04N19/52 Processing of motion vectors by encoding by predictive encoding
                                • H04N19/523 Motion estimation or motion compensation with sub-pixel accuracy
                        • H04N19/593 … involving spatial prediction techniques
                    • H04N19/60 … using transform coding
                        • H04N19/61 … using transform coding in combination with predictive coding
                    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
                        • H04N19/82 … involving filtering within a prediction loop
    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T9/00 Image coding
                    • G06T9/004 Predictors, e.g. intraframe, interframe coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The disclosed subject matter relates to an image processing apparatus and method that improve the coding efficiency of intra prediction. When the most suitable intra prediction mode is mode 0, the neighboring pixels used in the prediction of a target block are the pixels A0, A1, A2, and A3. Using these pixels and a 6-tap FIR filter, pixels a-0.5, a+0.5, ... of 1/2-pixel precision are generated; linear interpolation is then used to generate pixels a-0.75, a-0.25, a+0.25, and a+0.75 of 1/4-pixel precision. The phase differences between the integer pixels and the generated fractional-precision pixels, that is, the values -0.75 to +0.75, are used as candidates for the horizontal shift amount, and the most suitable shift amount is determined. The disclosed subject matter can be applied, for example, to an image encoding apparatus that encodes using the H.264/AVC scheme.
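The fractional-precision neighbor generation described in the abstract can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the H.264/AVC 6-tap half-pel kernel [1, -5, 20, 20, -5, 1]/32 (the description reuses the fractional-pel motion-compensation filter), 8-bit samples, simplified border handling, and made-up sample values; quarter-pel samples are obtained by linear interpolation between integer- and half-pel ones.

```python
# Sketch of generating shifted neighboring-pixel candidates (abstract's
# mode-0 example). Assumption: the 6-tap kernel [1, -5, 20, 20, -5, 1]/32
# stands in for the patent's FIR filter; samples are 8-bit.

def half_pel(p, i):
    """Half-pel sample between p[i] and p[i+1] via the 6-tap FIR filter."""
    v = (p[i - 2] - 5 * p[i - 1] + 20 * p[i] + 20 * p[i + 1]
         - 5 * p[i + 2] + p[i + 3] + 16) >> 5
    return min(255, max(0, v))          # clip to the 8-bit sample range

def shifted_sample(p, i, shift):
    """Sample of the neighbor row p at fractional position i + shift.

    shift is a multiple of 0.25 in [-0.75, +0.75]; quarter-pel values are
    produced by linear interpolation between integer- and half-pel samples.
    """
    if shift == 0.0:
        return p[i]
    if shift > 0:
        h = half_pel(p, i)              # value at position i + 0.5
        return {0.5: h,
                0.25: (p[i] + h + 1) >> 1,
                0.75: (h + p[i + 1] + 1) >> 1}[shift]
    h = half_pel(p, i - 1)              # value at position i - 0.5
    return {-0.5: h,
            -0.25: (p[i] + h + 1) >> 1,
            -0.75: (h + p[i - 1] + 1) >> 1}[shift]

# Candidate shifted versions of four upper neighbors (row indices 3..6):
row = [90, 96, 100, 104, 110, 120, 128, 132, 136, 140]
candidates = {s: [shifted_sample(row, i, s) for i in range(3, 7)]
              for s in (-0.75, -0.5, -0.25, 0.0, 0.25, 0.5, 0.75)}
```

Each entry of `candidates` is the four upper-neighbor samples resampled at one of the seven candidate shift amounts from -0.75 to +0.75, which the encoder then evaluates to pick the most suitable shift.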

Description

201105144 六、發明說明: 【發明所屬之技術領域】 本發明係關於一種圖像處理裝置及方法,尤其係關於一種 可提高幀内預測之編碼效率之圖像處理裝置及方法。 【先前技術】 近年來’將圖像資訊作為數位而進行處理,此時,以高效 率之資訊傳輸及儲存為目的,而正在普及如下裝置,其利用 圖像資訊特有之冗長性,並採用藉由離散餘弦轉換等正交轉 換及動態補償而進行壓縮之編碼方式來對圖像進行壓縮編 碼。該編碼方式例如有MPEG(Moving Picture Experts Group, 動畫專家群)等。 尤其,MPEG2(ISO/IEC(Internet Standard Organization/ International Electrotechnical Commission,國際標準組織/國 際電子技術委員會)1 38 18-2)係被定義為通用之圖像編碼方 式,且係將隔行掃描圖像及逐行掃描圖像之雙方、以及標準 解像度圖像及高精細圖像網羅在内之標準。例如,MPEG2目 前廣泛應用於專業用途及消費用途之廣泛用途中。藉由使用 MPEG2壓縮方式,若為例如具有720x480像素之標準解像度 之隔行掃描圖像,則可分配4至8 Mbps之編碼量(位元率)。 又,藉由使用MPEG2壓縮方式,若為例如具有1920x1088像 素之高解像度之隔行掃描圖像,則可分配18至22 Mbps之編 碼量(位元率)。藉此,可實現高壓縮率及良好之畫質。 MPEG2係主要以適合於廣播用之高畫質編碼為對象,但 無法應對低於MPEG1之編碼量(位元率)亦即壓縮率更高之編 145451.doc 201105144 碼方式。由於便攜終端之普及,可認為今後此種編碼方式之 需求會提高,對應於此而進行MPEG4編碼方式之標準化。關 於圖像編碼方式,於1998年12月ISO/IEC 14496-2之規格被 公認為國際標準。 進而,近年來,起初以電視會議用之圖像編碼為目的而推 進 H.26L(ITU-T(International Telecommunication Union Telecommunication Standardization Sector,國際電信聯盟電 信標準化部門)Q6/16 VCEG(Video Coding Experts Group, 視訊編碼專家群))之標準之規格化。眾所周知的是,H.26L 係與MPEG2或MPEG4之先前之編碼方式相比,因其編碼及 解碼而需要較大之運算量,但可實現更高之編碼效率。又, 目前作為MPEG4之活動之一環節,進行基於該H.26L、且亦 引入H.26L中不支持之功能而實現更高之編碼效率之標準化 來作為 Joint Model of Enhanced-Compression Video Coding(增 強壓縮視訊編碼之聯合模型)。作為標準化之進程,於2003 年 3 月成為 H.264 及 MPEG-4 Part 10 (Advanced Video Coding(進階視訊編碼),以下記作H.264/AVC)之國際標準。 進而,作為其擴展,將RGB(red green blue,紅綠藍)或 4:2:2、4:4:4之商業用所必需之編碼方法、或MPEG-2中規定 之 8x8 DCT(Discrete Cosine Transform,離散餘弦轉換)或量 化矩陣亦包括在内的FRExt(Fidelity Range Extension,保真度 範圍擴展)之標準化已於2005年2月完成。藉此,成為使用 H.264/AVC亦可良好地表現電影中所含之影片雜訊之編碼方 式,從而可用於Blu-Ray Disc(藍光光碟)(商標)等之廣泛用途 145451.doc 201105144 中。 然而,最近,對於欲將高畫質圖像之4倍之4〇〇〇χ2〇⑽像素 左右之圖像進行壓縮的更高之壓縮率編碼之需求提高。或 者,對於欲在如網際網路般傳輸容量有限之環境中傳=高晝 質圖像之更高的壓縮率編碼之需求提高。因此,在上述之隸 ^ITU-T^VCEG(=Video Coding Expert Group , 專家群)中,繼續進行與編碼效率之改善相關之研究。 例如,於MPEG2方式中,藉由線性内插處理而進行Μ像 素精度之動態預測、補償處理。另_方面,於H 264/avc方式 中,進行使用 6階 FIR(Finite Impulse Resp〇nse Fmer,有= 脈衝響應濾波器)濾波器之1/4像素精度之預測、補償處理。 針對該1/4像素精度之預測、補償處理,近年來研究進一 步提向H.264/AVC方式之效率。作為用於此之編碼方式之一 種,於非專利文獻丨中提出有1/8像素精度之動態預測。 亦即,於非㈣文獻1巾,1/2像㈣度之插值處理係藉由 
濾波器卜3、12、-39、158、158、_39、12、_3]/256 而進行。 又,1/4像素精度之插值處理係藉由濾波器[_3、η、η 229: 71、_21、6、-1]/256而進行,Μ像素精度之插值處理 係藉由線性插值而進行。 如上所述,藉由進行使用像素精度更高之插值處理之動離 預測’尤其於具有高解像度之紋理、動作比較緩慢之序列 中,可提高預測精度,並可實現編碼效率之提高。 然而’作為H.264/AVG方式實現較先前之MITEG2方式等更 高之編碼效率的原因之―,可列舉採用了以下說明之幢内預 145451.doc 201105144 測方式。 HW方式中,關於亮度信號,規定有$種叫像素及 8X8像素之區塊單元、以及4種16X16像素之巨集區塊單元之 中貞内預測模式。關於色差作泸, 尼早疋之 巴圭彳。唬,規疋有4種8><8像素之區 元之幀内預測模式。多罢p缺# μ & π、, 色差彳S唬之幀内預測模式係可與亮产 號之幀内預測模式獨立地設定。外 力汴頂冽模式之類型係盥 圖1之以序號〇、1、3至8所矣+今七a必丄命 至8所表不之方向對應。預測模式2係 均值預測。 藉由採用如此之幢内預測方式,實現預測精度之提高。缺 而,。於H:264/AVC方式中,如圖1之方向所*,只能進行以 22.5°為單位之幀内預測。因此,於邊緣之傾斜度為此以外 之角度之情形時,會限制編碼效率之提高。 因此,為進一步改善編碼效率,於非專利文獻2中提出有 以較2 2.5。之單位更細之角度進行預測。 [先前技術文獻] [非專利文獻] [非專利文獻 1]「Motion compensated prediction with 1/8-pel displacement vector resolution」,VCEG-AD09,ITU-Telecommunications Standardization Sector STUDY GROUP Question 6 Video coding Experts Group(VCEG),23-27 Oct 2006 [非專利文獻 2]Virginie Drugeon,Thomas Wedi,and Torsten Palfner,「High Precision Edge Prediction for Intra Coding」,2008 145451.doc 201105144 【發明内容】 [發明所欲解決之問題] 然而,於H.264/AVC方式之幀内預測中,預測時使用有成 為編碼對象之區塊之特定之鄰接像素,與此相對,於非專利 文獻2所揭示之提案中,成為編碼對象之區塊之除鄰接像素 以外之像素亦必需用於預測。 因此,於非專利文獻2所揭示之提案中,即便以較22 5。之 單位更細之角度進行預測,亦會導致記憶體存取次數或處理 增加。 本發明係雲於如此之情況而完成者,其不增加記憶體存取 次數或處理便可進一步提高幀内預測之編碼效率。 [解決問題之技術手段] 本發明之第m點之圖像處理裝置包括:模式決定機構, 其係針對成為幢内預測之處理對象之幢内預測區塊,對圖像 資料決以貞内預測之預測模式;相位偏移機構,其係依昭盘 藉由上述模式決定機構所決定之上述㈣模式對應mm 向及成為候補之偏移量’使以特定之位置關係與上述 測區塊鄰接之鄰接像素之相位偏移;偏移量決定機構,、 使用上述鄰接像素及藉由上述相位偏移機構偏移上述相= 鄰接像素’對上述鄰接像素決定上述相位之最佳 · 預測圖像生成機構,其係使用依照藉由上述 =機= 所決定之上述最佳偏移量而偏移上述相位之鄰接像 上述幀内預測區塊之預測圖像。 成 本發明之第1觀點之圖像處理裝置可争、^ 又匕括:編碼機構, 145451.doc 201105144 、’、f上迷幀内預測區塊之 ML 豕,、藉由上述預測圖像生成機 菁斤生成之上述預測圖像之差分- 流;及傳輸機構,Μ將㈣由:扁碼而生成編碼串 〜…、係將表不猎由上述偏移量決定機構所決 I μ最佳偏移量之偏移量資訊、及表示藉由上述模式決 定機構所決定之上述預龍式之制模式資訊,與藉由:述 編碼機構所生成之編碼串流一併傳輸。 上述編碼機構係可將表示針對上述悄内預測區塊所決定之 上述最佳偏移量與針對賦予MostPr〇bableM〇de(最可能模幻 之區塊所決定之最佳偏銘吾# θ〜、㈣偏移量之差》的差分資訊作為上述偏移 置育讯而進行編碼;上述傳輸機構可傳輸藉由上述編碼機構 所生成之編碼串流及上述差分資訊。 上述相位偏移機構係可於藉由上述模式決定機構所決定之 prediction(Direct Current prediction,i 預測)模式之情形時,禁止上述相位之偏移。 机 上述相位偏移機構係可於藉由上述模式決定機構所決定之 上述預測模式為Vertical 
predicti〇n(垂直預測)模式、BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to an image processing apparatus and method, and more particularly to an image processing apparatus and method that can improve encoding efficiency of intra prediction. [Prior Art] In recent years, the image information has been processed as a digital device. At this time, for the purpose of efficient information transmission and storage, the following devices are being used, which utilize the versatility of image information and use The image is compression-encoded by an encoding method such as discrete cosine transform or the like by orthogonal transform and dynamic compensation. The coding method is, for example, MPEG (Moving Picture Experts Group) or the like. In particular, MPEG2 (ISO/IEC (Internet Standard Organization/International Electrotechnical Commission) 1 38 18-2) is defined as a general image coding method, and is an interlaced image and Standards for both progressive scan images, as well as standard resolution images and high-resolution image snares. For example, MPEG2 is currently widely used in a wide range of applications for professional and consumer use. By using the MPEG2 compression method, for example, an interlaced image having a standard resolution of 720 x 480 pixels, an encoding amount (bit rate) of 4 to 8 Mbps can be allocated. Further, by using the MPEG2 compression method, for example, an interlaced image having a high resolution of 1920 x 1088 pixels, an encoding amount (bit rate) of 18 to 22 Mbps can be allocated. Thereby, high compression ratio and good image quality can be achieved. The MPEG2 system mainly targets high-definition encoding suitable for broadcasting, but cannot cope with the encoding amount (bit rate) lower than that of MPEG1, that is, the encoding method of 145451.doc 201105144. 
Due to the spread of portable terminals, it is considered that the demand for such a coding method will increase in the future, and the MPEG4 coding method will be standardized in accordance with this. Regarding the image coding method, the specifications of ISO/IEC 14496-2 were recognized as international standards in December 1998. Furthermore, in recent years, H.26L (ITU-T (International Telecommunication Union Telecommunication Standardization Sector) Q6/16 VCEG (Video Coding Experts Group) was promoted for the purpose of image coding for video conferencing. Standardization of standards for video coding expert groups)). As is well known, the H.26L system requires a larger amount of computation due to its encoding and decoding than the previous encoding method of MPEG2 or MPEG4, but can achieve higher encoding efficiency. In addition, as a part of the activities of MPEG4, the standardization of higher coding efficiency based on the H.26L and the functions not supported in H.26L is introduced as the Joint Model of Enhanced-Compression Video Coding. Compressed video coding joint model). As a process of standardization, in March 2003, it became the international standard for H.264 and MPEG-4 Part 10 (Advanced Video Coding, hereinafter referred to as H.264/AVC). Further, as an extension thereof, an encoding method necessary for commercial use of RGB (red green blue) or 4:2:2, 4:4:4, or 8x8 DCT (Discrete Cosine) prescribed in MPEG-2 Standardization of FRExt (Fidelity Range Extension), including Transform, Discrete Cosine Transform) or Quantization Matrix, was completed in February 2005. Therefore, it is possible to use H.264/AVC to express the encoding method of the movie noise contained in the movie well, and thus it can be used for a wide range of uses such as Blu-Ray Disc (trademark) 145451.doc 201105144. . 
Recently, however, there has been an increasing demand for a higher compression ratio encoding for compressing an image of 4 〇〇〇χ 2 〇 (10) pixels or so of 4 times of a high-quality image. Alternatively, there is an increasing demand for higher compression ratio coding to transmit high quality images in an environment with limited transmission capacity such as the Internet. Therefore, in the above-mentioned ITU-T^VCEG (=Video Coding Expert Group), research on improvement in coding efficiency is continued. For example, in the MPEG2 system, dynamic prediction and compensation processing of the pixel accuracy is performed by linear interpolation processing. On the other hand, in the H 264/avc method, prediction and compensation processing using 1/4 pixel precision of a 6th-order FIR (Finite Impulse Resp〇nse Fmer) filter is performed. For the prediction and compensation processing of the 1/4 pixel accuracy, in recent years, the efficiency of the H.264/AVC method has been further improved. As one of the encoding methods used for this, a non-patent document has been proposed to have a dynamic prediction of 1/8 pixel precision. That is, in the non-fourth document, the interpolation processing of the 1/2 image (fourth degree) is performed by the filters 3, 12, -39, 158, 158, _39, 12, _3]/256. Moreover, the interpolation processing of 1/4 pixel precision is performed by the filters [_3, η, η 229: 71, _21, 6, -1]/256, and the interpolation processing of the pixel precision is performed by linear interpolation. . As described above, by performing the motion prediction using the interpolation processing with higher pixel precision, especially in the sequence with high resolution and slow motion, the prediction accuracy can be improved and the coding efficiency can be improved. However, the reason why the H.264/AVG method achieves higher coding efficiency than the previous MITEG2 method, etc., can be exemplified by the intra-building pre-test 145451.doc 201105144. 
In the HW method, regarding the luminance signal, a median intra prediction mode is defined for a block unit of a type of pixel and 8×8 pixels, and a macroblock unit of four types of 16×16 pixels. Regarding the color difference, the early morning of the Pakistani 巴.唬, there are four kinds of intra prediction modes of 8<8 pixels. The intra prediction mode of the color difference 彳S唬 can be set independently of the intra prediction mode of the bright number. The type of external force dome mode is shown in Figure 1. The serial number 〇, 1, 3 to 8 矣 今 今 今 今 今 今 今 今 今 今 今 今 今 今 今 今 丄 。 。 。 。 。 。 。 。 。 。 Prediction mode 2 is the mean prediction. By adopting such an intra-prediction method, the prediction accuracy is improved. Missing, In the H:264/AVC mode, as shown in the direction of Fig. 1, only intra prediction in units of 22.5° can be performed. Therefore, when the inclination of the edge is at an angle other than this, the improvement of the coding efficiency is restricted. Therefore, in order to further improve the coding efficiency, it is proposed in Non-Patent Document 2 to be 2 2.5. The unit is forecasted at a finer angle. [Prior Art Document] [Non-Patent Document] [Non-Patent Document 1] "Motion compensated prediction with 1/8-pel displacement vector resolution", VCEG-AD09, ITU-Telecommunications Standardization Sector STUDY GROUP Question 6 Video coding Experts Group (VCEG) ), 23-27 Oct 2006 [Non-Patent Document 2] Virginie Drugeon, Thomas Wedi, and Torsten Palfner, "High Precision Edge Prediction for Intra Coding", 2008 145451.doc 201105144 [Summary of the Invention] [Problems to be Solved by the Invention] However, in the intra prediction of the H.264/AVC method, the specific adjacent pixel of the block to be coded is used for the prediction, and the object disclosed in the proposal disclosed in Non-Patent Document 2 Pixels other than adjacent pixels of the block must also be used for prediction. 
Therefore, in the proposal disclosed in Non-Patent Document 2, even if it is 22. Forecasting at a finer angle of the unit will also result in an increase in the number of memory accesses or processing. The present invention is accomplished in such a situation that it can further improve the coding efficiency of intra prediction without increasing the number of memory accesses or processing. [Technical means for solving the problem] The image processing device according to the mth point of the present invention includes a mode determining means for predicting an intra-frame prediction target to be an intra-intra prediction, and predicting the image data by intra-prediction a prediction mode; a phase shifting mechanism that is adjacent to the measurement block by a specific positional relationship by the (4) mode corresponding mm direction and the candidate offset amount determined by the mode determining means a phase shift of the adjacent pixels; an offset determining means for determining the phase optimum for the adjacent pixels by using the adjacent pixels and the phase shifting means offsetting the phase = adjacent pixels And using the predicted image of the adjacent intra prediction block in which the phase is shifted according to the optimal offset determined by the above = machine =. 
The image processing apparatus according to the first aspect of the invention is also applicable to: encoding means, 145451.doc 201105144, ML 豕 of the intra prediction block in ', f, by the above-mentioned predictive image generating machine The differential-flow of the above-mentioned predicted image generated by the jinji; and the transmission mechanism, ((4) is generated by the flat code to generate the encoded string~..., which is determined by the above-mentioned offset determining mechanism The offset amount information of the shift amount and the pattern information indicating the pre-dragon type determined by the mode determining unit are transmitted together with the encoded stream generated by the encoding unit. The above coding mechanism may indicate that the optimal offset determined for the intra-predicted block is the best bias determined by the block given to the MostPr〇bableM〇de (the most likely phantom block) And (4) the difference information of the difference between the offsets is encoded as the offset-casting information; the transmission mechanism can transmit the encoded stream generated by the encoding mechanism and the difference information. The phase shifting mechanism can be In the case of the prediction (Direct Current Prediction) mode determined by the mode determining means, the phase shift is prohibited. The phase shifting mechanism is determined by the mode determining means. The prediction mode is Vertical predicti〇n (vertical prediction) mode,

Diag_Down_Left predictions^ ^ Vertical_Left prediction^ 式之情形時’對於上述鄰接像素中之上部鄰接像素,使 H〇nzontal(水平)方向之相位依照上述候補之偏移量進行偏 移,且對於上述鄰接像素中之左部鄰接像素,可禁止 Vertical方向之相位之偏移。 上述相位偏移機構係可於藉由上述模式決定機構所決定之 上述預測模式為Horizontal prediction模式、或H〇riz〇ntai prediction模式之情形時,對於上述鄰接像素中之左部鄰接 145451.doc 201105144 像素,使vertica丨方向之相位依照成為上述候補之偏移量 行偏移’且對於上述鄰接像素中之上部鄰接像素,禁2 Horizontal方向之相位之偏移。 • 上述模式決定機構係可決定上述幀内預測之全部之預測模 式;上述相位偏移機構係可依照與藉由上述模式決定機構^ 決定之上述全部之預測模式對應之偏移方向及成為候補之偏 移量,使上述鄰接像素之相位偏移;上錢移量衫機構係 可使用上述鄰接像素及藉由上述相位偏移機構偏移上述相位 之鄰接像素’對上述鄰接像素決定上述相位之最佳偏移量及 最佳預測模式。 本發明之第1觀點之圖像處理裝置可更包括動態預測補償 孤構#針對上述圖像之幢間動態預測區塊進行幢間動態預 測;且,上述相位偏移機構係可使用藉由上述動態預測補償 機構於預測小數像素精度時使用之毅器,使上述鄰接像素 之相位偏移。 ' 本發月之第1觀點之圖像處理方法包含如下步驟·由圖像 處理裝置進行:針對成為_預測之處理對象之㈣内預測區 塊’對®像資料決定巾貞㈣測之預龍式;依照與所決定之 . 上述預測模式對應之偏移方向及成為候補之偏移量,使以特 • 定之位置關係與上述㈣内預測區塊鄰接之鄰接像素之相位偏 移’使用上述鄰接像素及已偏移上述相位之鄰接像素,對上 述鄰接像素決定上述相位之最佳偏移量;及使用依照所決定 之上述最佳偏移量而偏移上述相位之鄰接像素,生成上述幀 内預測區塊之預測圖像。 145451.doc 201105144 本發明之第2觀點之圖像處理裝置包括:接收機構,其接 收預測模式資訊及偏移量資訊,該預測模式資訊係表示針對 成為巾貞内預測之處理對象之_預測區塊之幢内預測之預測 模式’該偏移#資訊係表示使以特定之位置關係、與上述幢内 預測區塊鄰接之鄰接像素之相位根據上述預測模式資訊所表 不之預測模式而偏移之偏移量;相位偏移機構,其係依照與 藉由上述接收機構接收之上述預測模式對應之偏移方向及偏 移量,使上述鄰接像素之相位偏移;及預測圖像生成機構, 其係使用藉由上述相位偏移機構偏移上述相位之鄰接像素, 生成上述幀内預測區塊之預測圖像。 上述接收機構係可接收表示針對上述幀内預測區塊之偏移 量與針對賦予MostProbableMode之區塊之偏移量之差分的差 分資訊來作為上述偏移量資訊。 本發明之第2觀點之圖像處理裝置可更包括解碼機構,其 使用藉由上述預測圖像生成機構所生成之預測圖像,對上述 幀内預測區塊進行解碼。 上述解碼機構係可對藉由上述接收機構接收之預測模式資 訊及上述偏移量資訊進行解碼。 上述相位偏移機構係於藉由上述解碼機構所解碼之上述預 測模式為DC prediction模式之情形時,可禁止上述鄰接像素 之相位之偏移。 上述相位偏移機構係可於藉由上述解碼機構所解碼之上述 預測模式為 Vertical prediction 模式、Diag_D〇Wn—Left prediction模式、或Vertical—Left prediction模式之情形時, 14545 丨.doc •10- 201105144 對於上述鄰接像素中之上部鄰接像素,使HGdZ〇ntal方向之 相位依照藉由上述解碼機構所解碼之上述偏移量進行偏移, 且對於上述鄰接像素中之左部鄰接像素,禁止vertical方向 之相位之偏移。 上述相位偏移機構係可於藉由上述解碼機構所解碼之上述 預測模式為 H〇rizontal predicti〇n 模式、或 H〇dz〇ntai—办 prediction模式之情形時,對於上述鄰接像素中之左部鄰接 像素’使Vertieal方向之相位依照藉由上述解碼機構所解碼 之上述偏移量進行偏移,且對於上述鄰接像素t之上部鄰接 像素,禁止H〇rizontal方向之相位之偏移。 本發明之第2觀點之圖像處理裝置可更包括動態預測補償 機構’其使賴編碼之㈣動態制區塊及藉由上述解碼機 構所解碼之移動向量,進行t貞間動態預測;且,上述相位偏 移機構係可使用藉由上述動態預測補償機構於預測小數像素 精度時使用之毅H,使上述鄰接像素之相位偏移。 本發明之第2觀點之圖像處理方法包含如下步驟:由圖像 
處=裝置進行··接收預測模式資訊及偏移量資訊,該預測模 式資訊係表示針對成為φ貞内預測之處理對象之巾貞内預測區塊 之幢内預測之預測模式,該偏移量資訊係表示使以特定之位 置關係與上述幀内預測區塊鄰接之鄰接像素之相位根據上述 預測模式資訊所表示之預測模式而偏移之偏移量;依照與所 接收之上述預測模式對應之偏移方向及偏移量,使上述鄰接 像素之相位偏移;使用已偏移上述相位之鄰接像素,生成上 述幢内預測區塊之預測圖像。 14545I.doc 201105144 於本發明之第1觀點中,針對成為幢内預測之處理對象之 悄内預測區塊,對圖像資料決定㈣制之預龍式;依昭 ,所決定之上㈣賴式對叙偏移方向^為候補之偏移 S ’使則寺定之位置關係與上述巾貞内預測區塊鄰接之鄰接像 素之相位偏移、繼而,使用上述鄰接像素及偏移上述相位之 鄰接像素,對上述鄰接像素決^上述相位之最佳偏移量;使 用依照所決定之上述最佳偏移量而偏移上述相位之鄰接像 素,生成上述幀内預測區塊之預測圖像。 *於本發明之第2觀點中,接收預測模式資訊及偏移量資 訊’該預測模式資訊係表示針對成為㈣内預測之處理對象之 傾内預測區塊之_預測之預測模式,該偏移量資訊係表示 使以特定之位置關係與上述t貞内預測區塊鄰接之鄰接像素之 ^位根據上述預測模式f訊所表示之制模式而偏移之偏移 =;依照與所接收之上述制模式對應之偏移方向及偏移 量’使上述鄰接像素之相位偏移。繼而,使用已偏移上述相 位之鄰接像素,生成上述巾貞内預測區塊之預測圖像。 另外,上述圖像處理裝置之各個可為獨立之裝置’亦可為 構成-個圖像編碼裝置或圖像解碼裝置之内部區塊。 [發明之效果] 根據本發明之第1觀點,0Γ茲+ U & 弟観點可精由幀内預測而生成預測圖 根據本發明之第1觀點,不增加記憶體存取次數或 處理便可提高編碼效率。 根據本發明之第2觀點,可藉由幀内預測而生成預測圖 根據本發明之第2觀點’不增加記憶體存取次數或 145451.doc 12 201105144 處理便可提高編碼效率。 【實施方式】 以下,參考圖式對本發明之實施形態加以說明。 [圖像編碼裝置之構成例] 圖2表示作為適用本發明之圖像處理裝置之圖像編碼裝置 之一實施形態之構成。 該圖像編碼裝置51係以例如H.264及MPEG-4 Pan 1〇 (Advanced Video Coding)(以下記作 H 264/Avc)方式對圖像 進行壓縮編碼。 於圖2之例巾’圖像編石馬裝置51包括a/d(繼⑽/叫㈣,類 比/數位)轉換部61、畫面重排緩衝部62、運算部〇、正交轉 換部64、量化部65、可逆編碼部66、儲存緩衝部67、逆量化 部68、逆正交轉換部69、運算部7〇、解塊滤波器71、訊框記 憶體72、開關73、幀内預測部74、鄰接像素内插部&動態 預測、補償部76、預测圓像選擇部刃及速率控制部π。 A/D轉換部61對所輸入之圖像進行A/D轉換後,輸出至蚩 面重排緩衝部62並記憶於此。畫面重排緩衝部62係將所記: 之顯示依序之訊框之圖像按照G〇P(Gr〇up Qf pieture,圖像 群)’重排為用以編碼之訊框之順序。 運算部63係使自畫面重排緩衝部62所讀出之圖像,減去藉 由預測圖像選擇部77所選擇之來自㈣預測㈣之㈣㈣ 或來自咖測、補償部76之預測圖像,並將其差分資訊輸 出至正父轉換部64。正交轉換部64係相對於來自運算和之 差分資訊,實施離散餘弦轉換、K_L轉換等正交轉換,並輸 145451.doc 201105144 ^其轉換係數。量化部65係對正交轉換部64 數進行量化。 35 <轉換係 成為量化部65之輸出之經量化之轉換係數係輸入 碼部“,並於此實施可變長度編碼、算術編 :編 被壓縮。 足蝎碼而 #可逆編碼部6 6係自幀内預測部7 4取得表示幀内預測之士兮 等’並自動態預測、補償部76取得表示幀間預測模式之 =另外’以下將表示t貞内預測之資訊亦稱作巾貞内預測模^ 貝讯》又’以下將表示用來表示幀間預測之資訊模式之 亦稱作幀間預測模式資訊。 ° 可逆編碼部66係對經量化之轉換係數進行編碼且對表干 巾貞内預測之資訊或表示㈣預測模式之f = 設為壓縮圖像中之標頭資訊之—部分。可逆編㈣U將= 編碼之資料供給至儲存緩衝部67並儲存於此。 例如,於可逆編碼部66中,進行可變長度編碼或算術編碼 4可逆編碼處理。作為可變長度編碼,可列舉乩264/八乂〔方 式中規定之 CAVLC(Context_Adaptive VadabieDiag_Down_Left predictions^ ^ Vertical_Left prediction^ In the case of the above equation, the phase of the H〇nzontal (horizontal) 
direction is shifted according to the offset of the candidate for the adjacent pixels in the adjacent pixels, and for the adjacent pixels The left adjacent pixel can disable the phase shift in the vertical direction. The phase shifting mechanism is configured such that when the prediction mode determined by the mode determining means is the Horizontal prediction mode or the H〇riz〇ntai prediction mode, the left adjacent one of the adjacent pixels is 145451.doc 201105144 The pixel is such that the phase of the vertica丨 direction is offset by the shift amount which becomes the above-described candidate, and the phase of the adjacent horizontal pixel is offset from the adjacent pixel in the adjacent pixel. • The mode determining mechanism determines all of the prediction modes of the intra prediction; and the phase shifting mechanism is configured to be in accordance with an offset direction corresponding to all of the prediction modes determined by the mode determining unit The offset is such that the phase of the adjacent pixels is shifted; and the adjacent microphones can use the adjacent pixels and the adjacent pixels of the phase offset by the phase shifting mechanism to determine the phase of the adjacent pixels. Good offset and best prediction mode. The image processing apparatus according to the first aspect of the present invention may further include: a dynamic prediction compensation orphan # for inter-building dynamic prediction block inter-block dynamic prediction; and the phase shifting mechanism may be used by The dynamic predictive compensation mechanism uses a factorer for predicting the precision of the fractional pixels to shift the phase of the adjacent pixels. The image processing method of the first aspect of the present invention includes the following steps: • The image processing apparatus performs the fourth prediction of the (four) intra prediction block for the _ prediction processing. 
shifting, by candidate offsets in the shift direction corresponding to the prediction mode, the phase of adjacent pixels adjacent to the intra prediction block in a predetermined positional relationship; determining, using the adjacent pixels and the adjacent pixels whose phase has been shifted, an optimal phase offset for the adjacent pixels; and generating a predicted image of the intra prediction block using adjacent pixels whose phase has been shifted by the determined optimal offset.

The image processing apparatus according to the second aspect of the present invention includes: receiving means for receiving prediction mode information and offset information, the prediction mode information indicating the prediction mode of intra prediction for an intra prediction block that is the target of intra prediction processing, the offset information indicating an offset by which the phase of adjacent pixels adjacent to the intra prediction block in a predetermined positional relationship is shifted in accordance with the prediction mode indicated by the prediction mode information; phase shifting means for shifting the phase of the adjacent pixels in accordance with the shift direction and the offset corresponding to the prediction mode received by the receiving means; and predicted image generating means for generating a predicted image of the intra prediction block using the adjacent pixels whose phase has been shifted by the phase shifting means.

The receiving means may receive, as the offset information, difference information indicating the difference between the offset for the intra prediction block and the offset for the block to which the MostProbableMode is assigned.
The image processing apparatus according to the second aspect of the present invention may further include decoding means for decoding the intra prediction block using the predicted image generated by the predicted image generating means. The decoding means may decode the prediction mode information and the offset information received by the receiving means.

The phase shifting means may prohibit the phase shift of the adjacent pixels when the prediction mode decoded by the decoding means is the DC prediction mode.

When the prediction mode decoded by the decoding means is the Vertical prediction mode, the Diag_Down_Left prediction mode, or the Vertical_Left prediction mode, the phase shifting means may shift the phase of the upper adjacent pixels among the adjacent pixels in the horizontal direction in accordance with the offset decoded by the decoding means, and prohibit a phase shift in the vertical direction for the left adjacent pixels among the adjacent pixels.

When the prediction mode decoded by the decoding means is the Horizontal prediction mode or the Horizontal_Up prediction mode, the phase shifting means may shift the phase of the left adjacent pixels among the adjacent pixels in the vertical direction in accordance with the offset decoded by the decoding means, and prohibit a phase shift in the horizontal direction for the upper adjacent pixels among the adjacent pixels.

The image processing apparatus according to the second aspect of the present invention may further include motion prediction/compensation means for performing inter motion prediction on an inter motion prediction block using the motion vector decoded by the decoding means, and the phase shifting means may shift the phase of the adjacent pixels using the filter that the motion prediction/compensation means uses for fractional-precision prediction.
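As a minimal sketch of the per-mode rule described above (not the patent's implementation; the mode names follow the H.264/AVC 4x4 luma intra prediction convention, and the remaining diagonal modes are left unshifted here for simplicity), the decoder-side decision of which group of adjacent pixels may be phase-shifted can be expressed as a small lookup:

```python
# Hypothetical sketch of the per-mode shift rule described above.
VERTICAL_FAMILY = {"Vertical", "Diag_Down_Left", "Vertical_Left"}
HORIZONTAL_FAMILY = {"Horizontal", "Horizontal_Up"}

def shift_directions(mode):
    """Return (shift_top_horizontally, shift_left_vertically) for a mode.

    Vertical family: only the upper adjacent pixels are shifted
    (horizontally); a vertical shift of the left pixels is prohibited.
    Horizontal family: only the left adjacent pixels are shifted
    (vertically); a horizontal shift of the upper pixels is prohibited.
    DC prediction: no phase shift is performed at all.
    """
    if mode == "DC":
        return (False, False)
    if mode in VERTICAL_FAMILY:
        return (True, False)
    if mode in HORIZONTAL_FAMILY:
        return (False, True)
    return (False, False)  # other modes: no shift in this sketch

print(shift_directions("Vertical"))
print(shift_directions("Horizontal_Up"))
print(shift_directions("DC"))
```

In this reading, the offset decoded from the bit stream only ever applies along one axis per mode, which is why a single offset value per block suffices.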
An image processing method according to the second aspect of the present invention includes the steps of: receiving, by the image processing apparatus, prediction mode information and offset information, the prediction mode information indicating the prediction mode of intra prediction for an intra prediction block that is the target of intra prediction processing, the offset information indicating an offset by which the phase of adjacent pixels adjacent to the intra prediction block in a predetermined positional relationship is shifted in accordance with the prediction mode indicated by the prediction mode information; shifting the phase of the adjacent pixels in accordance with the shift direction and the offset corresponding to the received prediction mode; and generating a predicted image of the intra prediction block using the adjacent pixels whose phase has been shifted.

In the first aspect of the present invention, for an intra prediction block that is the target of intra prediction processing, a prediction mode of intra prediction is determined for the image data; the phase of adjacent pixels adjacent to the intra prediction block in a predetermined positional relationship is shifted by candidate offsets in the shift direction corresponding to the determined prediction mode; then, using the adjacent pixels and the adjacent pixels whose phase has been shifted, an optimal phase offset is determined for the adjacent pixels; and a predicted image of the intra prediction block is generated using adjacent pixels whose phase has been shifted by the determined optimal offset.
In the second aspect of the present invention, prediction mode information and offset information are received, the prediction mode information indicating the prediction mode of intra prediction for an intra prediction block that is the target of intra prediction processing, the offset information indicating an offset by which the phase of adjacent pixels adjacent to the intra prediction block in a predetermined positional relationship is shifted in accordance with the prediction mode indicated by the prediction mode information; and the phase of the adjacent pixels is shifted in accordance with the shift direction and the offset corresponding to the received prediction mode. Then, a predicted image of the intra prediction block is generated using the adjacent pixels whose phase has been shifted.

Each of the above image processing apparatuses may be an independent apparatus, or may be an internal block constituting a single image encoding apparatus or image decoding apparatus.

[Effects of the Invention]

According to the first aspect of the present invention, a predicted image can be generated by intra prediction. Also according to the first aspect of the present invention, coding efficiency can be improved without increasing the number of memory accesses or the amount of processing. According to the second aspect of the present invention, a predicted image can be generated by intra prediction. Also according to the second aspect of the present invention, coding efficiency can be improved without increasing the number of memory accesses or the amount of processing.

[Embodiment]

Embodiments of the present invention will be described below with reference to the drawings.

[Configuration example of the image encoding apparatus]

Fig. 2 shows the configuration of an embodiment of an image encoding apparatus as an image processing apparatus to which the present invention is applied.
The image encoding apparatus 51 compresses and encodes images in accordance with, for example, the H.264 and MPEG-4 Part 10 (Advanced Video Coding) (hereinafter H.264/AVC) scheme. In the example of Fig. 2, the image encoding apparatus 51 includes an A/D (analog/digital) conversion unit 61, a screen rearrangement buffer 62, an arithmetic unit 63, an orthogonal transform unit 64, a quantization unit 65, a lossless encoding unit 66, an accumulation buffer 67, an inverse quantization unit 68, an inverse orthogonal transform unit 69, an arithmetic unit 70, a deblocking filter 71, a frame memory 72, a switch 73, an intra prediction unit 74, an adjacent pixel interpolation unit 75, a motion prediction/compensation unit 76, a predicted image selection unit 77, and a rate control unit 78. The A/D conversion unit 61 A/D-converts an input image, and outputs it to the screen rearrangement buffer 62, where it is stored. The screen rearrangement buffer 62 rearranges the stored frame images from display order into the order of frames to be encoded, in accordance with the GOP (Group of Pictures). The arithmetic unit 63 subtracts, from the image read from the screen rearrangement buffer 62, the predicted image selected by the predicted image selection unit 77 from among those supplied by the intra prediction unit 74 and the motion prediction/compensation unit 76, and outputs the difference information to the orthogonal transform unit 64. The orthogonal transform unit 64 applies an orthogonal transform, such as the discrete cosine transform or the Karhunen-Loève transform, to the difference information from the arithmetic unit 63, and outputs the transform coefficients. The quantization unit 65 quantizes the transform coefficients output by the orthogonal transform unit 64. The quantized transform coefficients, which are the output of the quantization unit 65, are input to the lossless encoding unit 66, where variable-length coding, arithmetic coding, or the like is applied and the data are compressed.
The lossless encoding unit 66 obtains information indicating intra prediction from the intra prediction unit 74, and obtains information indicating the inter prediction mode from the motion prediction/compensation unit 76. In the following, the information indicating intra prediction is also referred to as intra prediction mode information, and the information indicating the inter prediction mode is also referred to as inter prediction mode information. The lossless encoding unit 66 encodes the quantized transform coefficients, and also encodes the information indicating intra prediction or the information indicating the inter prediction mode, making it part of the header information in the compressed image. The lossless encoding unit 66 supplies the encoded data to the accumulation buffer 67, where they are stored. For example, in the lossless encoding unit 66, lossless coding processing such as variable-length coding or arithmetic coding is performed. As the variable-length coding, the CAVLC (Context-Adaptive Variable Length Coding) scheme specified in the H.264/AVC standard can be cited.

Length Coding) scheme specified in the H.264/AVC standard. As the arithmetic coding, CABAC (Context-Adaptive Binary Arithmetic Coding) can be cited.

The accumulation buffer 67 outputs the data supplied from the lossless encoding unit 66, as a compressed image encoded by the H.264/AVC scheme, to, for example, a recording device or a transmission path (not shown) in a subsequent stage.

The quantized transform coefficients output from the quantization unit 65 are also input to the inverse quantization unit 68, inversely quantized, and then subjected to an inverse orthogonal transform in the inverse orthogonal transform unit 69. The inversely transformed output is added by the arithmetic unit 70 to the predicted image supplied from the predicted image selection unit 77, and becomes a locally decoded image. The deblocking filter 71 removes block distortion from the decoded image, and supplies the result to the frame memory 72, where it is stored. The image before the deblocking filtering processing by the deblocking filter 71 is also supplied to and stored in the frame memory 72.

The switch 73 outputs the reference image stored in the frame memory 72 to the motion prediction/compensation unit 76 or the intra prediction unit 74.

In the image encoding apparatus 51, for example, the I picture (Intra Picture), the B picture (Bidirectionally Predictive Picture), and the P picture (Predictive Picture) from the screen rearrangement buffer 62 are supplied to the intra prediction unit 74 as images to be subjected to intra prediction (also referred to as intra processing). Further, the B picture and the P picture read from the screen rearrangement buffer 62 are supplied to the motion prediction/compensation unit 76 as images to be subjected to inter prediction (also referred to as inter processing).

The intra prediction unit 74 performs intra prediction processing in all candidate intra prediction modes, on the basis of the image to be intra predicted read from the screen rearrangement buffer 62 and the reference image supplied from the frame memory 72, and generates predicted images.

The intra prediction unit 74 calculates a cost function value for each intra prediction mode in which a predicted image has been generated, and selects, as the optimal intra prediction mode, the intra prediction mode whose calculated cost function value is the minimum. The intra prediction unit 74 supplies the adjacent pixels of the target block to be intra predicted and the information on the optimal intra prediction mode to the adjacent pixel interpolation unit 75.

The adjacent pixel interpolation unit 75 shifts the phase of the adjacent pixels by candidate offsets along the shift direction corresponding to the optimal intra prediction mode from the intra prediction unit 74. In practice, the adjacent pixel interpolation unit 75 performs linear interpolation on the adjacent pixels using a 6-tap FIR filter in the shift direction corresponding to the optimal intra prediction mode, thereby shifting the phase of the adjacent pixels to fractional-pixel precision. In the following, for convenience of description, adjacent pixels whose phase has been shifted by the 6-tap FIR filter and linear interpolation are referred to, as appropriate, as interpolated adjacent pixels or as phase-shifted adjacent pixels; these have the same meaning.

The adjacent pixel interpolation unit 75 supplies the phase-shifted adjacent pixels to the intra prediction unit 74. The intra prediction unit 74 determines the optimal phase offset for the adjacent pixels, using the pixel values of the adjacent pixels from the adjacent pixel buffer 81 and the pixel values of the adjacent pixels whose phase has been shifted by the adjacent pixel interpolation unit 75. Further, the intra prediction unit 74 generates a predicted image of the target block using the pixel values of the adjacent pixels whose phase has been shifted by the determined optimal offset, and supplies the generated predicted image and the cost function value calculated for the corresponding optimal intra prediction mode to the predicted image selection unit 77.

When the predicted image generated in the optimal intra prediction mode has been selected by the predicted image selection unit 77, the intra prediction unit 74 supplies the information indicating the optimal intra prediction mode and the information on the optimal offset to the lossless encoding unit 66. When the lossless encoding unit 66 receives this information, it encodes it and makes it part of the header information in the compressed image.

The motion prediction/compensation unit 76 performs motion prediction/compensation processing in all candidate inter prediction modes. That is, the motion prediction/compensation unit 76 is supplied with the image to be inter processed read from the screen rearrangement buffer 62, and with the reference image from the frame memory 72 via the switch 73. The motion prediction/compensation unit 76 detects the motion vectors of all the candidate inter prediction modes on the basis of the image to be inter processed and the reference image, applies compensation processing to the reference image on the basis of the motion vectors, and generates predicted images.

The motion prediction/compensation unit 76 also calculates cost function values for all the candidate inter prediction modes, and determines, as the optimal inter prediction mode, the prediction mode whose calculated cost function value is the minimum.

The motion prediction/compensation unit 76 supplies the predicted image generated in the optimal inter prediction mode and its cost function value to the predicted image selection unit 77. When the predicted image generated in the optimal inter prediction mode has been selected by the predicted image selection unit 77, the motion prediction/compensation unit 76 outputs information indicating the optimal inter prediction mode (inter prediction mode information) to the lossless encoding unit 66.

In addition, motion vector information, flag information, reference frame information, and the like are output to the lossless encoding unit 66 as necessary. The lossless encoding unit 66 likewise applies lossless coding processing, such as variable-length coding or arithmetic coding, to the information from the motion prediction/compensation unit 76, and inserts the result into the header portion of the compressed image.

The predicted image selection unit 77 determines the optimal prediction mode from the optimal intra prediction mode and the optimal inter prediction mode, on the basis of the cost function values output from the intra prediction unit 74 or the motion prediction/compensation unit 76. The predicted image selection unit 77 then selects the predicted image of the determined optimal prediction mode, and supplies it to the arithmetic units 63 and 70. At this time, the predicted image selection unit 77 supplies the selection information of the predicted image to the intra prediction unit 74 or the motion prediction/compensation unit 76.

The rate control unit 78 controls the rate of the quantization operation of the quantization unit 65, on the basis of the compressed images stored in the accumulation buffer 67, so that overflow or underflow does not occur.

[Description of the H.264/AVC scheme]

Fig. 3 is a diagram showing examples of block sizes for motion prediction/compensation in the H.264/AVC scheme. In the H.264/AVC scheme, motion prediction/compensation can be performed with variable block sizes.

In the upper row of Fig. 3, macroblocks of 16×16 pixels divided into partitions of 16×16 pixels, 16×8 pixels, 8×16 pixels, and 8×8 pixels are shown in order from the left. In the lower row of Fig. 3, 8×8-pixel partitions divided into sub-partitions of 8×8 pixels, 8×4 pixels, 4×8 pixels, and 4×4 pixels are shown in order from the left.

That is, in the H.264/AVC scheme, one macroblock can be divided into any of 16×16-pixel, 16×8-pixel, 8×16-pixel, or 8×8-pixel partitions, each having independent motion vector information. Further, an 8×8-pixel partition can be divided into any of 8×8-pixel, 8×4-pixel, 4×8-pixel, or 4×4-pixel sub-partitions, each having independent motion vector information.

Fig. 4 is a diagram for explaining quarter-pixel-precision prediction/compensation processing in the H.264/AVC scheme. In the H.264/AVC scheme, quarter-pixel-precision prediction/compensation processing using a 6-tap FIR (Finite Impulse Response) filter is performed.

In the example of Fig. 4, position A represents the position of an integer-precision pixel, positions b, c, and d represent half-pixel-precision positions, and positions e1, e2, and e3 represent quarter-pixel-precision positions. First,
The switch 73 outputs the reference image stored in the frame memory 72 to the motion prediction/compensation unit 76 or the intra prediction unit 74. In the image encoding apparatus 51, for example, the I picture (Intra Picture), the B picture (Bidirectionally Predictive Picture), and the P picture (Predictive Picture) from the screen rearrangement buffer 62 are supplied to the intra prediction unit 74 as images to be subjected to intra prediction (also referred to as intra processing). Further, the B picture and the P picture read from the screen rearrangement buffer 62 are supplied to the motion prediction/compensation unit 76 as images to be subjected to inter prediction (also referred to as inter processing). The intra prediction unit 74 performs intra prediction processing in all candidate intra prediction modes, on the basis of the image to be intra predicted read from the screen rearrangement buffer 62 and the reference image supplied from the frame memory 72, and generates predicted images. The intra prediction unit 74 calculates a cost function value for each intra prediction mode in which a predicted image has been generated, and selects, as the optimal intra prediction mode, the intra prediction mode whose calculated cost function value is the minimum. The intra prediction unit 74 supplies the adjacent pixels of the target block to be intra predicted and the information on the optimal intra prediction mode to the adjacent pixel interpolation unit 75. The adjacent pixel interpolation unit 75 shifts the phase of the adjacent pixels by candidate offsets along the shift direction corresponding to the optimal intra prediction mode supplied from the intra prediction unit 74.
In practice, the adjacent pixel interpolation unit 75 performs linear interpolation on the adjacent pixels using a 6-tap FIR filter in the shift direction corresponding to the optimal intra prediction mode, thereby shifting the phase of the adjacent pixels to fractional-pixel precision. In the following, for convenience of description, adjacent pixels whose phase has been shifted by the 6-tap FIR filter and linear interpolation are referred to, as appropriate, as interpolated adjacent pixels or as phase-shifted adjacent pixels; these have the same meaning. The adjacent pixel interpolation unit 75 supplies the phase-shifted adjacent pixels to the intra prediction unit 74. The intra prediction unit 74 determines the optimal phase offset for the adjacent pixels, using the pixel values of the adjacent pixels from the adjacent pixel buffer 81 and the pixel values of the adjacent pixels whose phase has been shifted by the adjacent pixel interpolation unit 75. Further, the intra prediction unit 74 generates a predicted image of the target block using the pixel values of the adjacent pixels whose phase has been shifted by the determined optimal offset, and supplies the generated predicted image and the cost function value calculated for the corresponding optimal intra prediction mode to the predicted image selection unit 77. When the predicted image generated in the optimal intra prediction mode has been selected by the predicted image selection unit 77, the intra prediction unit 74 supplies the information indicating the optimal intra prediction mode and the information on the optimal offset to the lossless encoding unit 66. When the lossless encoding unit 66 receives this information, it encodes it and makes it part of the header information in the compressed image. The motion prediction/compensation unit 76 performs motion prediction/compensation processing in all candidate inter prediction modes.
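The phase shift performed by the adjacent pixel interpolation unit can be pictured on a one-dimensional row of upper adjacent pixels. The following is a sketch under the assumption that, as in H.264/AVC motion compensation, half-pel values come from the 6-tap FIR filter {1, -5, 20, 20, -5, 1} and quarter-pel values from linear interpolation; the function names and the edge padding at the row borders are illustrative, not taken from the patent:

```python
def clip1(a, max_pix=255):
    # Clip to [0, max_pix]; max_pix is 255 for 8-bit input.
    return min(max(a, 0), max_pix)

def shift_row_quarter_pel(row, quarter_shift):
    """Shift a row of adjacent pixels by quarter_shift/4 pel.

    quarter_shift is in {0, 1, 2, 3}. The row is edge-padded so the
    6-tap filter has support at the borders (an assumption of this sketch).
    """
    if quarter_shift == 0:
        return list(row)
    pad = [row[0]] * 2 + list(row) + [row[-1]] * 3
    out = []
    for i in range(len(row)):
        a0 = pad[i + 2]  # integer sample at position i
        a1 = pad[i + 3]  # next integer sample
        w = pad[i:i + 6]
        f = w[0] - 5 * w[1] + 20 * w[2] + 20 * w[3] - 5 * w[4] + w[5]
        half = clip1((f + 16) >> 5)  # half-pel value between a0 and a1
        if quarter_shift == 2:
            out.append(half)
        elif quarter_shift == 1:
            out.append((a0 + half + 1) >> 1)  # quarter pel nearer a0
        else:
            out.append((half + a1 + 1) >> 1)  # quarter pel nearer a1
    return out

print(shift_row_quarter_pel([10, 20, 30, 40], 2))  # [14, 25, 36, 41]
```

The point of the design is that the encoder already contains this interpolation machinery for motion compensation, so reusing it for intra neighbors adds no new memory-access pattern.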
The motion prediction/compensation unit 76 is supplied with the image to be inter processed read from the screen rearrangement buffer 62, and with the reference image from the frame memory 72 via the switch 73. The motion prediction/compensation unit 76 detects the motion vectors of all the candidate inter prediction modes on the basis of the image to be inter processed and the reference image, applies compensation processing to the reference image on the basis of the motion vectors, and generates predicted images. The motion prediction/compensation unit 76 also calculates cost function values for all the candidate inter prediction modes, and determines, as the optimal inter prediction mode, the prediction mode whose calculated cost function value is the minimum. The motion prediction/compensation unit 76 supplies the predicted image generated in the optimal inter prediction mode and its cost function value to the predicted image selection unit 77. When the predicted image generated in the optimal inter prediction mode has been selected by the predicted image selection unit 77, the motion prediction/compensation unit 76 outputs information indicating the optimal inter prediction mode (inter prediction mode information) to the lossless encoding unit 66. In addition, motion vector information, flag information, reference frame information, and the like are output to the lossless encoding unit 66 as necessary. The lossless encoding unit 66 likewise applies lossless coding processing, such as variable-length coding or arithmetic coding, to the information from the motion prediction/compensation unit 76, and inserts the result into the header portion of the compressed image. The predicted image selection unit 77 determines the optimal prediction mode from the optimal intra prediction mode and the optimal inter prediction mode, on the basis of the cost function values output from the intra prediction unit 74 or the motion prediction/compensation unit 76.
The predicted image selection unit 77 then selects the predicted image of the determined optimal prediction mode, and supplies it to the arithmetic units 63 and 70. At this time, the predicted image selection unit 77 supplies the selection information of the predicted image to the intra prediction unit 74 or the motion prediction/compensation unit 76. The rate control unit 78 controls the rate of the quantization operation of the quantization unit 65, on the basis of the compressed images stored in the accumulation buffer 67, so that overflow or underflow does not occur.

[Description of the H.264/AVC scheme]

Fig. 3 is a diagram showing examples of block sizes for motion prediction/compensation in the H.264/AVC scheme, in which motion prediction/compensation can be performed with variable block sizes. In the upper row of Fig. 3, macroblocks of 16×16 pixels divided into partitions of 16×16, 16×8, 8×16, and 8×8 pixels are shown in order from the left. In the lower row of Fig. 3, 8×8-pixel partitions divided into sub-partitions of 8×8, 8×4, 4×8, and 4×4 pixels are shown in order from the left. That is, in the H.264/AVC scheme, one macroblock can be divided into any of 16×16-, 16×8-, 8×16-, or 8×8-pixel partitions, each having independent motion vector information; an 8×8-pixel partition can further be divided into any of 8×8-, 8×4-, 4×8-, or 4×4-pixel sub-partitions, each likewise having independent motion vector information. Fig. 4 is a diagram for explaining quarter-pixel-precision prediction/compensation processing in the H.264/AVC scheme, which is performed using a 6-tap FIR (Finite Impulse Response) filter. In the example of Fig.
4, position A represents the position of an integer-precision pixel, positions b, c, and d represent half-pixel-precision positions, and positions e1, e2, and e3 represent quarter-pixel-precision positions. First of all,

Clip() is defined as in the following equation (1):

[Equation 1]

Clip1(a) = 0 (if a < 0); a (if 0 <= a <= max_pix); max_pix (otherwise)

Note that when the input image has 8-bit precision, the value of max_pix is 255.

The pixel values at positions b and d are generated using the 6-tap FIR filter, as in the following equation (2):

[Equation 2]

F = A-2 - 5·A-1 + 20·A0 + 20·A1 - 5·A2 + A3
b, d = Clip1((F + 16) >> 5)

The pixel value at position c is generated by applying the 6-tap FIR filter in the horizontal direction and in the vertical direction, as in the following equation (3):

[Equation 3]

F = b-2 - 5·b-1 + 20·b0 + 20·b1 - 5·b2 + b3
or
F = d-2 - 5·d-1 + 20·d0 + 20·d1 - 5·d2 + d3
c = Clip1((F + 512) >> 10)

Note that the Clip processing is executed only once at the end, after both the horizontal and the vertical sum-of-products processing have been performed.

Positions e1 to e3 are generated by linear interpolation, as in the following equation (4):

[Equation 4]

e1 = (A + b + 1) >> 1
e2 = (b + d + 1) >> 1
e3 = (b + c + 1) >> 1

In the H.264/AVC scheme, the motion prediction/compensation processing described above with reference to Figs. 3 and 4 generates an enormous amount of motion vector information, and encoding it as it is leads to a decrease in coding efficiency. To address this, in the H.264/AVC scheme, the amount of coded motion vector information is reduced by the method shown in Fig. 5.

Fig. 5 is a diagram for explaining the method of generating motion vector information in the H.264/AVC scheme.

In the example of Fig. 5, a target block E to be encoded (for example, 16×16 pixels) and already encoded blocks A to D adjacent to the target block E are shown.

That is, block D is adjacent to the upper left of the target block E, block B is adjacent to the top of the target block E, block C is adjacent to the upper right of the target block E, and block A is adjacent to the left of the target block E. Note that blocks A to D are each shown undivided, indicating that each is a block of one of the configurations from 16×16 pixels to 4×4 pixels described with reference to Fig. 3.

For example, the motion vector information for X (= A, B, C, D, E) is denoted mvX. First, the predicted motion vector information pmvE for the target block E is generated by median prediction, using the motion vector information on blocks A, B, and C, as in the following equation (5):

pmvE = med(mvA, mvB, mvC) ...(5)

The motion vector information on block C may be unavailable, for example because the block is at the edge of the picture frame or has not yet been encoded. In that case, the motion vector information on block C is replaced by the motion vector information on block D.

Using pmvE, the data mvdE to be added to the header portion of the compressed image as the motion vector information for the target block E is generated as in the following equation (6):

mvdE = mvE - pmvE ...(6)

In practice, the horizontal and vertical components of the motion vector information are processed independently.

In this way, predicted motion vector information is generated, and the data mvdE, the difference between the predicted motion vector information generated from correlation with the adjacent blocks and the motion vector information, is added to the header portion of the compressed image as the motion vector information, whereby the motion vector information can be reduced.

Here, the quarter-pixel-precision prediction/compensation processing in the H.264/AVC scheme described with reference to Fig. 4 is executed in the motion prediction/compensation unit; in the image encoding apparatus 51, however, quarter-pixel-precision prediction is also performed in intra prediction. This fractional-pixel-precision intra prediction is executed by the intra prediction unit 74 and the adjacent pixel interpolation unit 75 described below.

[Configuration example of the intra prediction unit and the adjacent pixel interpolation unit]
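Equations (1) to (4) can be exercised with a small numeric sketch. The geometry is simplified to one dimension and the six sample values are illustrative; the variable names mirror the equations:

```python
def clip1(a, max_pix=255):
    # Equation (1): clip to [0, max_pix]; max_pix = 255 for 8-bit input.
    return min(max(a, 0), max_pix)

def six_tap(v):
    # F = v[-2] - 5*v[-1] + 20*v[0] + 20*v[1] - 5*v[2] + v[3]
    return v[0] - 5 * v[1] + 20 * v[2] + 20 * v[3] - 5 * v[4] + v[5]

# Six integer samples A_-2 .. A_3 around a half-pel position b (equation (2)).
A = [1, 7, 12, 18, 23, 27]  # illustrative values
b = clip1((six_tap(A) + 16) >> 5)

# For position c, the 6-tap filter is applied again in the second
# direction on the intermediate (unclipped, unshifted) values, and the
# clip is executed only once at the end (equation (3)). Using a constant
# column of intermediate values keeps the example checkable: c equals b.
b_interm = [six_tap(A)] * 6
c = clip1((six_tap(b_interm) + 512) >> 10)

# A quarter-pel position by linear interpolation (equation (4)).
A0 = A[2]
e1 = (A0 + b + 1) >> 1
print(b, c, e1)  # 15 15 14
```

The single final clip with the larger rounding offset (+512, >>10) is what makes the two-pass filtering for position c equivalent to one filtering pass at higher intermediate precision.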
Fig. 6 is a block diagram showing a detailed configuration example of the intra prediction unit and the adjacent pixel interpolation unit.

In the example of Fig. 6, the intra prediction unit 74 includes an adjacent pixel buffer 81, an optimal mode determination unit 82, an optimal offset determination unit 83, and a predicted image generation unit 84. The adjacent pixel interpolation unit 75 includes a mode discrimination unit 91, a horizontal interpolation unit 92, and a vertical interpolation unit 93.

The adjacent pixel buffer 81 stores the adjacent pixels of the intra prediction target block from the frame memory 72. In Fig. 6 the switch 73 is omitted from the illustration, but the adjacent pixels are supplied from the frame memory 72 to the adjacent pixel buffer 81 via the switch 73.

The image to be intra predicted read from the screen rearrangement buffer 62 is input to the optimal mode determination unit 82. The optimal mode determination unit 82 reads, from the adjacent pixel buffer 81, the adjacent pixels corresponding to the target block to be intra predicted.

The optimal mode determination unit 82 performs intra prediction processing in all candidate intra prediction modes, using the image of the target block to be intra predicted and the corresponding adjacent pixels, and generates predicted images. The optimal mode determination unit 82 calculates a cost function value for each intra prediction mode in which a predicted image has been generated, and determines, as the optimal intra prediction mode, the intra prediction mode whose calculated cost function value is the minimum. The information on the determined prediction mode is supplied to the mode discrimination unit 91, the optimal offset determination unit 83, and the predicted image generation unit 84. The cost function value corresponding to the supplied prediction mode is also supplied to the predicted image generation unit 84.

The image to be intra predicted read from the screen rearrangement buffer 62 and the information on the prediction mode determined as optimal by the optimal mode determination unit 82 are input to the optimal offset determination unit 83. In addition, the adjacent pixels whose phase has been shifted by the linear interpolation of the horizontal interpolation unit 92 and the vertical interpolation unit 93 in accordance with the optimal intra prediction mode are input to the optimal offset determination unit 83. The optimal offset determination unit 83 reads, from the adjacent pixel buffer 81, the adjacent pixels corresponding to the target block to be intra predicted.

For the prediction mode determined by the optimal mode determination unit 82, the optimal offset determination unit 83 determines the optimal offset, using the image of the target block to be intra predicted, the corresponding adjacent pixels, and the corresponding interpolated adjacent pixels. For example, the optimal offset determination unit 83 calculates prediction errors (residuals) and determines the offset with the smaller calculated prediction error as the optimal offset. The information on the optimal offset determined by the optimal offset determination unit 83 is supplied to the predicted image generation unit 84.

The information on the prediction mode determined by the optimal mode determination unit 82 and the corresponding cost function value, as well as the information on the optimal offset determined by the optimal offset determination unit 83, are input to the predicted image generation unit 84. The predicted image generation unit 84 reads, from the adjacent pixel buffer 81, the adjacent pixels corresponding to the target block to be intra predicted, and shifts the phase of the read adjacent pixels by the optimal offset in the phase direction corresponding to the prediction mode.

The predicted image generation unit 84 performs intra prediction in the optimal intra prediction mode determined by the optimal mode determination unit 82, using the phase-shifted adjacent pixels, and generates the predicted image of the target block. The predicted image generation unit 84 outputs the generated predicted image and the corresponding cost function value to the predicted image selection unit 77.

When the predicted image generated in the optimal intra prediction mode has been selected by the predicted image selection unit 77, the predicted image generation unit 84 supplies the information indicating the optimal intra prediction mode and the information on the offset to the lossless encoding unit 66.

The mode discrimination unit 91 outputs control signals corresponding to the prediction mode determined by the optimal mode determination unit 82 to the horizontal interpolation unit 92 and the vertical interpolation unit 93. For example, a control signal indicating that the interpolation processing is ON is output depending on the prediction mode.

The horizontal interpolation unit 92 and the vertical interpolation unit 93 each read adjacent pixels from the adjacent pixel buffer 81 in accordance with the control signal from the mode discrimination unit 91. The horizontal interpolation unit 92 and the vertical interpolation unit 93 shift the phase of the read adjacent pixels in the horizontal direction and in the vertical direction, respectively, by the 6-tap FIR filter and linear interpolation. The information on the adjacent pixels interpolated by the horizontal interpolation unit 92 and the vertical interpolation unit 93 is supplied to the optimal offset determination unit 83.

[Description of the encoding processing of the image encoding apparatus]

Next, the encoding processing of the image encoding apparatus 51 of Fig. 2 will be described with reference to the flowchart of Fig. 7.

In step S11, the A/D conversion unit 61 A/D-converts the input image. In step S12, the screen rearrangement buffer 62 stores the image supplied from the A/D conversion unit 61, and rearranges the pictures from display order into the order in which they are to be encoded.

In step S13, the arithmetic unit 63 computes the difference between the image rearranged in step S12 and the predicted image. The predicted image is supplied to the arithmetic unit 63 via the predicted image selection unit 77, from the motion prediction/compensation unit 76 when inter prediction is to be performed, or from the intra prediction unit 74 when intra prediction is to be performed.

The difference data have a smaller data amount than the original image data. The data amount can therefore be compressed as compared with the case where the image is encoded as it is.

In step S14, the orthogonal transform unit 64 applies an orthogonal transform to the difference information supplied from the arithmetic unit 63.
Specifically, an orthogonal transform such as the discrete cosine transform or the Karhunen-Loève transform is performed, and the transform coefficients are output. In step S15, the quantization unit 65 quantizes the transform coefficients. In this quantization, the rate is controlled, as will be described for the processing of step S25 below.

The difference information quantized in this way is locally decoded as follows. That is, in step S16, the inverse quantization unit 68 inversely quantizes the transform coefficients quantized by the quantization unit 65, with characteristics corresponding to those of the quantization unit 65. In step S17, the inverse orthogonal transform unit 69 applies an inverse orthogonal transform, with characteristics corresponding to those of the orthogonal transform unit 64, to the transform coefficients inversely quantized by the inverse quantization unit 68.

In step S18, the arithmetic unit 70 adds the predicted image input via the predicted image selection unit 77 to the locally decoded difference information, and generates a locally decoded image (the image corresponding to the input to the arithmetic unit 63). In step S19, the deblocking filter 71 filters the image output from the arithmetic unit 70, thereby removing block distortion. In step S20, the frame memory 72 stores the filtered image. Note that the image not subjected to the deblocking filtering processing is also supplied from the arithmetic unit 70 to the frame memory 72 and stored there.

In step S21, the intra prediction unit 74 and the motion prediction/compensation unit 76 each perform image prediction processing. That is, in step S21, the intra prediction unit 74 performs intra prediction processing in the intra prediction modes, and the motion prediction/compensation unit 76 performs motion prediction/compensation processing in the inter prediction modes.

The details of the prediction processing in step S21 will be described below with reference to Fig. 8. By this processing, prediction processing is performed in all the candidate prediction modes, and cost function values are calculated for all the candidate prediction modes. Then, on the basis of the calculated cost function values, the optimal intra prediction mode is selected, and the predicted image generated by intra prediction in the optimal intra prediction mode and its cost function value are supplied to the predicted image selection unit 77.

Specifically, at this time the intra prediction unit 74 supplies to the predicted image selection unit 77 a predicted image generated using adjacent pixels whose phase has been shifted by the optimal offset, by the 6-tap FIR filter and linear interpolation, in the shift direction corresponding to the optimal intra prediction mode. Together with the predicted image, the cost function value of the optimal intra prediction mode is also supplied to the predicted image selection unit 77.

On the other hand, on the basis of the calculated cost function values, the optimal inter prediction mode is determined from among the inter prediction modes, and the predicted image generated in the optimal inter prediction mode and its cost function value are supplied to the predicted image selection unit 77.

In step S22, the predicted image selection unit 77 determines one of the optimal intra prediction mode and the optimal inter prediction mode as the optimal prediction mode, on the basis of the respective cost function values output from the intra prediction unit 74 and the motion prediction/compensation unit 76. The predicted image selection unit 77 then selects the predicted image of the determined optimal prediction mode, and supplies it to the arithmetic units 63 and 70. As described above, this predicted image is used for the computations of steps S13 and S18.

The selection information of the predicted image is supplied to the intra prediction unit 74 or the motion prediction/compensation unit 76. When the predicted image of the optimal intra prediction mode has been selected, the intra prediction unit 74 supplies the information indicating the optimal intra prediction mode (that is, the intra prediction mode information) and the information on the offset determined as optimal to the lossless encoding unit 66.

When the predicted image of the optimal inter prediction mode has been selected, the motion prediction/compensation unit 76 outputs the information indicating the optimal inter prediction mode and, as necessary, information corresponding to the optimal inter prediction mode to the lossless encoding unit 66. Examples of the information corresponding to the optimal inter prediction mode include motion vector information, flag information, and reference frame information. That is, when the predicted image of an inter prediction mode has been selected as the optimal inter prediction mode, the motion prediction/compensation unit 76 outputs the inter prediction mode information, the motion vector information, and the reference frame information to the lossless encoding unit 66.

In step S23, the lossless encoding unit 66 encodes the quantized transform coefficients output from the quantization unit 65. That is, the difference image is subjected to lossless coding such as variable-length coding or arithmetic coding, and is compressed.
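The mode decision running through steps S21 and S22, computing a cost per candidate mode and keeping the minimum, can be sketched as follows. This is a hedged simplification: the cost functions actually used in H.264/AVC encoders also involve rate terms, and here a plain sum of absolute differences stands in for the cost:

```python
def sad(block, prediction):
    # Sum of absolute differences between the block and its prediction.
    return sum(abs(o - p) for o, p in zip(block, prediction))

def select_best_mode(block, predictors):
    """predictors: dict mapping mode name -> function(block) -> prediction.

    Returns (best_mode, best_cost), the mode with the minimum cost.
    """
    costs = {mode: sad(block, pred(block)) for mode, pred in predictors.items()}
    best_mode = min(costs, key=costs.get)
    return best_mode, costs[best_mode]

# Illustrative predictors (not the patent's): a flat DC guess vs. a copy.
block = [10, 12, 14, 16]
predictors = {
    "dc":   lambda b: [sum(b) // len(b)] * len(b),
    "copy": lambda b: list(b),
}
print(select_best_mode(block, predictors))  # ('copy', 0)
```

The same argmin structure applies twice in the flow above: once inside each of units 74 and 76 to pick the best intra and inter mode, and once in unit 77 to pick between the two winners.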
At this time, the information input to the lossless encoding unit 66 in step S22 described above (the intra prediction mode information from the intra prediction unit 74, or the information corresponding to the optimal inter prediction mode from the motion prediction/compensation unit 76) is also encoded and appended to the header information.

In step S24, the accumulation buffer 67 stores the difference image as a compressed image. The compressed images stored in the accumulation buffer 67 are read out as appropriate and transmitted to the decoding side via the transmission path.

In step S25, the rate control unit 78 controls the rate of the quantization operation of the quantization unit 65, on the basis of the compressed images stored in the accumulation buffer 67, so that overflow or underflow does not occur.

[Description of the prediction processing]

Next, the prediction processing of step S21 in Fig. 7 will be described with reference to the flowchart of Fig. 8.

When the image to be processed supplied from the screen rearrangement buffer 62 is an image of a block to be intra processed, the already decoded image to be referred to is read from the frame memory 72 and supplied to the intra prediction unit 74 via the switch 73.

In step S31, the intra prediction unit 74 intra predicts the pixels of the block to be processed in all the candidate intra prediction modes, using the supplied image. Note that pixels not subjected to the deblocking filtering by the deblocking filter 71 are used as the decoded pixels to be referred to.

The details of the intra prediction processing in step S31 will be described below. By this processing, intra prediction is performed in all the candidate intra prediction modes, cost function values are calculated for all the candidate intra prediction modes, and the optimal intra prediction mode is then determined on the basis of the calculated cost function values.

Then, by the 6-tap FIR filter and linear interpolation, the phase of the adjacent pixels is shifted by the optimal offset in the shift direction corresponding to the determined optimal intra prediction mode. A predicted image is generated by intra prediction in the optimal intra prediction mode, using the phase-shifted adjacent pixels. The generated predicted image and the cost function value of the optimal intra prediction mode are supplied to the predicted image selection unit 77.

When the image to be processed supplied from the screen rearrangement buffer 62 is an image to be inter processed, the image to be referred to is read from the frame memory 72 and supplied to the motion prediction/compensation unit 76 via the switch 73. On the basis of these images, in step S32, the motion prediction/compensation unit 76 performs inter motion prediction processing. That is, the motion prediction/compensation unit 76 refers to the images supplied from the frame memory 72 and performs motion prediction processing in all the candidate inter prediction modes.

The details of the inter motion prediction processing in step S32 will be described below with reference to Fig. 22. By this processing, motion prediction processing is performed in all the candidate inter prediction modes, and cost function values are calculated for all the candidate inter prediction modes.

In step S33, the motion prediction/compensation unit 76 compares the cost function values for the inter prediction modes calculated in step S32, and determines the prediction mode giving the minimum value as the optimal inter prediction mode. The motion prediction/compensation unit 76 then supplies the predicted image generated in the optimal inter prediction mode and its cost function value to the predicted image selection unit 77.

[Description of intra prediction processing in the H.264/AVC scheme]

Next, the intra prediction modes defined in the H.264/AVC scheme will be described.

First, the intra prediction modes for the luminance signal will be described. For the luminance signal, three schemes of intra prediction mode are defined: the intra 4×4 prediction mode, the intra 8×8 prediction mode, and the intra 16×16 prediction mode. These are modes that define block units, and are set for each macroblock. For the color difference signal, intra prediction modes can be set independently of the luminance signal for each macroblock.

Furthermore, in the case of the intra 4×4 prediction mode, one prediction mode can be set out of nine prediction modes for each 4×4-pixel target block. In the case of the intra 8×8 prediction mode, one prediction mode can be set out of nine prediction modes for each 8×8-pixel target block. In the case of the intra 16×16 prediction mode, one prediction mode can be set out of four prediction modes for the 16×16-pixel target macroblock.

In the following, the intra 4×4 prediction mode, the intra 8×8 prediction mode, and the intra 16×16 prediction mode are also referred to, as appropriate, as the 4×4-pixel intra prediction mode, the 8×8-pixel intra prediction mode, and the 16×16-pixel intra prediction mode, respectively.

In the example of Fig. 9, the numbers assigned to the blocks (-1 to 25) represent the bit-stream order of the blocks (the processing order on the decoding side). For the luminance signal, the macroblock is divided into 4×4 pixels, and a DCT of 4×4 pixels is performed. Then, only in the case of the intra 16×16 prediction mode, as shown in the block labeled -1, the DC components of the blocks are gathered to generate a 4×4 matrix, which is further subjected to an orthogonal transform.

For the color difference signal, on the other hand, the macroblock is divided into 4×4 pixels, and a DCT of 4×4 pixels is performed; then, as shown in blocks 16 and 17, the DC components of the blocks are gathered to generate a 2×2 matrix, which is further subjected to an orthogonal transform.

Note that, with respect to the intra 8×8 prediction mode, this applies only to the case where an 8×8 orthogonal transform is applied to the target macroblock under the High profile or a higher profile.

Figs. 10 and 11 are diagrams showing the nine 4×4-pixel intra prediction modes (Intra_4x4_pred_mode) for the luminance signal. The eight modes other than mode 2, which represents average-value (DC) prediction, correspond respectively to the directions indicated by the numbers 0, 1, and 3 to 8 in Fig. 1 described above.

The nine Intra_4x4_pred_mode types will be described with reference to Fig. 12. In the example of Fig. 12, pixels a to p represent the pixels of the target block to be intra processed, and pixel values A to M represent the pixel values of the pixels belonging to the adjacent blocks. That is, pixels a to p are the image to be processed read from the screen rearrangement buffer 62, and pixel values A to M are the pixel values of the already decoded image to be referred to, read from the frame memory 72.

In the case of each of the intra prediction modes shown in Figs. 10 and 11, the predicted pixel values of pixels a to p are generated as follows, using the pixel values A to M of the pixels belonging to the adjacent blocks. A pixel value being "available" means that it can be used, there being no such reason as its being at the edge of the picture frame or not yet having been encoded. A pixel value being "unavailable" means that it cannot be used, for such reasons as its being at the edge of the picture frame or not yet having been encoded.

Mode 0 is the Vertical prediction mode, which applies only
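Two of the nine 4×4 modes can be sketched under the availability rules above. Mode 0 (Vertical) repeats the upper pixel values A to D down each column; mode 2 (DC) averages the available upper and left neighbors. This is a simplified reading of the H.264/AVC rules, not a full implementation:

```python
def predict_vertical_4x4(top):
    """Mode 0 (Vertical): each column repeats the pixel above it.

    top = [A, B, C, D], the four upper adjacent pixel values.
    Returns the 4x4 predicted block as a list of rows.
    """
    return [list(top) for _ in range(4)]

def predict_dc_4x4(top=None, left=None):
    """Mode 2 (DC): rounded average of the available neighbors.

    top/left are lists of four pixel values, or None when unavailable.
    With neither available, 128 is used (the mid level for 8-bit video).
    """
    samples = (top or []) + (left or [])
    if not samples:
        return [[128] * 4 for _ in range(4)]
    dc = (sum(samples) + len(samples) // 2) // len(samples)
    return [[dc] * 4 for _ in range(4)]

top = [100, 110, 120, 130]
left = [90, 90, 90, 90]
print(predict_vertical_4x4(top)[0])    # [100, 110, 120, 130]
print(predict_dc_4x4(top, left)[0][0]) # 103
```

The directional modes other than DC are exactly the ones for which the phase shift of the neighbors described earlier is meaningful, since the prediction is formed by extrapolating the neighbor samples along a direction.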
5, a target block E to be encoded (for example, 16×16 pixels) and already encoded blocks A to D adjacent to the target block E are shown.

That is, block D is adjacent to the upper left of the target block E, block B is adjacent to the top of the target block E, block C is adjacent to the upper right of the target block E, and block A is adjacent to the left of the target block E. Note that blocks A to D are each shown undivided, indicating that each is a block of one of the configurations from 16×16 pixels to 4×4 pixels described with reference to Fig. 3.

For example, the motion vector information for X (= A, B, C, D, E) is denoted mvX. First, the predicted motion vector information pmvE for the target block E is generated by median prediction, using the motion vector information on blocks A, B, and C, as in the following equation (5):

pmvE = med(mvA, mvB, mvC) ...(5)

The motion vector information on block C may be unavailable, for example because the block is at the edge of the picture frame or has not yet been encoded. In that case, the motion vector information on block C is replaced by the motion vector information on block D.

Using pmvE, the data mvdE to be added to the header portion of the compressed image as the motion vector information for the target block E is generated as in the following equation (6):

mvdE = mvE - pmvE ...(6)

In practice, the horizontal and vertical components of the motion vector information are processed independently. In this way, predicted motion vector information is generated, and the data mvdE, the difference between the predicted motion vector information generated from correlation with the adjacent blocks and the motion vector information, is added to the header portion of the compressed image as the motion vector information, whereby the motion vector information can be reduced.
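Equations (5) and (6) can be sketched directly. Each motion vector is an (x, y) pair, the median is taken per component, and block C falls back to block D when unavailable, as described above; the helper names are illustrative:

```python
def median3(a, b, c):
    # Median of three scalars.
    return sorted((a, b, c))[1]

def predict_mv(mv_a, mv_b, mv_c, mv_d=None):
    """Equation (5): pmvE = med(mvA, mvB, mvC), taken per component.

    If mvC is unavailable (None), it is replaced by mvD.
    """
    if mv_c is None:
        mv_c = mv_d
    return tuple(median3(a, b, c) for a, b, c in zip(mv_a, mv_b, mv_c))

def mv_difference(mv_e, pmv_e):
    # Equation (6): mvdE = mvE - pmvE, per component.
    return tuple(m - p for m, p in zip(mv_e, pmv_e))

pmv = predict_mv((4, -2), (6, 0), (5, 3))
print(pmv)                        # (5, 0)
print(mv_difference((7, 1), pmv)) # (2, 1)
```

Because neighboring blocks usually move together, mvdE tends to be small, which is exactly why coding the difference rather than the vector itself reduces the bit cost.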
Here, the motion prediction and compensation processing with 1/4-pixel accuracy in the H.264/AVC method described above with reference to Fig. 4 is performed in the motion prediction/compensation unit 76. In the image encoding device 51, however, prediction with 1/4-pixel accuracy is also performed in intra prediction, by the intra prediction unit 74 and the adjacent pixel interpolation unit 75 described next.

[Configuration example of the intra prediction unit and the adjacent pixel interpolation unit]
Fig. 6 is a block diagram showing a detailed configuration example of the intra prediction unit and the adjacent pixel interpolation unit.

In the example of Fig. 6, the intra prediction unit 74 includes an adjacent pixel buffer unit 81, an optimal mode determination unit 82, an optimal offset amount determination unit 83, and a predicted image generation unit 84. The adjacent pixel interpolation unit 75 includes a mode determination unit 91, a horizontal direction interpolation unit 92, and a vertical direction interpolation unit 93.

The adjacent pixel buffer unit 81 stores the adjacent pixels of the intra prediction target block supplied from the frame memory 72. In Fig. 6 the switch 73 is omitted from the illustration, but in practice the adjacent pixels are supplied from the frame memory 72 to the adjacent pixel buffer unit 81 via the switch 73.

The optimal mode determination unit 82 receives the image of the target block to be intra predicted, read out from the screen rearrangement buffer 62, and reads out the adjacent pixels corresponding to that target block from the adjacent pixel buffer unit 81. The optimal mode determination unit 82 then performs intra prediction processing in all the candidate intra prediction modes, using the image of the target block to be intra predicted and the corresponding adjacent pixels, and generates predicted images.
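The filtering that the interpolation units apply to the adjacent pixels uses the same 6-tap FIR kernel (1, -5, 20, 20, -5, 1) as equation (2) above. A minimal sketch follows; the function names and the clamping of out-of-range taps to the edge samples are assumptions made here for simplicity, not border handling specified by the patent.

```python
def clip1(value, max_pix=255):
    """Clip to the range [0, max_pix], assuming 8-bit precision (equation (1))."""
    return max(0, min(value, max_pix))

def half_pel(samples, i):
    """Half-pel value between samples[i] and samples[i + 1] using the
    6-tap FIR kernel (1, -5, 20, 20, -5, 1) of equation (2).
    Out-of-range taps are clamped to the edge samples (an assumption)."""
    tap = lambda k: samples[max(0, min(len(samples) - 1, k))]
    f = (tap(i - 2) - 5 * tap(i - 1) + 20 * tap(i)
         + 20 * tap(i + 1) - 5 * tap(i + 2) + tap(i + 3))
    return clip1((f + 16) >> 5)

# On a flat row of adjacent pixels, interpolation reproduces the constant level.
assert half_pel([100] * 8, 3) == 100
```

Note the rounding offset 16 and the right shift by 5 divide by the kernel gain of 32, matching equation (2).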
The optimal mode determination unit 82 calculates cost function values for the intra prediction modes for which predicted images have been generated, and determines the intra prediction mode to which the minimum of the calculated cost function values is assigned as the optimal intra prediction mode. The information of the determined optimal intra prediction mode is supplied to the mode determination unit 91, the optimal offset amount determination unit 83, and the predicted image generation unit 84. When the information is supplied to the predicted image generation unit 84, the corresponding cost function value is also supplied.

The optimal offset amount determination unit 83 receives the image of the target block to be intra predicted, read out from the screen rearrangement buffer 62, and the information of the prediction mode determined by the optimal mode determination unit 82 as the optimal intra prediction mode. The optimal offset amount determination unit 83 also receives the adjacent pixels whose phases have been shifted by linear interpolation by the horizontal direction interpolation unit 92 and the vertical direction interpolation unit 93 in accordance with the optimal intra prediction mode, and furthermore reads out the adjacent pixels corresponding to the target block to be intra predicted from the adjacent pixel buffer unit 81.

Using the image of the target block, the corresponding adjacent pixels, and the corresponding adjacent pixels whose phases have been shifted, the optimal offset amount determination unit 83 determines the optimal offset amount of the phase of the adjacent pixels for the prediction mode determined by the optimal mode determination unit 82. Specifically, the optimal offset amount determination unit 83 calculates, for example, prediction errors (residuals), and determines the offset amount that minimizes the calculated prediction error as the optimal offset amount. The information of the optimal offset amount determined by the optimal offset amount determination unit 83 is supplied to the predicted image generation unit 84.
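A rough sketch of the kind of search the optimal offset amount determination unit performs is shown below. The candidate offsets, the use of vertical prediction, and the sum-of-absolute-differences residual are illustrative assumptions chosen for this sketch, not details taken from the patent.

```python
def predict_vertical(neighbors_top):
    """Mode-0-style vertical prediction: copy the row above into a 4x4 block."""
    return [list(neighbors_top[:4]) for _ in range(4)]

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_offset(target, top_row_at_offset):
    """Pick the phase offset whose interpolated adjacent pixels give the
    smallest residual. `top_row_at_offset` maps a candidate offset (e.g. in
    fractional-pel units) to the adjacent-pixel row interpolated at that phase."""
    costs = {offset: sad(target, predict_vertical(row))
             for offset, row in top_row_at_offset.items()}
    return min(costs, key=costs.get)

# Toy example: the target block matches the row sampled at offset 1 exactly.
target = [[12, 14, 16, 18]] * 4
candidates = {0: [10, 12, 14, 16], 1: [12, 14, 16, 18], 2: [14, 16, 18, 20]}
assert best_offset(target, candidates) == 1
```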
The predicted image generation unit 84 receives the information of the prediction mode determined by the optimal mode determination unit 82 and the corresponding cost function value, as well as the information of the optimal offset amount determined by the optimal offset amount determination unit 83. The predicted image generation unit 84 reads out the adjacent pixels corresponding to the target block to be intra predicted from the adjacent pixel buffer unit 81, and shifts the phase of the read adjacent pixels by the optimal offset amount in the phase direction corresponding to the prediction mode.

Using the adjacent pixels whose phases have been shifted, the predicted image generation unit 84 performs intra prediction in the optimal intra prediction mode determined by the optimal mode determination unit 82, generates the predicted image of the target block, and outputs the generated predicted image together with the corresponding cost function value to the predicted image selection unit 77.

When the predicted image selection unit 77 has selected the predicted image generated in the optimal intra prediction mode, the predicted image generation unit 84 supplies the information indicating the optimal intra prediction mode and the information of the offset amount to the reversible encoding unit 66.

The mode determination unit 91 outputs control signals corresponding to the prediction mode determined by the optimal mode determination unit 82 to the horizontal direction interpolation unit 92 and the vertical direction interpolation unit 93. For example, a control signal instructing ON of the interpolation processing is output in accordance with the prediction mode.

The horizontal direction interpolation unit 92 and the vertical direction interpolation unit 93 each read out adjacent pixels from the adjacent pixel buffer unit 81 in accordance with the control signal from the mode determination unit 91.
The horizontal direction interpolation unit 92 and the vertical direction interpolation unit 93 shift the phases of the read adjacent pixels in the horizontal direction and the vertical direction, respectively, by 6-tap FIR filtering and linear interpolation. The information of the adjacent pixels interpolated by the horizontal direction interpolation unit 92 and the vertical direction interpolation unit 93 is supplied to the optimal offset amount determination unit 83.

[Description of the encoding processing of the image encoding device]
Next, the encoding processing of the image encoding device 51 of Fig. 2 will be described with reference to the flowchart of Fig. 7.

In step S11, the A/D conversion unit 61 A/D-converts the input image. In step S12, the screen rearrangement buffer 62 stores the image supplied from the A/D conversion unit 61, and rearranges the pictures from their display order into the order in which they are to be encoded.

In step S13, the arithmetic unit 63 calculates the difference between the image rearranged in step S12 and the predicted image. The predicted image is supplied to the arithmetic unit 63 via the predicted image selection unit 77, from the motion prediction/compensation unit 76 in the case of inter prediction, and from the intra prediction unit 74 in the case of intra prediction.

The difference data has a smaller data amount than the original image data. Therefore, the data amount can be compressed compared with the case where the image is encoded as it is.

In step S14, the orthogonal transform unit 64 performs an orthogonal transform on the difference information supplied from the arithmetic unit 63. Specifically, an orthogonal transform such as a discrete cosine transform or a Karhunen-Loève transform is performed, and transform coefficients are output.
In step S15, the quantization unit 65 quantizes the transform coefficients. When this quantization is performed, the rate is controlled as described in the processing of step S25 below.

The difference information quantized in this way is locally decoded as follows. That is, in step S16, the inverse quantization unit 68 inversely quantizes the transform coefficients quantized by the quantization unit 65, with a characteristic corresponding to the characteristic of the quantization unit 65. In step S17, the inverse orthogonal transform unit 69 performs an inverse orthogonal transform on the transform coefficients inversely quantized by the inverse quantization unit 68, with a characteristic corresponding to the characteristic of the orthogonal transform unit 64.

In step S18, the arithmetic unit 70 adds the predicted image input via the predicted image selection unit 77 to the locally decoded difference information, and generates a locally decoded image (an image corresponding to the input to the arithmetic unit 63). In step S19, the deblocking filter 71 filters the image output from the arithmetic unit 70. By this, block distortion is removed. In step S20, the frame memory 72 stores the filtered image. Note that images that have not been filtered by the deblocking filter 71 are also supplied from the arithmetic unit 70 to the frame memory 72 and stored.

In step S21, the intra prediction unit 74 and the motion prediction/compensation unit 76 each perform their image prediction processing. That is, in step S21, the intra prediction unit 74 performs intra prediction processing in the intra prediction modes, and the motion prediction/compensation unit 76 performs motion prediction and compensation processing in the inter prediction modes.

The details of the prediction processing in step S21 will be described below with reference to Fig. 8; by this processing, prediction processing is performed in all the candidate prediction modes, and cost function values are calculated for all the candidate prediction modes.
Then, the optimal intra prediction mode is selected on the basis of the calculated cost function values, and the predicted image generated by intra prediction in the optimal intra prediction mode and its cost function value are supplied to the predicted image selection unit 77. Specifically, at this time, the intra prediction unit 74 supplies to the predicted image selection unit 77 the predicted image generated using the adjacent pixels which have been interpolated by 6-tap FIR filtering and linear interpolation and whose phases have been shifted by the optimal offset amount in the shift direction corresponding to the optimal intra prediction mode. The cost function value of the optimal intra prediction mode is supplied to the predicted image selection unit 77 together with the predicted image.

On the other hand, the optimal inter prediction mode is determined from among the inter prediction modes on the basis of the calculated cost function values, and the predicted image generated in the optimal inter prediction mode and its cost function value are supplied to the predicted image selection unit 77.

In step S22, the predicted image selection unit 77 determines one of the optimal intra prediction mode and the optimal inter prediction mode as the optimal prediction mode, on the basis of the cost function values output by the intra prediction unit 74 and the motion prediction/compensation unit 76. The predicted image selection unit 77 then selects the predicted image of the determined optimal prediction mode and supplies it to the arithmetic units 63 and 70. As described above, this predicted image is used for the operations of steps S13 and S18. The selection information of the predicted image is supplied to the intra prediction unit 74 or the motion prediction/compensation unit 76.
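The decision in step S22 reduces to taking the minimum over the reported cost function values. A schematic sketch is shown below; the mode labels and cost figures are invented for illustration and do not reflect the cost functions actually used.

```python
def choose_optimal_mode(candidates):
    """Pick the (mode, predicted_image) pair with the smallest cost value.
    `candidates` maps a mode label to a (cost, predicted_image) tuple."""
    mode = min(candidates, key=lambda m: candidates[m][0])
    return mode, candidates[mode][1]

# Invented cost values: intra wins here because its cost is lower.
candidates = {
    "optimal_intra": (1520, "intra predicted image"),
    "optimal_inter": (1740, "inter predicted image"),
}
mode, image = choose_optimal_mode(candidates)
assert mode == "optimal_intra"
```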
As described above, when the predicted image of the optimal intra prediction mode has been selected, the intra prediction unit 74 supplies the information indicating the optimal intra prediction mode (that is, the intra prediction mode information) and the information of the offset amount determined as the optimal offset amount to the reversible encoding unit 66.

When the predicted image of the optimal inter prediction mode has been selected, the motion prediction/compensation unit 76 outputs the information indicating the optimal inter prediction mode and, as necessary, information corresponding to the optimal inter prediction mode to the reversible encoding unit 66. The information corresponding to the optimal inter prediction mode includes, for example, motion vector information, flag information, and reference frame information.

In step S23, the reversible encoding unit 66 encodes the quantized transform coefficients output from the quantization unit 65. That is, the difference image is subjected to reversible encoding such as variable-length coding or arithmetic coding, and compressed. At this time, the intra prediction mode information from the intra prediction unit 74 or the information corresponding to the optimal inter prediction mode from the motion prediction/compensation unit 76, which were input to the reversible encoding unit 66 in step S22 described above, are also encoded and added to the header information.

In step S24, the storage buffer 67 stores the difference image as a compressed image. The compressed image stored in the storage buffer 67 is read out as appropriate and transmitted to the decoding side via the transmission path.

In step S25, on the basis of the compressed images stored in the storage buffer 67, the rate of the quantization operation of the quantization unit 65 is controlled so that overflow or underflow does not occur.
[Description of the prediction processing]
Next, the prediction processing in step S21 of Fig. 7 will be described with reference to the flowchart of Fig. 8.

When the image of the processing target supplied from the screen rearrangement buffer 62 is an image of a block to be intra processed, decoded images to be referred to are read out from the frame memory 72 and supplied to the intra prediction unit 74 via the switch 73. In step S31, on the basis of these images, the intra prediction unit 74 intra predicts the pixels of the block to be processed in all the candidate intra prediction modes. Note that pixels which have not been deblocking filtered by the deblocking filter 71 are used as the decoded pixels to be referred to.

The details of the intra prediction processing in step S31 will be described below; by this processing, intra prediction is performed in all the candidate intra prediction modes, and cost function values are calculated for all the candidate intra prediction modes. Then, the optimal intra prediction mode is determined on the basis of the calculated cost function values, and the phases of the adjacent pixels are shifted by the optimal offset amount, by 6-tap FIR filtering and linear interpolation, in the shift direction corresponding to the determined optimal intra prediction mode. Using the adjacent pixels whose phases have been shifted, a predicted image is generated by intra prediction in the optimal intra prediction mode. The generated predicted image and the cost function value of the optimal intra prediction mode are supplied to the predicted image selection unit 77.
When the image of the processing target supplied from the screen rearrangement buffer 62 is an image to be inter processed, images to be referred to are read out from the frame memory 72 and supplied to the motion prediction/compensation unit 76 via the switch 73. On the basis of these images, in step S32, the motion prediction/compensation unit 76 performs inter motion prediction processing. That is, the motion prediction/compensation unit 76 refers to the images supplied from the frame memory 72 and performs motion prediction processing in all the candidate inter prediction modes.

The details of the inter motion prediction processing in step S32 will be described below with reference to Fig. 22; by this processing, motion prediction processing is performed in all the candidate inter prediction modes, and cost function values are calculated for all the candidate inter prediction modes.

In step S33, the motion prediction/compensation unit 76 compares the cost function values for the inter prediction modes calculated in step S32, and determines the prediction mode giving the minimum value as the optimal inter prediction mode. The motion prediction/compensation unit 76 then supplies the predicted image generated in the optimal inter prediction mode and its cost function value to the predicted image selection unit 77.

[Description of the intra prediction processing in the H.264/AVC method]
Next, the intra prediction modes specified in the H.264/AVC method will be described.

First, the intra prediction modes for the luminance signal will be described. For the intra prediction modes of the luminance signal, three methods are specified: the intra 4×4 prediction mode, the intra 8×8 prediction mode, and the intra 16×16 prediction mode. These are modes that specify block units, and are set for each macroblock.
Furthermore, for the color difference signals, an intra prediction mode independent of that of the luminance signal can be set for each macroblock.

In the case of the intra 4×4 prediction mode, one prediction mode can be set out of nine prediction modes for each target block of 4×4 pixels. In the case of the intra 8×8 prediction mode, one prediction mode can be set out of nine prediction modes for each target block of 8×8 pixels. In the case of the intra 16×16 prediction mode, one prediction mode can be set out of four prediction modes for the target macroblock of 16×16 pixels.

Note that, in the following, the intra 4×4 prediction mode, the intra 8×8 prediction mode, and the intra 16×16 prediction mode are also referred to, as appropriate, as the intra prediction mode of 4×4 pixels, the intra prediction mode of 8×8 pixels, and the intra prediction mode of 16×16 pixels, respectively.

In the example of Fig. 9, the numbers -1 to 25 assigned to the respective blocks indicate the bit stream order of those blocks (the processing order on the decoding side). Regarding the luminance signal, the macroblock is divided into 4×4 pixels, and a DCT of 4×4 pixels is performed. Then, only in the case of the intra 16×16 prediction mode, as indicated by the block -1, the DC components of the respective blocks are gathered to generate a 4×4 matrix, and an orthogonal transform is further applied to it.

On the other hand, regarding the color difference signals, the macroblock is divided into 4×4 pixels and a DCT of 4×4 pixels is performed, and then, as indicated by the blocks 16 and 17, the DC components of the respective blocks are gathered to generate a 2×2 matrix, and an orthogonal transform is further applied to it.
Note that, regarding the intra 8×8 prediction mode, it is applicable only to the case where the target macroblock is subjected to an 8×8 orthogonal transform in the High Profile or a higher profile.

Figs. 10 and 11 are diagrams showing the nine kinds of intra prediction modes (Intra_4x4_pred_mode) of 4×4 pixels for the luminance signal. Each of the eight modes other than mode 2, which indicates average value (DC) prediction, corresponds to a direction indicated by the numbers 0, 1, and 3 to 8.

The nine kinds of Intra_4x4_pred_mode will be described with reference to Fig. 12. In the example of Fig. 12, the pixels a to p represent the pixels of the target block to be intra processed, and the pixel values A to M represent the pixel values of the pixels belonging to the adjacent blocks. That is, the pixels a to p are the image of the processing target read out from the screen rearrangement buffer 62, and the pixel values A to M are the pixel values of the decoded images that are read out from the frame memory 72 and referred to.

In the case of each intra prediction mode shown in Figs. 10 and 11, the predicted pixel values of the pixels a to p are generated as follows, using the pixel values A to M of the pixels belonging to the adjacent blocks. Here, a pixel value being "available" means that the pixel can be used, there being no reason such as being at an edge of the frame or not yet having been encoded; a pixel value being "unavailable" means that the pixel cannot be used, for a reason such as being at an edge of the frame or not yet having been encoded.

Mode 0 is the Vertical Prediction mode.

Mode 0 applies only to the case where the pixel values A to D are "available". In this case, the predicted pixel values of the pixels a to p are generated as in the following equation (7).

Predicted pixel values of pixels a, e, i, m = A
Predicted pixel values of pixels b, f, j, n = B
Predicted pixel values of pixels c, g, k, o = C
Predicted pixel values of pixels d, h, l, p = D …(7)

Mode 1 is the Horizontal Prediction mode, which applies only to the case where the pixel values I to L are "available". In this case, the predicted pixel values of the pixels a to p are generated as in the following equation (8).

Predicted pixel values of pixels a, b, c, d = I
Predicted pixel values of pixels e, f, g, h = J
Predicted pixel values of pixels i, j, k, l = K
Predicted pixel values of pixels m, n, o, p = L …(8)

Mode 2 is the DC Prediction mode. When the pixel values A, B, C, D, I, J, K, L are all "available", the predicted pixel values are generated as in equation (9).

(A+B+C+D+I+J+K+L+4) >> 3 …(9)

When the pixel values A, B, C, D are all "unavailable", the predicted pixel values are generated as in equation (10).

(I+J+K+L+2) >> 2 …(10)

When the pixel values I, J, K, L are all "unavailable", the predicted pixel values are generated as in equation (11).

(A+B+C+D+2) >> 2 …(11)

When the pixel values A, B, C, D, I, J, K, L are all "unavailable", 128 is used as the predicted pixel value.

Mode 3 is the Diagonal_Down_Left Prediction mode, which applies only to the case where the pixel values A, B, C, D, I, J, K, L, M are "available". In this case, the predicted pixel values of the pixels a to p are generated as in the following equation (12).

Predicted pixel value of pixel a = (A+2B+C+2) >> 2
Predicted pixel values of pixels b, e = (B+2C+D+2) >> 2
Predicted pixel values of pixels c, f, i = (C+2D+E+2) >> 2
Predicted pixel values of pixels d, g, j, m = (D+2E+F+2) >> 2
Predicted pixel values of pixels h, k, n = (E+2F+G+2) >> 2
Predicted pixel values of pixels l, o = (F+2G+H+2) >> 2
Predicted pixel value of pixel p = (G+3H+2) >> 2 …(12)

Mode 4 is the Diagonal_Down_Right Prediction mode, which applies only to the case where the pixel values A, B, C, D, I, J, K, L, M are "available". In this case, the predicted pixel values of the pixels a to p are generated as in the following equation (13).

Predicted pixel value of pixel m = (J+2K+L+2) >> 2
Predicted pixel values of pixels i, n = (I+2J+K+2) >> 2
Predicted pixel values of pixels e, j, o = (M+2I+J+2) >> 2
Predicted pixel values of pixels a, f, k, p = (A+2M+I+2) >> 2
Predicted pixel values of pixels b, g, l = (M+2A+B+2) >> 2
Predicted pixel values of pixels c, h = (A+2B+C+2) >> 2
Predicted pixel value of pixel d = (B+2C+D+2) >> 2 …(13)

Mode 5 is the Diagonal_Vertical_Right Prediction mode, which applies only to the case where the pixel values A, B, C, D, I, J, K, L, M are "available". In this case, the predicted pixel values of the pixels a to p are generated as in the following equation (14).

Predicted pixel values of pixels a, j = (M+A+1) >> 1
Predicted pixel values of pixels b, k = (A+B+1) >> 1
Predicted pixel values of pixels c, l = (B+C+1) >> 1
Predicted pixel value of pixel d = (C+D+1) >> 1
Predicted pixel values of pixels e, n = (I+2M+A+2) >> 2
Predicted pixel values of pixels f, o = (M+2A+B+2) >> 2
Predicted pixel values of pixels g, p = (A+2B+C+2) >> 2
Predicted pixel value of pixel h = (B+2C+D+2) >> 2
Predicted pixel value of pixel i = (M+2I+J+2) >> 2
Predicted pixel value of pixel m = (I+2J+K+2) >> 2 …(14)

Mode 6 is the Horizontal_Down Prediction mode, which applies only to the case where the pixel values A, B, C, D, I, J, K, L, M are "available". In this case, the predicted pixel values of the pixels a to p are generated as in the following equation (15).

Predicted pixel values of pixels a, g = (M+I+1) >> 1
Predicted pixel values of pixels b, h = (I+2M+A+2) >> 2
Predicted pixel value of pixel c = (M+2A+B+2) >> 2
Predicted pixel value of pixel d = (A+2B+C+2) >> 2
Predicted pixel values of pixels e, k = (I+J+1) >> 1
Predicted pixel values of pixels f, l = (M+2I+J+2) >> 2
Predicted pixel values of pixels i, o = (J+K+1) >> 1
Predicted pixel values of pixels j, p = (I+2J+K+2) >> 2
Predicted pixel value of pixel m = (K+L+1) >> 1
Predicted pixel value of pixel n = (J+2K+L+2) >> 2 …(15)

Mode 7 is the Vertical_Left Prediction mode, which applies only to the case where the pixel values A, B, C, D, I, J, K, L, M are "available". In this case, the predicted pixel values of the pixels a to p are generated as in the following equation (16).

Predicted pixel value of pixel a = (A+B+1) >> 1
Predicted pixel values of pixels b, i = (B+C+1) >> 1
Predicted pixel values of pixels c, j = (C+D+1) >> 1
Predicted pixel values of pixels d, k = (D+E+1) >> 1
Predicted pixel value of pixel l = (E+F+1) >> 1
Predicted pixel value of pixel e = (A+2B+C+2) >> 2
Predicted pixel values of pixels f, m = (B+2C+D+2) >> 2
Predicted pixel values of pixels g, n = (C+2D+E+2) >> 2
Predicted pixel values of pixels h, o = (D+2E+F+2) >> 2
Predicted pixel value of pixel p = (E+2F+G+2) >> 2 …(16)

Mode 8 is the Horizontal_Up Prediction mode, which applies only to the case where the pixel values A, B, C, D, I, J, K, L, M are "available". In this case, the predicted pixel values of the pixels a to p are generated as in the following equation (17).

Predicted pixel value of pixel a = (I+J+1) >> 1
Predicted pixel value of pixel b = (I+2J+K+2) >> 2
Predicted pixel values of pixels c, e = (J+K+1) >> 1
Predicted pixel values of pixels d, f = (J+2K+L+2) >> 2
Predicted pixel values of pixels g, i = (K+L+1) >> 1
Predicted pixel values of pixels h, j = (K+3L+2) >> 2
Predicted pixel values of pixels k, l, m, n, o, p = L …(17)
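As an illustration of the first three of these modes, the following sketch generates the 4×4 predicted block from the neighbour pixel values A to D (above) and I to L (left), following equations (7) through (11). The list-based block representation and the function name are assumptions made for this sketch, and availability checking is reduced to passing None.

```python
def intra_4x4_predict(mode, top=None, left=None):
    """4x4 intra prediction for mode 0 (Vertical), 1 (Horizontal), 2 (DC).
    `top` is [A, B, C, D], `left` is [I, J, K, L]; None means "unavailable".
    Returns the predicted 4x4 block as a list of rows."""
    if mode == 0:                        # equation (7): copy the row above
        return [list(top) for _ in range(4)]
    if mode == 1:                        # equation (8): copy the column to the left
        return [[left[r]] * 4 for r in range(4)]
    if mode == 2:                        # equations (9)-(11): neighbour average
        if top and left:
            dc = (sum(top) + sum(left) + 4) >> 3
        elif left:                       # A..D unavailable, equation (10)
            dc = (sum(left) + 2) >> 2
        elif top:                        # I..L unavailable, equation (11)
            dc = (sum(top) + 2) >> 2
        else:                            # nothing available
            dc = 128
        return [[dc] * 4 for _ in range(4)]
    raise ValueError("only modes 0-2 are sketched here")

block = intra_4x4_predict(2, top=[96, 100, 104, 108], left=[90, 94, 98, 102])
```

The directional modes 3 through 8 follow the same pattern, filling each pixel from the filtered neighbour combinations listed in equations (12) through (17).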
Next, the encoding method for the intra prediction modes (Intra_4x4_pred_mode) of 4×4 pixels for the luminance signal will be described with reference to Fig. 13. In the example of Fig. 13, a target block C which consists of 4×4 pixels and is to be encoded is shown, together with a block A and a block B which are adjacent to the target block C and consist of 4×4 pixels.

In this case, it can be considered that the Intra_4x4_pred_mode in the target block C and the Intra_4x4_pred_mode in the block A and the block B have a high correlation. Using this correlation, the encoding processing is performed as follows, whereby higher coding efficiency can be realized.

That is, in the example of Fig. 13, with the Intra_4x4_pred_mode in the block A and the block B taken as Intra_4x4_pred_modeA and Intra_4x4_pred_modeB respectively, MostProbableMode is defined as the following equation (18).

MostProbableMode = Min(Intra_4x4_pred_modeA, Intra_4x4_pred_modeB) …(18)

That is, of the block A and the block B, the one assigned the smaller mode_number is taken as MostProbableMode.

In the bit stream, two values, prev_intra4x4_pred_mode_flag[luma4x4BlkIdx] and rem_intra4x4_pred_mode[luma4x4BlkIdx], are defined as parameters for the target block C, and by decoding processing based on the pseudo code shown in the following equation (19), the value of Intra_4x4_pred_mode for the target block C, Intra4x4PredMode[luma4x4BlkIdx], can be obtained.

if(prev_intra4x4_pred_mode_flag[luma4x4BlkIdx])

Intra4x4PredMode[luma4x4BlkIdx] = MostProbableMode
else if(rem_intra4x4_pred_mode[luma4x4BlkIdx] < MostProbableMode)

Intra4x4PredMode[luma4x4BlkIdx] = rem_intra4x4_pred_mode[luma4x4BlkIdx]
else

Intra4x4PredMode[luma4x4BlkIdx] = rem_intra4x4_pred_mode[luma4x4BlkIdx] + 1 …(19)

Next, the intra prediction modes of 16×16 pixels will be described. Figs. 14 and 15 are diagrams showing the four kinds of intra prediction modes (Intra_16x16_pred_mode) of 16×16 pixels for the luminance signal.

The four kinds of intra prediction modes will be described with reference to Fig. 16. In the example of Fig. 16, a target macroblock A to be intra processed is shown, and P(x, y); x, y = -1, 0, …, 15 represents the pixel values of the pixels adjacent to the target macroblock A.

Mode 0 is the Vertical Prediction mode, which applies only to the case where P(x, -1); x, y = -1, 0, …, 15 is "available". In this case, the predicted pixel value Pred(x, y) of each pixel of the target macroblock A is generated as in the following equation (20).

Pred(x,y) = P(x,-1); x,y = 0, ..., 15   ...(20)

Mode 1 is the Horizontal Prediction mode, which is applicable only when P(-1,y); x,y = -1, 0, ..., 15 are "available". In that case, the predicted pixel value Pred(x,y) of each pixel of the target macroblock A is generated according to the following expression (21).

Pred(x,y) = P(-1,y); x,y = 0, ..., 15   ...(21)

Mode 2 is the DC Prediction mode. When P(x,-1) and P(-1,y); x,y = -1, 0, ..., 15 are all "available", the predicted pixel value Pred(x,y) of each pixel of the target macroblock A is generated according to the following expression (22).

Pred(x,y) = [ Σ_{x'=0}^{15} P(x',-1) + Σ_{y'=0}^{15} P(-1,y') + 16 ] >> 5, where x,y = 0, ..., 15   ...(22)

When P(x,-1); x,y = -1, 0, ..., 15 are "unavailable", the predicted pixel value Pred(x,y) of each pixel of the target macroblock A is generated according to the following expression (23).

Pred(x,y) = [ Σ_{y'=0}^{15} P(-1,y') + 8 ] >> 4, where x,y = 0, ..., 15   ...(23)

When P(-1,y); x,y = -1, 0, ..., 15 are "unavailable", the predicted pixel value Pred(x,y) of each pixel of the target macroblock A is generated according to the following expression (24).

Pred(x,y) = [ Σ_{x'=0}^{15} P(x',-1) + 8 ] >> 4, where x,y = 0, ..., 15   ...(24)

When P(x,-1) and P(-1,y); x,y = -1, 0, ..., 15 are all "unavailable", 128 is used as the predicted pixel value.

Mode 3 is the Plane Prediction mode, which is applicable only when P(x,-1) and P(-1,y); x,y = -1, 0, ..., 15 are all "available". In that case, the predicted pixel value Pred(x,y) of each pixel of the target macroblock A is generated according to the following expression (25).

Pred(x,y) = Clip1( (a + b·(x-7) + c·(y-7) + 16) >> 5 ); x,y = 0, ..., 15
  a = 16·(P(-1,15) + P(15,-1))
  b = (5·H + 32) >> 6
  c = (5·V + 32) >> 6
  H = Σ_{x=1}^{8} x·(P(7+x,-1) - P(7-x,-1))
  V = Σ_{y=1}^{8} y·(P(-1,7+y) - P(-1,7-y))
   ...(25)

Next, the intra prediction modes for the color-difference signal will be described. Fig. 17 shows the four intra prediction modes (Intra_chroma_pred_mode) for the color-difference signal. The intra prediction mode for the color-difference signal can be set independently of the intra prediction mode for the luminance signal. The intra prediction modes for the color-difference signal follow the 16x16-pixel intra prediction modes for the luminance signal described above.

However, while the 16x16-pixel intra prediction modes for the luminance signal take a block of 16x16 pixels as their target, the intra prediction modes for the color-difference signal take a block of 8x8 pixels as their target. Furthermore, as shown in Figs. 14 and 17 above, the mode numbers of the two do not correspond.
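For reference, the 16x16-pixel DC prediction rules of expressions (22) to (24) above, together with the fallback value of 128 when no adjacent pixels are available, can be sketched as follows. The function name and the list-based representation of the adjacent pixels are illustrative assumptions, not part of the specification.

```python
def dc_predict_16x16(top, left):
    # DC prediction (mode 2) for a 16x16 luminance macroblock.
    # `top` is the row P(x,-1) and `left` the column P(-1,y);
    # either may be None when the corresponding pixels are unavailable.
    if top is not None and left is not None:
        dc = (sum(top) + sum(left) + 16) >> 5   # expression (22)
    elif left is not None:
        dc = (sum(left) + 8) >> 4               # expression (23): top unavailable
    elif top is not None:
        dc = (sum(top) + 8) >> 4                # expression (24): left unavailable
    else:
        dc = 128                                # neither side available
    # Every pixel of the macroblock receives the same DC value.
    return [[dc] * 16 for _ in range(16)]
```

The added rounding terms (+16 and +8) make the shifts behave as rounded, rather than truncated, averages.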
Here, in accordance with the definitions of the pixel values of the target macroblock A and of the adjacent pixels used in the description of the 16x16-pixel intra prediction modes for the luminance signal with reference to Fig. 16, the pixel value of a pixel adjacent to the target macroblock A to be intra processed (an 8x8-pixel block in the case of the color-difference signal) is denoted P(x,y); x,y = -1, 0, ..., 7.

Mode 0 is the DC Prediction mode. When P(x,-1) and P(-1,y); x,y = -1, 0, ..., 7 are all "available", the predicted pixel value Pred(x,y) of each pixel of the target macroblock A is generated according to the following expression (26).

Pred(x,y) = ( Σ_{n=0}^{7} ( P(-1,n) + P(n,-1) ) + 8 ) >> 4, where x,y = 0, ..., 7   ...(26)

When P(-1,y); x,y = -1, 0, ..., 7 are "unavailable", the predicted pixel value Pred(x,y) of each pixel of the target macroblock A is generated according to the following expression (27).

Pred(x,y) = [ ( Σ_{n=0}^{7} P(n,-1) ) + 4 ] >> 3, where x,y = 0, ..., 7   ...(27)

When P(x,-1); x,y = -1, 0, ..., 7 are "unavailable", the predicted pixel value Pred(x,y) of each pixel of the target macroblock A is generated according to the following expression (28).

Pred(x,y) = [ ( Σ_{n=0}^{7} P(-1,n) ) + 4 ] >> 3, where x,y = 0, ..., 7   ...(28)

Mode 1 is the Horizontal Prediction mode, which is applicable only when P(-1,y); x,y = -1, 0, ..., 7 are "available". In that case, the predicted pixel value Pred(x,y) of each pixel of the target macroblock A is generated according to the following expression (29).

Pred(x,y) = P(-1,y); x,y = 0, ..., 7   ...(29)

Mode 2 is the Vertical Prediction mode, which is applicable only when P(x,-1); x,y = -1, 0, ..., 7 are "available". In that case, the predicted pixel value Pred(x,y) of each pixel of the target macroblock A is generated according to the following expression (30).

Pred(x,y) = P(x,-1); x,y = 0, ..., 7   ...(30)

Mode 3 is the Plane Prediction mode, which is applicable only when P(x,-1) and P(-1,y); x,y = -1, 0, ..., 7 are all "available". In that case, the predicted pixel value Pred(x,y) of each pixel of the target macroblock A is generated according to the following expression (31).
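For reference, the DC prediction of expressions (26) to (28) above for an 8x8 color-difference block can be sketched in the same way. The function name and data layout are illustrative, and the fallback value of 128 when neither side is available is an assumption made by analogy with the luminance case described above, not something stated here.

```python
def dc_predict_chroma_8x8(top, left):
    # DC prediction (mode 0) for an 8x8 color-difference block.
    # `top` is P(n,-1) and `left` is P(-1,n); either may be None
    # when the corresponding pixels are unavailable.
    if top is not None and left is not None:
        dc = (sum(left) + sum(top) + 8) >> 4   # expression (26)
    elif top is not None:
        dc = (sum(top) + 4) >> 3               # expression (27): left unavailable
    elif left is not None:
        dc = (sum(left) + 4) >> 3              # expression (28): top unavailable
    else:
        dc = 128                               # assumed fallback, as in the luma case
    return [[dc] * 8 for _ in range(8)]
```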

Pred(x,y) = Clip1( (a + b·(x-3) + c·(y-3) + 16) >> 5 ); x,y = 0, ..., 7
  a = 16·(P(-1,7) + P(7,-1))
  b = (17·H + 16) >> 5
  c = (17·V + 16) >> 5
  H = Σ_{x=1}^{4} x·(P(3+x,-1) - P(3-x,-1))
  V = Σ_{y=1}^{4} y·(P(-1,3+y) - P(-1,3-y))
   ...(31)

As described above, the intra prediction modes for the luminance signal include nine prediction modes in block units of 4x4 pixels and 8x8 pixels, and four prediction modes in macroblock units of 16x16 pixels. The block-unit modes are set for each macroblock unit. The intra prediction modes for the color-difference signal include four prediction modes in block units of 8x8 pixels, and the intra prediction mode for the color-difference signal can be set independently of the intra prediction mode for the luminance signal.

For the 4x4-pixel intra prediction modes (intra 4x4 prediction modes) and the 8x8-pixel intra prediction modes (intra 8x8 prediction modes) of the luminance signal, one intra prediction mode is set for each 4x4-pixel or 8x8-pixel block of the luminance signal. For the 16x16-pixel intra prediction mode (intra 16x16 prediction mode) of the luminance signal and the intra prediction mode of the color-difference signal, one prediction mode is set for one macroblock.

The types of prediction modes correspond to the directions indicated by the numbers 0, 1, and 3 to 8 in the figure described above. Prediction mode 2 is average-value prediction.

As described above, intra prediction in the H.264/AVC scheme is performed with integer-pixel accuracy. In contrast, in the image encoding device 51, intra prediction with fractional-pixel accuracy is performed.

[Operation of intra prediction with fractional-pixel accuracy]

Next, the operation for realizing intra prediction with fractional-pixel accuracy will be described with reference to Fig. 18. In the example of Fig. 18, a case in which the target block is 4x4 pixels is shown.

In the example of Fig. 18, the black dots represent the pixels of the target block of intra prediction, and the white dots represent the adjacent pixels neighboring the target block. In more detail, among the white dots, the upper-left adjacent pixel neighboring the upper-left part of the target block corresponds to the pixel of pixel value M in Fig. 12. Among the white dots, the upper adjacent pixels A0, A1, A2, ... neighboring the upper part of the target block correspond to the pixels of pixel values A to H in Fig. 12. Among the white dots, the left adjacent pixels I0, I1, I2, ... neighboring the left part of the target block correspond to the pixels of pixel values I to L in Fig. 12.

Furthermore, a-0.5, a+0.5, ... and i-0.5, i+0.5, ... shown between the adjacent pixels represent pixels with 1/2-pixel accuracy, and a-0.75, a-0.25, a+0.25, a+0.75, ... and i-0.75, i-0.25, i+0.25, i+0.75, ... shown between those pixels represent pixels with 1/4-pixel accuracy.

First, as the first operation, the intra prediction unit 74 performs intra prediction for each intra prediction mode using the pixel values A to M shown in Fig. 12, and determines the optimal intra prediction mode from among the intra prediction modes. When the target block is 4x4 pixels, this optimal intra prediction mode is one of the nine prediction modes of Fig. 10 or Fig. 11.

For example, suppose mode 0 (Vertical Prediction mode) is selected as the optimal intra prediction mode. At this time, the adjacent pixels used in the prediction of the target block are the pixels of pixel values A to D in Fig. 12, that is, the pixels A0, A1, A2, A3 in Fig. 18.

As the second operation, the adjacent pixel interpolation unit 75 generates the pixels a-0.5, a+0.5, ... with 1/2-pixel accuracy of Fig. 18 by the 6-tap FIR filter of the H.264/AVC scheme described with reference to Fig. 4. That is, the pixel a+0.5 is expressed by the following expression (32).

a+0.5 = (A-2 - 5·A-1 + 20·A0 + 20·A1 - 5·A2 + A3 + 16) >> 5   ...(32)

The same applies to the other pixels with 1/2-pixel accuracy, such as a+1.5 and a+2.5.

As the third operation, the adjacent pixel interpolation unit 75 generates the pixels a-0.75, a-0.25, a+0.25, a+0.75, ... with 1/4-pixel accuracy of Fig. 18 by linear interpolation from the pixels A0, A1, A2, A3 and the pixels a-0.5, a+0.5, .... That is, the pixel a+0.25 is expressed by the following expression (33).

a+0.25 = (A0 + a+0.5 + 1) >> 1   ...(33)

The same applies to the other pixels with 1/4-pixel accuracy.

As the fourth operation, in the case of mode 0, the intra prediction unit 74 takes the phase differences between the integer pixels and the fractional-accuracy pixels, namely the values -0.75, -0.50, -0.25, +0.25, +0.50, +0.75, as candidates for the offset in the horizontal direction, and determines the optimal offset.

For example, when the offset is +0.25, intra prediction is performed using the pixels a+0.25, a+1.25, a+2.25, a+3.25 in place of the pixel values of the pixels A0, A1, A2, A3.

In this way, the optimal offset is determined for the optimal intra prediction mode selected in the first operation. There may also be a case in which an offset of 0 is found to be optimal and the pixel values of the integer pixels are used.

Among the nine prediction modes shown in Fig. 10 or Fig. 11, mode 2 (DC prediction mode) performs averaging. Therefore, even if an offset were applied, it would not directly contribute to improving the coding efficiency, and so the above operation is prohibited and not performed for this mode.

For mode 0 (Vertical Prediction mode), mode 3 (Diagonal_Down_Left Prediction mode), and mode 7 (Vertical_Left Prediction mode), only offsets of the upper adjacent pixels A0, A1, A2, ... in Fig. 18 become candidates.

For mode 1 (Horizontal Prediction mode) and mode 8 (Horizontal_Up Prediction mode), only offsets of the left adjacent pixels I0, I1, I2, ... in Fig. 18 become candidates.

For the other modes (modes 4 to 6), offsets of both the upper adjacent pixels and the left adjacent pixels must be considered.

Also, for the upper adjacent pixels only the offset in the horizontal direction is determined, and for the left adjacent pixels only the offset in the vertical direction is determined.

By performing the first to fourth operations above and determining the optimal offset, the choices of pixel values used in the intra prediction modes can be increased, and better intra prediction becomes possible. The coding efficiency of intra prediction can thereby be further improved.

Also, in the H.264/AVC scheme, as described with reference to Fig. 4, the circuit of the 6-tap FIR filter, which has hitherto been applied only to inter motion prediction compensation, can be effectively used for intra prediction as well. Coding efficiency can thus be improved without enlarging the circuit.

Furthermore, intra prediction can be performed with a resolution finer than 22.5 degrees, the resolution of intra prediction defined in the H.264/AVC scheme.

[Example of the effect of intra prediction with fractional-pixel accuracy]

In the example of Fig. 19, the broken lines indicate the directions of the prediction modes of intra prediction of the H.264/AVC scheme described above. The numbers attached to the broken lines correspond to the numbers of the nine prediction modes shown in Fig. 10 or Fig. 11. Since mode 2 is average-value prediction, its number is not shown.

In the H.264/AVC scheme, intra prediction can be performed only with the 22.5-degree resolution indicated by the broken lines. In contrast, in the image encoding device 51, by performing intra prediction with fractional-pixel accuracy, intra prediction can be performed with a resolution finer than 22.5 degrees, as indicated by the thick lines. This improves the coding efficiency especially for textures with slanted edges.

[Description of the intra prediction processing]

Next, the intra prediction processing that constitutes the above operation will be described with reference to the flowchart of Fig. 20. This intra prediction processing is the intra prediction processing in step S31 of Fig. 8; in the example of Fig. 20, the case of the luminance signal is described as an example.

In step S41, the optimal mode determination unit 82 performs intra prediction for each of the intra prediction modes of 4x4 pixels, 8x8 pixels, and 16x16 pixels.

As described above, the intra 4x4 prediction mode and the intra 8x8 prediction mode have nine prediction modes, and one prediction mode can be defined for each block. The intra 16x16 prediction mode and the intra prediction mode of the color-difference signal have four prediction modes, and one prediction mode can be defined for one macroblock.

The optimal mode determination unit 82 performs intra prediction on the pixels of the block to be processed in all the types of prediction modes of each intra prediction mode, referring to the decoded adjacent image read from the adjacent pixel buffer unit 81. Predicted images are thereby generated in all the types of prediction modes of each intra prediction mode. As the decoded pixels that are referred to, pixels that have not been subjected to deblocking filtering by the deblocking filter are used.

In step S42, the optimal mode determination unit 82 calculates cost function values for the intra prediction modes of 4x4 pixels, 8x8 pixels, and 16x16 pixels. Here, the cost function value is calculated by either the High Complexity mode or the Low Complexity mode. These modes are defined in the JM (Joint Model), the reference software of the H.264/AVC scheme.

That is, in the High Complexity mode, as the processing of step S41, the processing up to encoding is provisionally performed for all candidate prediction modes. Then, the cost function value expressed by the following expression (34) is calculated for each prediction mode, and the prediction mode that gives its minimum value is selected as the optimal prediction mode.

Cost(Mode) = D + λ·R   ...(34)

D is the difference (distortion) between the original image and the decoded image, R is the amount of generated code including the orthogonal transform coefficients, and λ is the Lagrange multiplier provided as a function of the quantization parameter QP.

On the other hand, in the Low Complexity mode, as the processing of step S41, predicted images are generated and header bits such as motion vector information, prediction mode information, and flag information are calculated for all candidate prediction modes. Then, the cost function value expressed by the following expression (35) is calculated for each prediction mode, and the prediction mode that gives its minimum value is selected as the optimal prediction mode.
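Returning to the interpolation of the second and third operations, expressions (32) and (33) above can be sketched as follows. The function names and the dictionary-based indexing of the adjacent pixel row are illustrative assumptions.

```python
def half_pel(a, i):
    # Expression (32): 1/2-pixel sample between a[i] and a[i+1], using the
    # H.264/AVC 6-tap FIR filter with coefficients (1, -5, 20, 20, -5, 1).
    return (a[i - 2] - 5 * a[i - 1] + 20 * a[i] + 20 * a[i + 1]
            - 5 * a[i + 2] + a[i + 3] + 16) >> 5

def quarter_pel(full, half):
    # Expression (33): 1/4-pixel sample as the rounded average of a
    # neighbouring full-accuracy sample and 1/2-accuracy sample.
    return (full + half + 1) >> 1
```

On a constant row both filters return the constant, and on a linear ramp they return the midpoint values, which is the expected behaviour of an interpolation filter.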

Cost(Mode)=D+QPtoQuant(QP).Header—Bit· .(35) D係原圖像與解碼圖像之差分(失真),Header—Bit係相對於 預測模式之標頭位元,QPt〇Quant係作為量化參數Qp之函數 所提供之函數。 於Low Complexity模式中,僅對全部的預測模式生成預測 圖像’而無需進行編碼處理及解碼處理,故而運算量較少即 可 ° 最佳模式決定部82係於步驟S43中,對4x4像素、8x8像素 及16x16像素之各幀内預測模式分別決定最佳模式。亦即, 如上所述,於幀内4x4預測模式及幀内8χ8預測模式之情形 時’預測模式之類型有9種,於幀内16><16預測模式之情形 時,預測模式之類型有4種。因此,最佳模式決定部82係根 據步驟S42中所計算出之成本函數值,自彼等中決定最佳幀 内4x4預測模式、最佳幀内8χ8預測模式、最佳幀内16\16預 測模式。 最佳模式決定部82係於步驟S44中,自對4x4像素、8χ8像 素及16x16像素之各幀内預測模式所決定之各最佳模式中, 選擇基於步驟S42中所計算出之成本函數值之最佳幀内預測 145451.doc -47- 201105144 !二=’自對4X4像素、8x8像素及16x16像素所決定之 “巾’選擇成本函數值為最小值之模^作為最佳幀 内預測模式。 ~取住頓 之預測模式之資訊被供給至模式判別部9ΐ、最佳偏 ::=Μ3及預測圖像生成部84。又,向預測圖像生成部 亦ί、、,、。與預測模式對應之成本函數值。 中鄰^像素内插部75及最佳偏移量決定部83係於步驟S45 ::接内插處理。步驟S45中之鄰接内插處理之詳細 ^參考圖21於下文中進行敛述,但藉由該處理,於與所 量。與戶預測模式對應之偏移方向上,決定最佳偏移 1部84。卜疋之最佳偏移量相關之資訊被供給至預測圖像生 ^驟S46中’預測圖像生成部嶋使用相位以最佳偏移 里 仃偏移之鄰接像素,生成預測圖像。 亦(7予頁測圖像生成部84係自鄰接像素緩衝部⑴賣出盘需 内預測之對象區塊對應之鄰接像素。繼而,預測圖像 糊藉由6階FIR遽波器及線性内插,於與預測模式對應 :相位方向上’使已讀出之鄰接像素之相位以最佳偏移量: 移預測圖像生成部84係使用相位已進行偏移之鄰接像 素’於藉由最佳模式決定部82所決^之預測模式中進行幢内 預測’生成對象區塊之預測圖像,並將所生成之預測圖像及 所對應之成本函數值供給至預測圖像選擇部π。 另外’於最佳偏移量為〇之情形時,使用來自鄰接像素緩 衝部81之鄰接像素之像素值。 ’、 145451.doc -48· 201105144 於藉由預測圖像選擇部77已選擇以最佳幀内預測模式所生 成之預測圖像之情形時,藉由預測圖像生成部84,將表示該 等最佳幀内預測模式之資訊及偏移量之資訊供給至可逆編碼 部66。繼而,於可逆編碼部66中進行編碼並附加於壓縮圖像 之標頭資訊(上述圖7之步驟S23)。 另外,作為該偏移量之資訊之編碼,對所決定之對象區塊 之偏移量與參考圖13已進行敘述之賦予M〇stPr〇baMeM〇de之 區塊中之偏移量的差分進行編碼。 其中,例如於MostProbableMode為模式2(DC預測)且對象 區塊之預測模式為模式〇(Vertical預測)之情形時不存在賦 予MostProbableMode之區塊中之水平方向之偏移量。又亦 由於為内部片段中之幀内巨集區塊之情況,不存在賦予 MostProbableMode之區塊中之水平方向之偏移量。 於如此之情形時,將賦予M〇stPr〇bableM〇de之區塊中之水 平方向之偏移量設為〇,進行差分編碼處理。 [鄰接像素内插處理之說明] 其次,參考圖21之流程圖,就圖20之步驟S45之鄰接像素 内插處理加以說明β於圖21之例中,就對象區塊為々Μ之 形加以說明。 藉由最佳模式決定部82所決定之預測模式之資訊被供給至 模式判別部91。模式判別部91係於步驟S51中,判定最佳幀 内預測模式疋否為DC模式。於步驟S5 1中,於判定出最佳巾貞 内預測模式並非為DC模式之情形時,處理進入步驟§52。 於步驟S52中,模式判別部91判定最佳幀内預測模式是否 145451.doc -49- 201105144 為 Vertical Prediction mode、Diagonal_Down_Left Prediction mode、或 VerticaI_Left Prediction mode。 於步驟S52中,於判定出最佳幀内預測模式為VerUcaiCost(Mode)=D+QPtoQuant(QP).Header—Bit· .(35) D is the difference (distortion) between the original image and 
the decoded image. The Header-Bit is the header bit relative to the prediction mode, QPt 〇Quant is a function provided as a function of the quantization parameter Qp. In the Low Complexity mode, the prediction image is generated only for all the prediction modes, and the encoding process and the decoding process are not required. Therefore, the amount of calculation is small. The optimal mode determining unit 82 is in step S43, for 4×4 pixels, The intra prediction modes of 8x8 pixels and 16x16 pixels respectively determine the best mode. That is, as described above, in the case of the intra 4x4 prediction mode and the intra 8 χ8 prediction mode, there are 9 types of prediction modes, and in the case of the intra 16 < 16 prediction mode, the types of prediction modes are 4 kinds. Therefore, the optimal mode determining unit 82 determines the optimal intra 4x4 prediction mode, the optimal intra 8 χ 8 prediction mode, and the optimal intra 16  16 prediction from among the cost function values calculated in step S42. mode. The optimal mode determining unit 82 selects the cost function value calculated based on the step S42 from among the best modes determined by the intra prediction modes of 4x4 pixels, 8χ8 pixels, and 16x16 pixels in step S44. The best intra prediction is 145451.doc -47- 201105144 ! 2 = 'Second from the 4X4 pixel, 8x8 pixel and 16x16 pixel, the "canvas" selection cost function value is the minimum value of the modulus ^ as the best intra prediction mode. The information of the prediction mode of the hold is supplied to the mode determination unit 9ΐ, the optimum offset::=Μ3, and the predicted image generation unit 84. Further, the prediction image generation unit is also used, and the prediction mode is also used. Corresponding cost function value. 
The middle neighboring pixel interpolation unit 75 and the optimum offset amount determining unit 83 are connected to the interpolation processing in step S45: The detail of the adjacent interpolation processing in step S45 is as follows. In the text, the convergence is described, but by this processing, the optimum offset 1 portion 84 is determined in the offset direction corresponding to the amount and the household prediction mode. The information related to the optimal offset of the divination is supplied to In the predicted image generation step S46, the 'predicted image generation unit uses the phase to The predicted image is generated by the adjacent pixel of the optimum offset 。 offset. (7) The pre-measurement image generating unit 84 sells the adjacent pixel corresponding to the target block to be predicted by the disk from the adjacent pixel buffer unit (1). Then, the predicted image paste is correlated with the prediction mode by a 6th-order FIR chopper and linear interpolation: 'the phase of the adjacent pixel read out is optimally shifted in the phase direction: shift prediction image generation The portion 84 generates a predicted image of the target block by using the adjacent pixel whose phase has been shifted in the prediction mode determined by the optimal mode determining unit 82, and generates the predicted image. The image function value corresponding to the image is supplied to the predicted image selecting unit π. Further, when the optimum offset amount is 〇, the pixel value of the adjacent pixel from the adjacent pixel buffer unit 81 is used. ', 145451.doc -48· 201105144 When the predicted image selection unit 77 has selected the predicted image generated in the optimal intra prediction mode, the predicted image generation unit 84 will represent the optimal intra prediction. Mode information and offset The signal is supplied to the reversible encoding unit 66. 
Then, the reversible encoding unit 66 encodes and adds the header information of the compressed image (step S23 of Fig. 7 described above). The offset of the determined target block is encoded with the difference of the offset in the block given to M〇stPr〇baMeM〇de as described with reference to Fig. 13. For example, in the case of MostProbableMode, mode 2 (DC prediction) When the prediction mode of the target block is the mode Ver (Vertical prediction), there is no offset in the horizontal direction in the block given to the MostProbableMode. Also, since it is an intra macroblock in the inner segment, there is no offset in the horizontal direction in the block given to MostProbableMode. In such a case, the offset amount in the horizontal direction in the block given to M〇stPr〇bableM〇de is set to 〇, and differential encoding processing is performed. [Description of Adjacent Pixel Interpolation Processing] Next, with reference to the flowchart of Fig. 21, the adjacent pixel interpolation processing of step S45 of Fig. 20 will be described. In the example of Fig. 21, the object block is shaped like a crucible. Description. The information of the prediction mode determined by the optimum mode determining unit 82 is supplied to the mode determining unit 91. The mode determination unit 91 determines in step S51 whether or not the optimal intra prediction mode is in the DC mode. In step S51, when it is determined that the optimal intra prediction mode is not the DC mode, the processing proceeds to step §52. In step S52, the mode determination unit 91 determines whether the optimal intra prediction mode is 145451.doc -49 - 201105144 is Vertical Prediction mode, Diagonal_Down_Left Prediction mode, or VerticaI_Left Prediction mode. In step S52, it is determined that the optimal intra prediction mode is VerUcai.

Prediction mode、Diagonal_Down_Left Prediction mode、或Prediction mode, Diagonal_Down_Left Prediction mode, or

Vertical—Left Prediction mode之情形時,處理進入步驟 S53 〇 於步驟S53中,模式判別部91係對水平方向内插部92輸出 控制信號,以使水平方向内插部92進行水平方向之内插。亦 即,水平方向内插部92係根據來自模式判別部91之控制信 號,自鄰接像素緩衝部8丨讀出上部鄰接像素,並藉由6階打尺 濾波器及線性内插,使所讀出之上部鄰接像素之水平方向之 相位偏移。水平方向内插部92係將已進行内插之上部鄰接像 素之資讯供給至最佳偏移量決定部。 於步驟S54中,最佳偏移量決定部83係對藉由最佳模式決 定部82所決定之預測模式,於·〇75至+〇75中,決定上部鄰 接像素之最佳偏移量。另外,該衫中使用需進㈣内預測 之對象區塊之圖像、自鄰接像素緩衝部8 i所讀出之上部鄰接 像素、及已進行内插之上部鄰接像素之資訊。&,此時,使 針對左部鄰接像素之最佳偏移量為〇。所決定之最佳偏移量 之資訊被供給至預測圖像生成部84。 於乂驟S52中’於判定出最佳幀内預測模式並非為Vertica: ction mode ' Diag〇nal_Down_Left Prediction mode > 及In the case of the Vertical-Left Prediction mode, the processing proceeds to a step S53. In the step S53, the mode determining unit 91 outputs a control signal to the horizontal direction interpolating unit 92 to cause the horizontal direction interpolating unit 92 to interpolate in the horizontal direction. In other words, the horizontal direction interpolating unit 92 reads the upper adjacent pixels from the adjacent pixel buffer unit 8 based on the control signal from the mode determining unit 91, and causes the read by the 6-step scale filter and linear interpolation. The phase shift in the horizontal direction of the adjacent pixels in the upper portion. The horizontal direction interpolating unit 92 supplies information on the adjacent pixels that have been interpolated to the optimum offset amount determining unit. In step S54, the optimum offset amount determining unit 83 determines the optimum offset amount of the upper adjacent pixel in the prediction mode determined by the optimum mode decision unit 82 from 〇75 to +〇75. Further, in the shirt, an image of a target block to be subjected to (4) intra prediction, an adjacent pixel read from the adjacent pixel buffer portion 8 i, and information on which the adjacent pixel is interpolated are used. & In this case, the optimum offset for the left adjacent pixel is 〇. The information of the determined optimum offset amount is supplied to the predicted image generating unit 84. 
In step S52, it is determined that the optimal intra prediction mode is not Vertica: ction mode ' Diag〇nal_Down_Left Prediction mode >

VemCal_Left Predlcti〇n则如之情形時處理進入步驟 S55 ° 145451.doc •50· 201105144 於步驟S55中,模式判別部91判定最佳幀内預測模式是否 為 Horizontal Prediction mode 或 Horizontaj_Up Prediction mode。於步驟S55中,於判定出最佳幀内預測模式為In the case of VemCal_Left Predlcti〇n, the processing proceeds to step S55 ° 145451.doc • 50· 201105144 In step S55, the mode determining unit 91 determines whether the optimal intra prediction mode is the Horizontal Prediction mode or the Horizontaj_Up Prediction mode. In step S55, it is determined that the optimal intra prediction mode is

Horizontal Prediction mode or the Horizontal_Up Prediction mode, the processing proceeds to step S56.
In step S56, the mode determination unit 91 outputs a control signal to the vertical interpolation unit 93 so that the vertical interpolation unit 93 performs interpolation in the vertical direction. Specifically, in accordance with the control signal from the mode determination unit 91, the vertical interpolation unit 93 reads the left adjacent pixels from the adjacent pixel buffer unit 81 and shifts the phase of the read left adjacent pixels in the vertical direction by means of a 6-tap FIR filter and linear interpolation. The vertical interpolation unit 93 supplies the information on the interpolated left adjacent pixels to the optimal offset determination unit 83.

In step S57, the optimal offset determination unit 83 determines, for the prediction mode determined by the optimal mode determination unit 82, the optimal offset of the left adjacent pixels within the range of -0.75 to +0.75. This determination uses the image of the target block to be intra predicted, the left adjacent pixels read from the adjacent pixel buffer unit 81, and the information on the interpolated left adjacent pixels. At this time, the optimal offset for the upper adjacent pixels is set to 0. The information on the determined optimal offset is supplied to the predicted image generation unit 84.

When it is determined in step S55 that the optimal intra prediction mode is neither the Horizontal Prediction mode nor the Horizontal_Up Prediction mode, the processing proceeds to step S58. In step S58, the mode determination unit 91 outputs a control signal to the horizontal interpolation unit 92 so that it performs interpolation in the horizontal direction, and also outputs a control signal to the vertical interpolation unit 93 so that it performs interpolation in the vertical direction.
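Taken together, steps S51 through S58 reduce to a small dispatch from the optimal intra prediction mode to the interpolation direction(s). The sketch below summarizes that dispatch; the mode numbering follows the H.264/AVC intra 4x4 convention referred to later in this description, and the code itself is an illustration, not part of the patent text.

```python
# Which neighboring pixels get their phase shifted, per optimal intra
# prediction mode (steps S51 to S58). Mode numbers: 0 Vertical,
# 1 Horizontal, 2 DC, 3 Diagonal_Down_Left, 7 Vertical_Left,
# 8 Horizontal_Up.
MODE_TO_DIRECTIONS = {
    2: (),                # DC: no interpolation (step S51)
    0: ("horizontal",),   # Vertical           -> shift upper pixels (S53)
    3: ("horizontal",),   # Diagonal_Down_Left -> shift upper pixels (S53)
    7: ("horizontal",),   # Vertical_Left      -> shift upper pixels (S53)
    1: ("vertical",),     # Horizontal         -> shift left pixels  (S56)
    8: ("vertical",),     # Horizontal_Up      -> shift left pixels  (S56)
}

def interpolation_directions(mode):
    # All remaining directional modes shift both the upper and the
    # left adjacent pixels (step S58).
    return MODE_TO_DIRECTIONS.get(mode, ("horizontal", "vertical"))
```

The same dispatch is applied per candidate mode in the adjacent pixel interpolation processing described later.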
In step S58, accordingly, the horizontal interpolation unit 92 reads the upper adjacent pixels from the adjacent pixel buffer unit 81 in accordance with the control signal from the mode determination unit 91, shifts their phase in the horizontal direction by means of the 6-tap FIR filter and linear interpolation, and supplies the information on the interpolated upper adjacent pixels to the optimal offset determination unit 83. Likewise, the vertical interpolation unit 93 reads the left adjacent pixels from the adjacent pixel buffer unit 81 in accordance with the control signal from the mode determination unit 91, shifts their phase in the vertical direction by means of the 6-tap FIR filter and linear interpolation, and supplies the information on the interpolated left adjacent pixels to the optimal offset determination unit 83.

In step S59, the optimal offset determination unit 83 determines, for the prediction mode determined by the optimal mode determination unit 82, the optimal offsets of the upper and left adjacent pixels within the range of -0.75 to +0.75. This determination uses the image of the target block to be intra predicted, the upper and left adjacent pixels read from the adjacent pixel buffer unit 81, and the interpolated upper and left adjacent pixels. The information on the determined optimal offsets is supplied to the predicted image generation unit 84.

On the other hand, when it is determined in step S51 that the optimal intra prediction mode is the DC mode, the adjacent pixel interpolation processing ends.
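The fractional phase shift on which steps S53, S56, and S58 rely can be sketched as follows. The tap set (1, -5, 20, 20, -5, 1)/32 is the H.264 half-pel filter and is an assumption here; the description above specifies only "a 6-tap FIR filter and linear interpolation" and fixes neither the taps nor the rounding.

```python
import math

HALF_PEL_TAPS = (1, -5, 20, 20, -5, 1)  # assumed: H.264 half-pel filter

def shift_phase(pixels, offset):
    """Shift a row (or column) of neighboring pixels by a fractional
    `offset` in pixels, e.g. -0.75 ... +0.75 on a quarter-pel grid:
    the 6-tap FIR filter produces the half-pel sample and linear
    interpolation fills in the remaining fraction (a sketch only)."""
    n = len(pixels)
    p = lambda i: pixels[min(max(i, 0), n - 1)]      # clamp at borders

    def half(i):                                     # sample at i + 0.5
        acc = sum(t * p(i - 2 + k) for k, t in enumerate(HALF_PEL_TAPS))
        return min(max((acc + 16) >> 5, 0), 255)

    out = []
    for i in range(n):
        x = i + offset
        i0 = math.floor(x)
        f = x - i0                                   # fraction in [0, 1)
        if f == 0.0:
            out.append(p(i0))
        elif f == 0.5:
            out.append(half(i0))
        elif f < 0.5:                                # between p(i0) and half-pel
            out.append(round((1 - 2 * f) * p(i0) + 2 * f * half(i0)))
        else:                                        # between half-pel and p(i0+1)
            out.append(round((2 - 2 * f) * half(i0) + (2 * f - 1) * p(i0 + 1)))
    return out
```

For a quarter-pel offset this reduces to averaging the integer and half-pel samples, matching the H.264-style interpolation the description leans on.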
In that case, the horizontal interpolation unit 92 and the vertical interpolation unit 93 do not operate, and the optimal offset determination unit 83 determines an offset of 0 as the optimal offset.

[Description of Inter Motion Prediction Processing]

Next, the inter motion prediction processing in step S32 of Fig. 8 is described with reference to the flowchart of Fig. 22.

In step S61, the motion prediction/compensation unit 76 determines a motion vector and a reference image for each of the eight inter prediction modes of 16x16 pixels to 4x4 pixels. That is, a motion vector and a reference image are determined for the block to be processed in each inter prediction mode.

In step S62, the motion prediction/compensation unit 76 performs motion prediction and compensation processing on the reference image for each of the eight inter prediction modes, based on the motion vectors determined in step S61. This processing generates a predicted image in each inter prediction mode.

In step S63, the motion prediction/compensation unit 76 generates, for the motion vectors determined for the eight inter prediction modes, the motion vector information to be added to the compressed image. At this time, the motion vector generation method described with reference to Fig. 5 is used. The generated motion vector information is also used in the calculation of the cost function values in the following step S64, and when the corresponding predicted image is ultimately selected by the predicted image selection unit 77, it is output to the lossless encoding unit 66 together with the prediction mode information and the reference frame information.

In step S64, the motion prediction/compensation unit 76 calculates the cost function value expressed by Expression (34) or Expression (35) for each of the eight inter prediction modes of 16x16 pixels to 4x4 pixels. The cost function values calculated here are used when the optimal inter prediction mode is determined in step S34 of Fig. 8 described above.
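Expressions (34) and (35) themselves are not reproduced in this excerpt. For orientation only, mode-decision costs of the general shape used by H.264 reference encoders are sketched below; the exact forms and the QP-dependent weighting models are assumptions, not the patent's definitions.

```python
def cost_high_complexity(ssd, rate_bits, qp):
    """J = D + lambda_mode * R: distortion (SSD between the original
    and reconstructed block) plus the rate weighted by a QP-dependent
    Lagrange multiplier (the 0.85 * 2^((QP-12)/3) model is the JM
    reference-software choice, assumed here)."""
    lambda_mode = 0.85 * 2.0 ** ((qp - 12) / 3.0)
    return ssd + lambda_mode * rate_bits

def cost_low_complexity(sad, header_bits, qp):
    """Cost = SAD + Q(QP) * HeaderBits: prediction error measured as
    SAD plus a QP-dependent penalty on the mode/motion side
    information (the Q(QP) model below is illustrative)."""
    q = 2.0 ** ((qp - 12) / 6.0)
    return sad + q * header_bits
```

Whichever form the patent's Expressions (34)/(35) take, the role in steps S64 and S34 is the same: each candidate mode gets a scalar cost, and the mode with the smallest cost wins.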
Note that the operation principle of the present invention is not limited to the operations described above with reference to the flowcharts of Figs. 20 and 21. For example, prediction values may be calculated for all the candidate offsets of all the intra prediction modes, and the resulting prediction residuals may be used to determine the optimal intra prediction mode and the optimal offset. A configuration example of the intra prediction unit and the adjacent pixel interpolation unit for this operation is shown in Fig. 23.

[Other Configuration Example of Intra Prediction Unit and Adjacent Pixel Interpolation Unit]

Fig. 23 is a block diagram showing another configuration example of the intra prediction unit and the adjacent pixel interpolation unit.

In the example of Fig. 23, the intra prediction unit 74 includes an adjacent pixel buffer unit 101, an optimal mode/optimal offset determination unit 102, and a predicted image generation unit 103. The adjacent pixel interpolation unit 75 includes a horizontal interpolation unit 111 and a vertical interpolation unit 112.

The adjacent pixel buffer unit 101 stores the adjacent pixels of the target block of intra prediction supplied from the frame memory 72. Although the switch 73 is omitted from Fig. 23 as well, the adjacent pixels are in fact supplied from the frame memory 72 to the adjacent pixel buffer unit 101 via the switch 73.

The pixels of the target block to be intra predicted are input from the screen rearrangement buffer 62 to the optimal mode/optimal offset determination unit 102. The optimal mode/optimal offset determination unit 102 reads, from the adjacent pixel buffer unit 101, the adjacent pixels corresponding to the target block to be intra predicted.

The optimal mode/optimal offset determination unit 102 supplies the information on the candidate intra prediction modes (hereinafter referred to as candidate modes) to the horizontal interpolation unit 111 and the vertical interpolation unit 112.
The information on the adjacent pixels interpolated in accordance with the candidate modes is input from the horizontal interpolation unit 111 and the vertical interpolation unit 112 to the optimal mode/optimal offset determination unit 102.

Using the pixels of the target block to be intra predicted, the corresponding adjacent pixels, and the pixel values of the interpolated adjacent pixels, the optimal mode/optimal offset determination unit 102 performs intra prediction for all the candidate modes and all the candidate offsets and generates predicted images. It then calculates cost function values, prediction residuals, or the like and determines the optimal intra prediction mode and the optimal offset out of all the candidate modes and all the offsets. The information on the determined prediction mode and offset is supplied to the predicted image generation unit 103, together with the corresponding cost function value.

The predicted image generation unit 103 reads, from the adjacent pixel buffer unit 101, the adjacent pixels corresponding to the target block to be intra predicted, and shifts the phase of the read adjacent pixels by the optimal offset, in the phase direction corresponding to the prediction mode, by means of a 6-tap FIR filter and linear interpolation.

Using the phase-shifted adjacent pixels, the predicted image generation unit 103 performs intra prediction in the optimal intra prediction mode determined by the optimal mode/optimal offset determination unit 102 and generates the predicted image of the target block. The predicted image generation unit 103 outputs the generated predicted image and the corresponding cost function value to the predicted image selection unit 77.
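The exhaustive evaluation performed by the optimal mode/optimal offset determination unit 102 can be sketched as a plain double loop. Here `predict` and `cost` stand in for the intra prediction and for the Expression (34)/(35) evaluation; every name in this sketch is illustrative rather than taken from the patent.

```python
def best_mode_and_offset(block, modes, offsets, predict, cost):
    """Try every candidate intra prediction mode with every candidate
    phase offset and keep the (mode, offset) pair whose predicted
    image has the smallest cost-function value."""
    best = (None, None, float("inf"))
    for mode in modes:
        for offset in offsets:
            j = cost(block, predict(mode, offset))
            if j < best[2]:
                best = (mode, offset, j)
    return best

# Toy check: "prediction" collapsed to the scalar mode + offset, cost
# is the SAD against a flat block of 5s, offsets on a quarter-pel grid.
mode, offset, j = best_mode_and_offset(
    block=[5, 5, 5, 5],
    modes=[0, 1, 2],
    offsets=[k * 0.25 for k in range(-3, 4)],
    predict=lambda m, o: m + o,
    cost=lambda b, p: sum(abs(x - p) for x in b))
# mode == 2, offset == 0.75: the pair whose "prediction" 2.75 is closest to 5
```

The search space is small: a handful of modes per block size times seven candidate offsets between -0.75 and +0.75, which is why the patent can afford to evaluate every combination.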
When the predicted image generated in the optimal intra prediction mode is selected by the predicted image selection unit 77, the predicted image generation unit 103 supplies the information indicating the optimal intra prediction mode and the information on the offset to the lossless encoding unit 66.

In accordance with the candidate modes from the optimal mode/optimal offset determination unit 102, the horizontal interpolation unit 111 and the vertical interpolation unit 112 each read the adjacent pixels from the adjacent pixel buffer unit 101 and shift their phases in the horizontal direction and the vertical direction, respectively, by means of a 6-tap FIR filter and linear interpolation.

[Other Description of Intra Prediction Processing]

Next, the intra prediction processing performed by the intra prediction unit 74 and the adjacent pixel interpolation unit 75 of Fig. 23 is described with reference to the flowchart of Fig. 24. This intra prediction processing is another example of the intra prediction processing in step S31 of Fig. 8.

The optimal mode/optimal offset determination unit 102 supplies the information on the candidate intra prediction modes to the horizontal interpolation unit 111 and the vertical interpolation unit 112.

In step S101, the horizontal interpolation unit 111 and the vertical interpolation unit 112 execute the adjacent pixel interpolation processing for all the candidate intra prediction modes. That is, in step S101, the adjacent pixel interpolation processing is executed for each of the 4x4, 8x8, and 16x16 intra prediction modes.

The details of the adjacent pixel interpolation processing in step S101 are described below with reference to Fig. 25; by this processing, the information on the adjacent pixels interpolated in the shift direction corresponding to each intra prediction mode is supplied to the optimal mode/optimal offset determination unit 102.
In step S102, the optimal mode/optimal offset determination unit 102 performs intra prediction for each of the 4x4, 8x8, and 16x16 intra prediction modes and for each offset. That is, using the pixels of the target block to be intra predicted, the corresponding adjacent pixels, and the pixel values of the interpolated adjacent pixels, the optimal mode/optimal offset determination unit 102 performs intra prediction for all the candidate intra prediction modes and all the candidate offsets. As a result, predicted images are generated for all the intra prediction modes and all the candidate offsets.

In step S103, the optimal mode/optimal offset determination unit 102 calculates the cost function value of Expression (34) or Expression (35) for each of the 4x4, 8x8, and 16x16 intra prediction modes and each offset for which a predicted image has been generated.

In step S104, the optimal mode/optimal offset determination unit 102 compares the calculated cost function values and thereby determines the optimal mode and the optimal offset for each of the 4x4, 8x8, and 16x16 intra prediction modes.

In step S105, out of the optimal modes and optimal offsets determined in step S104, the optimal mode/optimal offset determination unit 102 selects the optimal intra prediction mode and the optimal offset based on the cost function values calculated in step S103. That is, the optimal intra prediction mode and the optimal offset are selected from among the optimal modes and optimal offsets determined for the 4x4, 8x8, and 16x16 intra prediction modes. The information on the selected prediction mode and offset is supplied to the predicted image generation unit 103, together with the corresponding cost function value.

In step S106, the predicted image generation unit 103 generates a predicted image using the adjacent pixels whose phases have been shifted by the optimal offset.
That is, the predicted image generation unit 103 reads, from the adjacent pixel buffer unit 101, the adjacent pixels corresponding to the target block to be intra predicted. Then, by means of the 6-tap FIR filter and linear interpolation, the predicted image generation unit 103 shifts the phase of the read adjacent pixels by the optimal offset in the phase direction corresponding to the determined prediction mode.

Using the phase-shifted adjacent pixels, the predicted image generation unit 103 performs intra prediction in the prediction mode determined by the optimal mode/optimal offset determination unit 102 and generates the predicted image of the target block. The generated predicted image is supplied to the predicted image selection unit 77, together with the corresponding cost function value.

[Description of Adjacent Pixel Interpolation Processing]

Next, the adjacent pixel interpolation processing in step S101 of Fig. 24 is described with reference to the flowchart of Fig. 25. This adjacent pixel interpolation processing is performed for each candidate intra prediction mode. Since steps S111 to S116 of Fig. 25 perform the same processing as steps S51 to S53, S55, S56, and S58 of Fig. 21, their detailed description is omitted as appropriate.

The information on the candidate intra prediction mode is supplied from the optimal mode/optimal offset determination unit 102 to the horizontal interpolation unit 111 and the vertical interpolation unit 112. In step S111, the horizontal interpolation unit 111 and the vertical interpolation unit 112 determine whether the candidate intra prediction mode is the DC mode. When it is determined in step S111 that the candidate intra prediction mode is not the DC mode, the processing proceeds to step S112.
In step S112, the horizontal interpolation unit 111 and the vertical interpolation unit 112 determine whether the candidate intra prediction mode is the Vertical Prediction mode, the Diagonal_Down_Left Prediction mode, or the Vertical_Left Prediction mode. When it is determined in step S112 that the candidate intra prediction mode is the Vertical Prediction mode, the Diagonal_Down_Left Prediction mode, or the Vertical_Left Prediction mode, the processing proceeds to step S113.

In step S113, the horizontal interpolation unit 111 performs interpolation in the horizontal direction in accordance with the candidate intra prediction mode. The horizontal interpolation unit 111 supplies the information on the interpolated upper adjacent pixels to the optimal mode/optimal offset determination unit 102. At this time, the vertical interpolation unit 112 does not perform interpolation in the vertical direction.

When it is determined in step S112 that the candidate intra prediction mode is not the
Vertical Prediction mode, the Diagonal_Down_Left Prediction mode, or the Vertical_Left Prediction mode, the processing proceeds to step S114.

In step S114, the horizontal interpolation unit 111 and the vertical interpolation unit 112 determine whether the candidate intra prediction mode is the Horizontal Prediction mode or the Horizontal_Up Prediction mode. When it is determined in step S114 that the candidate intra prediction mode is the Horizontal Prediction mode or the Horizontal_Up Prediction mode, the processing proceeds to step S115.

In step S115, the vertical interpolation unit 112 performs interpolation in the vertical direction in accordance with the candidate intra prediction mode. The vertical interpolation unit 112 supplies the information on the interpolated left adjacent pixels to the optimal mode/optimal offset determination unit 102. At this time, the horizontal interpolation unit 111 does not perform interpolation in the horizontal direction.

When it is determined in step S114 that the candidate intra prediction mode is neither the Horizontal Prediction mode nor the Horizontal_Up Prediction mode, the processing proceeds to step S116. In step S116, the horizontal interpolation unit 111 and the vertical interpolation unit 112 perform interpolation in the horizontal direction and in the vertical direction, respectively, in accordance with the candidate intra prediction mode. The horizontal interpolation unit 111 and the vertical interpolation unit 112 supply the information on the interpolated upper adjacent pixels and on the interpolated left adjacent pixels, respectively, to the optimal mode/optimal offset determination unit 102.

The encoded compressed image is transmitted via a predetermined transmission path and decoded by an image decoding device.

[Configuration Example of Image Decoding Device]

Fig. 26 shows the configuration of an embodiment of an image decoding device as an image processing device to which the present invention is applied.

The image decoding device 151 includes a storage buffer 161, a lossless decoding unit 162, an inverse quantization unit 163, an inverse orthogonal transform unit 164, an arithmetic unit 165, a deblocking filter 166, a screen rearrangement buffer 167, a D/A (digital/analog) conversion unit 168, a frame memory 169, a switch 170, an intra prediction unit 171, an adjacent pixel interpolation unit 172, a motion prediction/compensation unit 173, and a switch 174.

The storage buffer 161 stores the transmitted compressed image. The lossless decoding unit 162 decodes the information supplied from the storage buffer 161 and encoded by the lossless encoding unit 66 of Fig. 2, by a scheme corresponding to the encoding scheme of the lossless encoding unit 66. The inverse quantization unit 163 inversely quantizes the image decoded by the lossless decoding unit 162, by a scheme corresponding to the quantization scheme of the quantization unit 65 of Fig. 2. The inverse orthogonal transform unit 164 applies an inverse orthogonal transform to the output of the inverse quantization unit 163, by a scheme corresponding to the orthogonal transform scheme of the orthogonal transform unit 64 of Fig. 2.

The inversely orthogonally transformed output is decoded by being added, in the arithmetic unit 165, to the predicted image supplied from the switch 174. The deblocking filter 166 removes block distortion from the decoded image, then supplies the result to the frame memory 169 for storage and also outputs it to the screen rearrangement buffer 167.

The screen rearrangement buffer 167 rearranges the images. That is, the frames rearranged into encoding order by the screen rearrangement buffer 62 of Fig. 2 are rearranged into the original display order. The D/A conversion unit 168 performs D/A conversion on the images supplied from the screen rearrangement buffer 167 and outputs them for display on a display not shown in the figure.

The switch 170 reads, from the frame memory 169, the image to be inter processed and the images to be referred to, and outputs them to the motion prediction/compensation unit 173; it also reads, from the frame memory 169, the images used for intra prediction and supplies them to the intra prediction unit 171.

The information obtained by decoding the header information, namely the information indicating the optimal intra prediction mode and the information on the offsets of the adjacent pixels, is supplied from the lossless decoding unit 162 to the intra prediction unit 171. The intra prediction unit 171 also supplies this information to the adjacent pixel interpolation unit 172.

Based on this information, the intra prediction unit 171 causes the adjacent pixel interpolation unit 172 to shift the phase of the adjacent pixels as necessary, generates a predicted image using the adjacent pixels or the phase-shifted adjacent pixels, and outputs the generated predicted image to the switch 174.

The adjacent pixel interpolation unit 172 shifts the phase of the adjacent pixels, in the shift direction corresponding to the intra prediction mode supplied from the intra prediction unit 171, by the offset supplied from the intra prediction unit 171. In practice, the adjacent pixel interpolation unit 172 filters the adjacent pixels with a 6-tap FIR filter in the shift direction corresponding to the intra prediction mode and performs linear interpolation, thereby shifting the phase of the adjacent pixels to fractional pixel precision. The adjacent pixel interpolation unit 172 supplies the phase-shifted adjacent pixels to the intra prediction unit 171.

The information obtained by decoding the header information (prediction mode information, motion vector information, and reference frame information) is supplied from the lossless decoding unit 162 to the motion prediction/compensation unit 173. When the information indicating the inter prediction mode is supplied, the motion prediction/compensation unit 173 performs motion prediction and compensation processing on the image based on the motion vector information and the reference frame information, and generates a predicted image. The motion prediction/compensation unit 173 outputs the predicted image generated in the inter prediction mode to the switch 174.

The switch 174 selects the predicted image generated by the motion prediction/compensation unit 173 or by the intra prediction unit 171 and supplies it to the arithmetic unit 165.

[Configuration Example of Intra Prediction Unit and Adjacent Pixel Interpolation Unit]

Fig. 27 is a block diagram showing a detailed configuration example of the intra prediction unit and the adjacent pixel interpolation unit.

In the example of Fig. 27, the intra prediction unit 171 includes a prediction mode receiving unit 181, an offset receiving unit 182, and an intra predicted image generation unit 183. The adjacent pixel interpolation unit 172 includes a horizontal interpolation unit 191 and a vertical interpolation unit 192.

The prediction mode receiving unit 181 receives the intra prediction mode information decoded by the lossless decoding unit 162 and supplies the received intra prediction mode information to the intra predicted image generation unit 183, the horizontal interpolation unit 191, and the vertical interpolation unit 192.

The offset receiving unit 182 receives the information on the offsets (in the horizontal and vertical directions) decoded by the lossless decoding unit 162. Of the received offsets, the offset receiving unit 182 supplies the horizontal offset to the horizontal interpolation unit 191 and the vertical offset to the vertical interpolation unit 192.

The information on the intra prediction mode received by the prediction mode receiving unit 181 is input to the intra predicted image generation unit 183. Furthermore, the information on the upper adjacent pixels or the interpolated upper adjacent pixels is input to the intra predicted image generation unit 183 from the horizontal interpolation unit 191, and the information on the left adjacent pixels or the interpolated left adjacent pixels is input from the vertical interpolation unit 192.

The intra predicted image generation unit 183 performs intra prediction in the prediction mode indicated by the input intra prediction mode information, using the pixel values of the adjacent pixels or of the interpolated adjacent pixels, generates a predicted image, and outputs the generated predicted image to the switch 174.

The horizontal interpolation unit 191 reads the upper adjacent pixels from the frame memory 169 in accordance with the prediction mode from the prediction mode receiving unit 181, and shifts the phase of the read upper adjacent pixels by the horizontal offset from the offset receiving unit 182, by means of a 6-tap FIR filter and linear interpolation. The information on the interpolated upper adjacent pixels, or on the non-interpolated upper adjacent pixels (that is, the adjacent pixels from the frame memory 169), is supplied to the intra predicted image generation unit 183. Although the switch 170 is omitted from Fig. 27, the adjacent pixels are read from the frame memory 169 via the switch 170.

The vertical interpolation unit 192 reads the left adjacent pixels from the frame memory 169 in accordance with the prediction mode from the prediction mode receiving unit 181, and shifts the phase of the read left adjacent pixels by the vertical offset from the offset receiving unit 182, by means of a 6-tap FIR filter and linear interpolation. The information on the interpolated left adjacent pixels, or on the non-interpolated left adjacent pixels (that is, the adjacent pixels from the frame memory 169), is supplied to the intra predicted image generation unit 183.

[Description of Decoding Processing of Image Decoding Device]

Next, the decoding processing executed by the image decoding device 151 is described with reference to the flowchart of Fig. 28.

In step S131, the storage buffer 161 stores the transmitted image. In step S132, the lossless decoding unit 162 decodes the compressed image supplied from the storage buffer 161. That is, the I pictures, P pictures, and B pictures encoded by the lossless encoding unit 66 of Fig. 2 are decoded.

At this time, the motion vector information, the reference frame information, the prediction mode information (the information indicating the intra prediction mode or the inter prediction mode), the flag information, the offset information, and the like are also decoded.

That is, when the prediction mode information is intra prediction mode information, the prediction mode information and the offset information are supplied to the intra prediction unit 171. When the prediction mode information is inter prediction mode information, the motion vector information and the reference frame information corresponding to the prediction mode information are supplied to the motion prediction/compensation unit 173.

In step S133, the inverse quantization unit 163 inversely quantizes the transform coefficients decoded by the lossless decoding unit 162, with characteristics corresponding to those of the quantization unit 65 of Fig. 2. In step S134, the inverse orthogonal transform unit 164 applies an inverse orthogonal transform to the transform coefficients inversely quantized by the inverse quantization unit 163, with characteristics corresponding to those of the orthogonal transform unit 64 of Fig. 2. The difference information corresponding to the input of the orthogonal transform unit 64 of Fig. 2 (the output of the arithmetic unit 63) is thereby decoded.

In step S135, the arithmetic unit 165 adds this difference information to the predicted image that is selected in the processing of step S139 described below and input via the switch 174. The original image is thereby decoded. In step S136, the deblocking filter 166 filters the image output from the arithmetic unit 165, whereby block distortion is removed. In step S137, the frame memory 169 stores the filtered image.

In step S138, the intra prediction unit 171 and the motion prediction/compensation unit 173 each perform image prediction processing in accordance with the prediction mode information supplied from the lossless decoding unit 162.

That is, when intra prediction mode information is supplied from the lossless decoding unit 162, the intra prediction unit 171 performs intra prediction processing in that intra prediction mode. At this time, the intra prediction unit 171 performs the intra prediction processing using adjacent pixels whose phases have been shifted, in the shift direction corresponding to the intra prediction mode, by the offset supplied from the lossless decoding unit 162.

The details of the prediction processing in step S138 are described below with reference to Fig. 29; by this processing, the predicted image generated by the intra prediction unit 171 or the predicted image generated by the motion prediction/compensation unit 173 is supplied to the switch 174.

In step S139, the switch 174 selects the predicted image. That is, the predicted image generated by the intra prediction unit 171 or by the motion prediction/compensation unit 173 is supplied, so the supplied predicted image is selected, supplied to the arithmetic unit 165, and, as described above, added to the output of the inverse orthogonal transform unit 164 in step S135.

In step S140, the screen rearrangement buffer 167 performs rearrangement. That is, the frames rearranged into encoding order by the screen rearrangement buffer 62 of the image encoding device 51 are rearranged into the original display order.

In step S141, the D/A conversion unit 168 performs D/A conversion on the image from the screen rearrangement buffer 167. This image is output to a display not shown in the figure, and the image is displayed.

[Description of Prediction Processing]

Next, the prediction processing in step S138 of Fig. 28 is described with reference to the flowchart of Fig. 29.

In step S171, the prediction mode receiving unit 181 determines whether the target block is intra coded. When intra prediction mode information is supplied from the lossless decoding unit 162 to the prediction mode receiving unit 181, the prediction mode receiving unit 181 determines in step S171 that the target block is intra coded, and the processing proceeds to step S172.

In step S172, the prediction mode receiving unit 181 receives and acquires the intra prediction mode information from the lossless decoding unit 162, and supplies the received intra prediction mode information to the intra predicted image generation unit 183, the horizontal interpolation unit 191, and the vertical interpolation unit 192.

In step S173, the offset receiving unit 182 receives and acquires the information on the offsets (in the horizontal and vertical directions) of the adjacent pixels decoded by the lossless decoding unit 162. Of the received offsets, the offset receiving unit 182 supplies the horizontal offset to the horizontal interpolation unit 191 and the vertical offset to the vertical interpolation unit 192.

The horizontal interpolation unit 191 and the vertical interpolation unit 192 read the adjacent pixels from the frame memory 169 and, in step S174, execute the adjacent pixel interpolation processing. The details of the adjacent pixel interpolation processing in step S174 are essentially the same as those of the adjacent pixel interpolation processing described with reference to Fig. 25, so their description and illustration are omitted here.

By this processing, the adjacent pixels interpolated in the shift direction corresponding to the intra prediction mode from the prediction mode receiving unit 181, or the adjacent pixels left non-interpolated in accordance with the intra prediction mode, are supplied to the intra predicted image generation unit 183.

That is, when the intra prediction mode is mode 2 (DC prediction), the horizontal interpolation unit 191 and the vertical interpolation unit 192 do not interpolate the adjacent pixels, and the upper and left adjacent pixels read from the frame memory 169 are supplied to the intra predicted image generation unit 183.

When the intra prediction mode is mode 0 (Vertical prediction), mode 3 (Diagonal_Down_Left prediction), or mode 7 (Vertical_Left prediction), only horizontal interpolation is performed. That is, the horizontal interpolation unit 191 interpolates the upper adjacent pixels read from the frame memory 169 with the horizontal offset from the offset receiving unit 182 and supplies the interpolated upper adjacent pixels to the intra predicted image generation unit 183. At this time, the vertical interpolation unit 192 does not interpolate the left adjacent pixels and supplies the left adjacent pixels read from the frame memory 169 to the intra predicted image generation unit 183.

When the intra prediction mode is mode 1 (Horizontal prediction) or mode 8 (Horizontal_Up prediction), only vertical interpolation is performed. That is, the vertical interpolation unit 192 interpolates the left adjacent pixels read from the frame memory 169 with the vertical offset from the offset receiving unit 182 and supplies the interpolated left adjacent pixels to the intra predicted image generation unit 183. At this time, the horizontal interpolation unit 191 does not interpolate the upper adjacent pixels and supplies the upper adjacent pixels read from the frame memory 169 to the intra predicted image generation unit 183.

When the intra prediction mode is any of the other prediction modes, interpolation is performed in both the horizontal direction and the vertical direction.
The horizontal interpolating portion &quot;i and the vertical interpolating portion ι2 are supplied to the optimal mode/optimal offset determining unit 1〇2, respectively, by interpolating the information of the upper adjacent pixel and the left adjacent pixel. The encoded compressed image is transmitted via a specific transmission path and decoded by the image decoding device. [Configuration Example of Image Decoding Device] Fig. 26 shows a configuration of an embodiment of an image decoding device to which the image processing device of the present invention is applied. The image decoding device 15 1 includes a storage buffer unit 丨6丨, a reversible decoding unit 丨, an inverse quantization unit 163, an inverse orthogonal conversion unit 164, a calculation unit 165, a deblocking filter coffee, a face rearrangement buffer unit 167, and D. /A(digital/anai〇g, digital/analog) conversion. Μ 68, g frame g 忆 丨 69, switch 丨 7 〇, intra prediction unit m, adjacent pixel interpolation unit 172, dynamic prediction, compensation unit 173 and switch 174 〇 storage buffer 161 is stored and compressed image. The reversible decoding unit 162 decodes the information supplied from the storage buffer unit 161 and encoded by the reversible encoding unit 66 of Fig. 2 so as to correspond to the encoding method of the reversible encoding unit 66. The inverse quantization unit 163 inversely quantizes the image decoded by the reversible decoding unit 162 so as to correspond to the quantization method of the quantization unit 65 of Fig. 2 . The inverse orthogonal transform unit 164 performs inverse orthogonal transform on the output of the inverse quantization unit 163 so as to correspond to the orthogonal transform scheme of the orthogonal transform unit 64 of Fig. 2 . The output of the inverse orthogonal conversion is decoded by the arithmetic unit 165 adding the predicted image supplied from the switch 174. 
The deblocking filter 166 removes the block distortion of the decoded image, supplies it to the frame memory 1 and outputs it to the frame rearrangement buffer unit 167. 14545 丨.doc -60- 201105144 The face rearrangement buffer unit 167 performs image rearrangement. That is, the sequence of the frames in the order of encoding is rearranged by the screen rearranging buffer unit 6 of Fig. 2, and the rearrangement is the original display order. The D/A conversion unit 168 performs D/A conversion on the image supplied from the face rearrangement buffer unit, and outputs the image to a display (not shown). The switch 1 70 is the frame memory! 69 reads the image to be processed internally and the referenced image, and outputs it to the dynamic prediction and compensation unit 173, and reads the (four) intra prediction in the frame memory 169. The image used is supplied to the intra-frame prediction section 171. The 'intra-predicting unit 171' is supplied from the reversible decoding unit 162 to extract the header information: the obtained table (4) (10) and the (4) pixel offset intraframe prediction 171 are also used. Information is supplied to the adjacent pixel interpolation unit 172. μ Intra prediction. [U71 is based on such information, if necessary, offsets the phase of the (four) 172 (four) (four) elements in the adjacent pixels by using 'adjacent pixels' or adjacent pixels that have been phase-shifted to generate an image, and the predicted image generated by the silk is output to Switch 174. The adjacent pixel interpolation unit 172 is arranged such that the phase of the adjacent pixel is offset by the amount supplied from the frame prediction unit 171 in the offset direction corresponding to the intra prediction mode supplied from the frame prediction unit π. Offset. 
Actually, the adjacent pixel interpolation unit 172 is in the offset direction corresponding to the (four) prediction mode, and the adjacent pixel is used to perform data interpolation using the 6th-order F-employer, thereby performing linear interpolation, thereby causing the adjacent image to be shifted. To fractional pixel precision. The adjacent pixel interpolating unit 1 72 supplies the adjacent pixels whose phases have been shifted to the intra-predicting unit m. The prediction and compensation unit 173 supplies the information obtained by decoding the header 145451.doc -61 - 201105144 ( (predictive mode reference frame information) from the reversible decoding unit ία. When it is supplied to the _=^ information and shape, the dynamic prediction and compensation unit 173 performs dynamic prediction and compensation processing on the image based on the motion vector information, and generates and predicts the image. The dynamic prediction and compensation unit 173 outputs an interframe image to the switch 174. The prediction switch m generated by the predicted game type selects the predicted image generated by the motion prediction, the compensation unit m or the in-the-down prediction unit 171, and supplies it to the calculation unit 165. [Example of Configuration of Intra Prediction Unit and Adjacent Pixel Interpolation Unit] Fig. 27 is a block diagram showing a detailed configuration example of the intra prediction unit and the adjacent pixel interpolation unit. In the case of the example of FIG. 27, the intra-frame prediction unit m includes a prediction mode receiving unit (8), an offset receiving unit 182, and an intra-prediction image generating unit. The adjacent pixel interpolating unit 172 includes a horizontal direction interpolating unit 191 and The insertion portion 192 is vertically oriented. The prediction mode receiving unit 181 receives the intra prediction mode information that has been decoded by the reversible decoding unit 162. 
The prediction mode receiving unit 181 supplies the received intra prediction mode information to the intra prediction image generation unit 183, the horizontal direction interpolation unit 191, and the vertical direction interpolation unit 192.

The offset receiving unit 182 receives the information on the offset amounts (in the horizontal direction and the vertical direction) decoded by the reversible decoding unit 162. Of the received offset amounts, the offset receiving unit 182 supplies the offset amount in the horizontal direction to the horizontal direction interpolation unit 191, and the offset amount in the vertical direction to the vertical direction interpolation unit 192.

The information on the intra prediction mode received by the prediction mode receiving unit 181 is input to the intra prediction image generation unit 183. The intra prediction image generation unit 183 also receives the upper adjacent pixels, or the upper adjacent pixels that have been interpolated, from the horizontal direction interpolation unit 191, and the left adjacent pixels, or the left adjacent pixels that have been interpolated, from the vertical direction interpolation unit 192.

The intra prediction image generation unit 183 performs intra prediction in the prediction mode indicated by the input intra prediction mode information by using the pixel values of the adjacent pixels or the interpolated adjacent pixels, generates a predicted image, and outputs the generated predicted image to the switch 174.

The horizontal direction interpolation unit 191 reads the upper adjacent pixels from the frame memory 169 in accordance with the prediction mode from the prediction mode receiving unit 181. The horizontal direction interpolation unit 191 shifts the phase of the read upper adjacent pixels by the offset amount in the horizontal direction from the offset receiving unit 182, using the 6-tap FIR filter and linear interpolation.
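The 6-tap FIR filtering plus linear interpolation just described can be sketched as follows. This is a minimal illustration, assuming the H.264/AVC half-sample filter taps (1, -5, 20, 20, -5, 1)/32 and H.264/AVC-style rounding; the patent text does not spell out the exact rounding, and the function and variable names are illustrative, not from the specification.

```python
# Sketch: shift a row of adjacent (neighbor) pixels by a fractional-pel
# phase offset, as the horizontal/vertical interpolation units (191/192) do.
# Assumes the H.264/AVC 6-tap filter (1,-5,20,20,-5,1)/32 for the half-pel
# position and linear averaging for the quarter-pel positions.

def clip255(x):
    return max(0, min(255, x))

def half_pel(pixels, i):
    """Half-sample value between pixels[i] and pixels[i+1]."""
    def p(k):  # replicate edge pixels so the 6-tap window stays in range
        return pixels[max(0, min(len(pixels) - 1, k))]
    acc = (p(i - 2) - 5 * p(i - 1) + 20 * p(i) +
           20 * p(i + 1) - 5 * p(i + 2) + p(i + 3))
    return clip255((acc + 16) >> 5)

def shift_neighbors(pixels, offset_quarter_pels):
    """Return the neighbor row shifted by offset_quarter_pels/4 of a pixel."""
    whole, frac = divmod(offset_quarter_pels, 4)
    out = []
    for i in range(len(pixels)):
        j = i + whole
        a = pixels[max(0, min(len(pixels) - 1, j))]
        if frac == 0:
            out.append(a)
            continue
        h = half_pel(pixels, j)           # value at position j + 1/2
        if frac == 2:
            out.append(h)
        elif frac == 1:                   # quarter-pel: average full and half
            out.append((a + h + 1) >> 1)
        else:                             # three-quarter: average half and next
            b = pixels[max(0, min(len(pixels) - 1, j + 1))]
            out.append((h + b + 1) >> 1)
    return out
```

On a constant row the filter leaves the values unchanged (the taps sum to 32), and on a linear ramp the half-pel output is the exact midpoint, which is the behavior expected of the H.264/AVC interpolator.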
The horizontal direction interpolation unit 191 supplies the interpolated upper adjacent pixels, or the upper adjacent pixels that have not been interpolated (that is, the adjacent pixels from the frame memory 169), to the intra prediction image generation unit 183. In Fig. 27, illustration of the switch 170 is omitted, but the adjacent pixels are actually read from the frame memory 169 via the switch 170.

The vertical direction interpolation unit 192 reads the left adjacent pixels from the frame memory 169 in accordance with the prediction mode from the prediction mode receiving unit 181. The vertical direction interpolation unit 192 shifts the phase of the read left adjacent pixels by the offset amount in the vertical direction from the offset receiving unit 182, using the 6-tap FIR filter and linear interpolation. The vertical direction interpolation unit 192 supplies the linearly interpolated left adjacent pixels, or the left adjacent pixels that have not been interpolated (that is, the adjacent pixels from the frame memory 169), to the intra prediction image generation unit 183.

[Description of Decoding Processing of Image Decoding Device]

Next, the decoding processing performed by the image decoding device 151 will be described with reference to the flowchart of Fig. 28.

In step S131, the storage buffer unit 161 stores the transmitted image. In step S132, the reversible decoding unit 162 decodes the compressed image supplied from the storage buffer unit 161. That is, the I pictures, P pictures, and B pictures encoded by the reversible encoding unit 66 of Fig. 2 are decoded.

At this time, the motion vector information, the reference frame information, the prediction mode information (information indicating the intra prediction mode or the inter prediction mode), the flag information, and the information on the offset amounts are also decoded.
When the prediction mode information is the intra prediction mode information, the prediction mode information and the information on the offset amounts are supplied to the intra prediction unit 171. When the prediction mode information is the inter prediction mode information, the motion vector information and the reference frame information corresponding to the prediction mode information are supplied to the motion prediction and compensation unit 173.

In step S133, the inverse quantization unit 163 inversely quantizes the transform coefficients decoded by the reversible decoding unit 162, with characteristics corresponding to the characteristics of the quantization unit of Fig. 2. In step S134, the inverse orthogonal transform unit 164 performs inverse orthogonal transform on the transform coefficients inversely quantized by the inverse quantization unit 163, with characteristics corresponding to the characteristics of the orthogonal transform unit 64 of Fig. 2. The difference information corresponding to the input of the orthogonal transform unit 64 of Fig. 2 (the output of the arithmetic unit 63) is thereby decoded.

In step S135, the arithmetic unit 165 adds the predicted image, which is selected in the processing of step S139 described later and input via the switch 174, to the difference information. The original image is thereby decoded. In step S136, the deblocking filter 166 filters the image output from the arithmetic unit 165, whereby block distortion is removed. In step S137, the frame memory 169 stores the filtered image.

In step S138, the intra prediction unit 171 or the motion prediction and compensation unit 173 performs prediction processing on the image in accordance with the prediction mode information supplied from the reversible decoding unit 162.
That is, when the intra prediction mode information is supplied from the reversible decoding unit 162, the intra prediction unit 171 performs intra prediction processing in the intra prediction mode. At this time, the intra prediction unit 171 performs the intra prediction processing by using adjacent pixels whose phase has been shifted, in the offset direction corresponding to the intra prediction mode, by the offset amount supplied from the reversible decoding unit 162.

The details of the prediction processing of step S138 will be described later with reference to Fig. 29; by this processing, the predicted image generated by the intra prediction unit 171 or the predicted image generated by the motion prediction and compensation unit 173 is supplied to the switch 174.

In step S139, the switch 174 selects the predicted image. That is, the predicted image generated by the intra prediction unit 171 or the predicted image generated by the motion prediction and compensation unit 173 is supplied; the supplied predicted image is selected and supplied to the arithmetic unit 165, where, as described above, it is added to the output of the inverse orthogonal transform unit 164 in step S135.

In step S140, the screen rearrangement buffer unit 167 performs rearrangement. That is, the order of frames rearranged for encoding by the screen rearrangement buffer unit 62 of the image encoding device 51 is rearranged to the original display order.

In step S141, the D/A conversion unit 168 performs D/A conversion on the image from the screen rearrangement buffer unit 167, outputs it to a display (not shown), and the image is displayed.

[Description of Prediction Processing]

Next, the prediction processing of step S138 of Fig. 28 will be described with reference to the flowchart of Fig. 29.

In step S171, the prediction mode receiving unit 181 determines whether or not the target block has been intra encoded.
When the intra prediction mode information is supplied from the reversible decoding unit 162 to the prediction mode receiving unit 181, the prediction mode receiving unit 181 determines in step S171 that the target block has been intra encoded, and the processing proceeds to step S172.

In step S172, the prediction mode receiving unit 181 receives and acquires the intra prediction mode information from the reversible decoding unit 162. The prediction mode receiving unit 181 supplies the received intra prediction mode information to the intra prediction image generation unit 183, the horizontal direction interpolation unit 191, and the vertical direction interpolation unit 192.

In step S173, the offset receiving unit 182 receives and acquires the information on the offset amounts (in the horizontal direction and the vertical direction) of the adjacent pixels decoded by the reversible decoding unit 162. Of the received offset amounts, the offset receiving unit 182 supplies the offset amount in the horizontal direction to the horizontal direction interpolation unit 191, and the offset amount in the vertical direction to the vertical direction interpolation unit 192.

The horizontal direction interpolation unit 191 and the vertical direction interpolation unit 192 read the adjacent pixels from the frame memory 169, and in step S174 adjacent pixel interpolation processing is executed. The details of the adjacent pixel interpolation processing of step S174 are substantially the same as those of the adjacent pixel interpolation processing described with reference to Fig. 25, so its description and illustration are omitted.

By this processing, adjacent pixels interpolated in the offset direction corresponding to the intra prediction mode from the prediction mode receiving unit 181, or adjacent pixels that are not interpolated, depending on the intra prediction mode, are supplied to the intra prediction image generation unit 183.
That is, when the intra prediction mode is mode 2 (DC prediction), the horizontal direction interpolation unit 191 and the vertical direction interpolation unit 192 do not interpolate the adjacent pixels; the upper and left adjacent pixels read from the frame memory 169 are supplied to the intra prediction image generation unit 183.

When the intra prediction mode is mode 0 (Vertical prediction), mode 3 (Diagonal Down-Left prediction), or mode 7 (Vertical-Left prediction), only interpolation in the horizontal direction is performed. That is, the horizontal direction interpolation unit 191 interpolates the upper adjacent pixels read from the frame memory 169 with the offset amount in the horizontal direction from the offset receiving unit 182, and supplies the interpolated adjacent pixels to the intra prediction image generation unit 183. At this time, the vertical direction interpolation unit 192 does not interpolate the left adjacent pixels, and supplies the left adjacent pixels read from the frame memory 169 to the intra prediction image generation unit 183.

When the intra prediction mode is mode 1 (Horizontal prediction) or mode 8 (Horizontal-Up prediction), only interpolation in the vertical direction is performed. That is, the vertical direction interpolation unit 192 interpolates the left adjacent pixels read from the frame memory 169 with the offset amount in the vertical direction from the offset receiving unit 182, and supplies the interpolated left adjacent pixels to the intra prediction image generation unit 183. At this time, the horizontal direction interpolation unit 191 does not interpolate the upper adjacent pixels, and supplies the upper adjacent pixels read from the frame memory 169 to the intra prediction image generation unit 183.

When the intra prediction mode is one of the other prediction modes, interpolation in both the horizontal direction and the vertical direction is performed.

In step S175, the intra prediction image generation unit 183 performs intra prediction in the prediction mode indicated by the input intra prediction mode information, using the pixel values of the adjacent pixels, or of the interpolated adjacent pixels, from the horizontal direction interpolation unit 191 and the vertical direction interpolation unit 192. A predicted image is generated by this intra prediction, and the generated predicted image is output to the switch 174.

On the other hand, when it is determined in step S171 that intra encoding has not been performed, the processing proceeds to step S176.

When the image to be processed is an image to be inter processed, the inter prediction mode information, the reference frame information, and the motion vector information are supplied from the reversible decoding unit 162 to the motion prediction and compensation unit 173. In step S176, the motion prediction and compensation unit 173 acquires the inter prediction mode information, the reference frame information, the motion vector information, and so on from the reversible decoding unit 162.

Then, in step S177, the motion prediction and compensation unit 173 performs inter motion prediction. That is, when the image to be processed is an image to be subjected to inter prediction processing, the required image is read from the frame memory 169 and supplied to the motion prediction and compensation unit 173 via the switch 170.
In step S177, the motion prediction and compensation unit 173 generates a predicted image by motion prediction in the inter prediction mode, based on the motion vector obtained in step S176. The generated predicted image is output to the switch 174.

As described above, in the image encoding device 51, pixels with fractional pixel precision are obtained by the 6-tap FIR filter and linear interpolation, and the optimum offset amount is determined, so the choices of pixel values used for intra prediction can be increased. Optimum intra prediction thereby becomes possible, and the encoding efficiency of intra prediction can be further improved.

Moreover, the 6-tap FIR filter circuit, which in the H.264/AVC scheme could be used only for the inter motion prediction and compensation described earlier, is effectively utilized for intra prediction as well. The encoding efficiency can thus be improved without enlarging the circuit.

Furthermore, intra prediction can be performed with a resolution finer than 22.5°, which is the resolution of intra prediction defined in the H.264/AVC scheme.

In addition, in the image encoding device 51, unlike the proposal disclosed in Non-Patent Document 2, only pixels that are adjacent, in the specific positional relationship, to the target block used in H.264/AVC intra prediction are used for the intra prediction. That is, the pixels read from the adjacent pixel buffer unit 81 are only the adjacent pixels.

It is therefore possible to avoid the increase in the number of memory accesses and in processing, that is, the decrease in processing efficiency, caused in the proposal of Non-Patent Document 2 by the fact that pixels other than the adjacent pixels of the block to be encoded are also used for prediction.

In the above description, the case of the intra 4x4 prediction mode for the luminance signal was taken as the example of the adjacent pixel interpolation processing, but the present invention is also applicable to the intra 8x8 and intra 16x16 prediction modes. The present invention is also applicable to the intra prediction modes for color difference signals.

In the case of the intra 8x8 prediction mode, as in the intra 4x4 prediction mode, averaging is performed for mode 2 (DC prediction mode). Performing an offset therefore does not directly contribute to improving the encoding efficiency, so the above operation is prohibited and not performed.

For mode 0 (Vertical prediction mode), mode 3 (Diagonal Down-Left prediction mode), or mode 7 (Vertical-Left prediction mode), only the offset of the upper adjacent pixels A0, A1, A2, ... in Fig. 18 becomes a candidate.

For mode 1 (Horizontal prediction mode) or mode 8 (Horizontal-Up prediction mode), only the offset of the left adjacent pixels I0, I1, I2, ... in Fig. 18 becomes a candidate.

For the other modes (modes 4 to 6), the offsets of both the upper adjacent pixels and the left adjacent pixels must be considered.

In the intra 16x16 prediction mode and the intra prediction modes for color difference signals, only the horizontal offset of the upper adjacent pixels is performed for the Vertical prediction mode, and only the vertical offset of the left adjacent pixels is performed for the Horizontal prediction mode. No offset processing is performed for the DC prediction mode. For the Plane prediction mode, both the horizontal offset of the upper adjacent pixels and the vertical offset of the left adjacent pixels are performed.
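The mode-dependent restrictions described above can be condensed into a small lookup. The sketch below covers the intra 4x4/8x8 luminance mode numbers cited in the text; the function name and the tuple encoding are illustrative, not from the patent.

```python
# Sketch of the offset-direction restrictions for the intra 4x4/8x8
# luminance prediction modes: vertical-type modes shift only the upper
# neighbors horizontally, horizontal-type modes shift only the left
# neighbors vertically, DC prediction allows no shift, and the remaining
# modes (4 to 6) allow offsets in both directions.

def allowed_offset_directions(mode):
    """Return (shift_upper_horizontally, shift_left_vertically)."""
    if mode == 2:              # DC prediction: averaging, offset prohibited
        return (False, False)
    if mode in (0, 3, 7):      # Vertical, Diagonal Down-Left, Vertical-Left
        return (True, False)
    if mode in (1, 8):         # Horizontal, Horizontal-Up
        return (False, True)
    return (True, True)        # modes 4-6: both directions are candidates
```

The same shape of table would apply to the 16x16 and chroma modes, with Plane prediction taking the both-directions case.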
Further, when interpolation processing with 1/8-pixel precision is performed in motion prediction, as disclosed in Non-Patent Document 1, interpolation processing with 1/8-pixel precision is also performed in the present invention.

In the above description, the H.264/AVC scheme is used as the encoding scheme, but the present invention is not limited thereto and is applicable to other encoding/decoding schemes that perform intra prediction using adjacent pixels.

The present invention is applicable, for example, to image encoding devices and image decoding devices used when image information (bit streams) compressed by orthogonal transform such as discrete cosine transform and by motion compensation, as in MPEG or H.26x, is received via network media such as satellite broadcasting, cable television, the Internet, or mobile telephones. The present invention is also applicable to image encoding devices and image decoding devices used when such information is processed on storage media such as optical disks, magnetic disks, and flash memory. Furthermore, the present invention is also applicable to the motion prediction and compensation devices included in those image encoding devices and image decoding devices.

The series of processing described above can be executed by hardware, or can be executed by software. When the series of processing is executed by software, the program constituting the software is installed in a computer. Here, the computer includes a computer built into dedicated hardware, and a general-purpose personal computer capable of executing various functions by installing various programs.

Fig. 30 is a block diagram showing a configuration example of the hardware of a computer that executes the series of processing described above by a program.

In the computer, a CPU (Central Processing Unit) 301, a ROM (Read Only Memory) 302, and a RAM (Random Access Memory) 303 are connected to one another by a bus 304.
An input/output interface 305 is further connected to the bus 304. An input unit 306, an output unit 307, a storage unit 308, a communication unit 309, and a drive 310 are connected to the input/output interface 305.

The input unit 306 includes a keyboard, a mouse, a microphone, and the like. The output unit 307 includes a display, a speaker, and the like. The storage unit 308 includes a hard disk, a non-volatile memory, and the like. The communication unit 309 includes a network interface and the like. The drive 310 drives a removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.

In the computer configured as described above, the CPU 301 loads, for example, the program stored in the storage unit 308 into the RAM 303 via the input/output interface 305 and the bus 304 and executes it, whereby the series of processing described above is performed.

The program executed by the computer (CPU 301) can be provided by being recorded on the removable medium 311 as packaged media or the like, for example. The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting.

In the computer, the program can be installed in the storage unit 308 via the input/output interface 305 by loading the removable medium 311 into the drive 310. The program can also be received by the communication unit 309 via a wired or wireless transmission medium and installed in the storage unit 308. In addition, the program can be installed in advance in the ROM 302 or the storage unit 308.

The program executed by the computer may be a program that is processed in time series in the order described in this specification, or may be a program that is processed in parallel or at required timing, such as when a call is made.
The embodiments of the present invention are not limited to the embodiments described above, and various modifications can be made without departing from the gist of the present invention.

[Brief Description of the Drawings]

Fig. 1 is a diagram explaining the directions of 4x4-pixel intra prediction.
Fig. 2 is a block diagram showing the configuration of an embodiment of an image encoding device to which the present invention is applied.
Fig. 3 is a diagram explaining motion prediction and compensation processing with 1/4-pixel precision.
Fig. 4 is a diagram explaining a motion prediction and compensation scheme with multiple reference frames.
Fig. 5 is a diagram explaining an example of a method of generating motion vector information.
Fig. 6 is a block diagram showing a configuration example of the intra prediction unit and the adjacent pixel interpolation unit.
Fig. 7 is a flowchart explaining the encoding processing of the image encoding device of Fig. 2.
Fig. 8 is a flowchart explaining the prediction processing of step S21 of Fig. 7.
Fig. 9 is a diagram explaining the processing order in the case of the 16x16-pixel intra prediction mode.
Fig. 10 is a diagram showing types of 4x4-pixel intra prediction modes for the luminance signal.
Fig. 11 is a diagram showing types of 4x4-pixel intra prediction modes for the luminance signal.
Fig. 12 is a diagram explaining the directions of 4x4-pixel intra prediction.
Fig. 13 is a diagram explaining 4x4-pixel intra prediction.
Fig. 14 is a diagram explaining encoding of the 4x4-pixel intra prediction modes for the luminance signal.
Fig. 15 is a diagram showing types of 16x16-pixel intra prediction modes for the luminance signal.
Fig. 16 is a diagram showing types of 16x16-pixel intra prediction modes for the luminance signal.
Fig. 17 is a diagram explaining 16x16-pixel intra prediction.
Fig. 18 is a diagram explaining an operation for realizing intra prediction with fractional pixel precision.
Fig. 19 is a diagram explaining an example of the effect of intra prediction with fractional pixel precision.
Fig. 20 is a flowchart explaining the intra prediction processing of step S31 of Fig. 8.
Fig. 21 is a flowchart explaining the adjacent pixel interpolation processing of step S45 of Fig. 20.
Fig. 22 is a flowchart explaining the inter motion prediction processing of step S32 of Fig. 8.
Fig. 23 is a block diagram showing another configuration example of the intra prediction unit and the adjacent pixel interpolation unit.
Fig. 24 is a flowchart explaining another example of the intra prediction processing of step S31 of Fig. 8.
Fig. 25 is a flowchart explaining the adjacent pixel interpolation processing of step S101 of Fig. 24.
Fig. 26 is a block diagram showing the configuration of an embodiment of an image decoding device to which the present invention is applied.
Fig. 27 is a block diagram showing another configuration example of the intra prediction unit and the adjacent pixel interpolation unit.
Fig. 28 is a flowchart explaining the decoding processing of the image decoding device of Fig. 26.
Fig. 29 is a flowchart explaining the prediction processing of step S138 of Fig. 28.
Fig. 30 is a block diagram showing a configuration example of the hardware of a computer.
[Description of Main Component Symbols]

51: image encoding device
66: reversible encoding unit
74: intra prediction unit
75: adjacent pixel interpolation unit
76: motion prediction and compensation unit
77: predicted image selection unit
81: adjacent pixel buffer unit
82: optimum mode determination unit
83: optimum offset amount determination unit
84: predicted image generation unit
91: mode determination unit
92: horizontal direction interpolation unit
93: vertical direction interpolation unit
151: image decoding device
162: reversible decoding unit
171: intra prediction unit
172: adjacent pixel interpolation unit
173: motion prediction and compensation unit
174: switch
181: prediction mode receiving unit
182: offset receiving unit
183: intra prediction image generation unit
191: horizontal direction interpolation unit
192: vertical direction interpolation unit
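Before turning to the claims, the decoder-side behavior described above — a received mode and offset selecting which neighbor row is interpolated before the predicted block is generated — can be tied together in a brief sketch. All names are illustrative, and a whole-pel shift with edge replication stands in for the fractional-precision FIR interpolation; only the Vertical, Horizontal, and DC cases are shown.

```python
# Illustrative end-to-end sketch of decoder-side intra prediction with
# phase-shifted neighbors: the received mode selects which neighbor row is
# shifted (cf. interpolation units 191/192), then the predicted block is
# generated (cf. intra prediction image generation unit 183).

def shift_row(row, offset):
    # Placeholder for fractional-precision interpolation: a whole-pel
    # shift with edge replication.
    n = len(row)
    return [row[max(0, min(n - 1, i + offset))] for i in range(n)]

def predict_block(mode, top, left, dx, dy, size=4):
    if mode == 0:                        # Vertical: each row copies the top
        t = shift_row(top, dx) if dx else top
        return [t[:size] for _ in range(size)]
    if mode == 1:                        # Horizontal: each row copies left[r]
        l = shift_row(left, dy) if dy else left
        return [[l[r]] * size for r in range(size)]
    if mode == 2:                        # DC: averaging, no shift allowed
        avg = (sum(top[:size]) + sum(left[:size]) + size) // (2 * size)
        return [[avg] * size for _ in range(size)]
    raise NotImplementedError("other modes use both neighbor rows")
```

A nonzero `dx` in the Vertical case reproduces the effect described in the text: the predicted columns are generated from top neighbors whose phase has been shifted.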

Claims (1)

VII. Scope of Patent Application:

1. An image processing apparatus, comprising:
mode determination means for determining a prediction mode of intra prediction on image data for an intra prediction block to be processed by the intra prediction;
phase shift means for shifting the phase of adjacent pixels, which are adjacent to the intra prediction block in a specific positional relationship, in accordance with an offset direction corresponding to the prediction mode determined by the mode determination means and with candidate offset amounts;
offset amount determination means for determining an optimum offset amount of the phase for the adjacent pixels by using the adjacent pixels and the adjacent pixels whose phase has been shifted by the phase shift means; and
predicted image generation means for generating a predicted image of the intra prediction block by using adjacent pixels whose phase has been shifted in accordance with the optimum offset amount determined by the offset amount determination means.

2. The image processing apparatus according to claim 1, further comprising:
encoding means for generating an encoded stream by encoding difference information between an image of the intra prediction block and the predicted image generated by the predicted image generation means; and
transmission means for transmitting offset amount information indicating the optimum offset amount determined by the offset amount determination means, and prediction mode information indicating the prediction mode determined by the mode determination means, together with the encoded stream generated by the encoding means.

3. The image processing apparatus according to claim 2, wherein
the encoding means encodes, as the offset amount information, difference information indicating the difference between the optimum offset amount determined for the intra prediction block and the optimum offset amount determined for the block to which MostProbableMode is assigned, and
the transmission means transmits the encoded stream generated by the encoding means and the difference information.

4. The image processing apparatus according to claim 1, wherein the phase shift means prohibits the shift of the phase when the prediction mode determined by the mode determination means is the DC prediction mode.

5. The image processing apparatus according to claim 1, wherein, when the prediction mode determined by the mode determination means is the Vertical prediction mode, the Diagonal Down-Left prediction mode, or the Vertical-Left prediction mode, the phase shift means shifts, for the upper adjacent pixels among the adjacent pixels, the phase in the Horizontal direction in accordance with the candidate offset amounts, and prohibits, for the left adjacent pixels among the adjacent pixels, the shift of the phase in the Vertical direction.

6. The image processing apparatus according to claim 1, wherein, when the prediction mode determined by the mode determination means is the Horizontal prediction mode or the Horizontal-Up prediction mode, the phase shift means shifts, for the left adjacent pixels among the adjacent pixels, the phase in the Vertical direction in accordance with the candidate offset amounts, and prohibits, for the upper adjacent pixels among the adjacent pixels, the shift of the phase in the Horizontal direction.

7. The image processing apparatus according to claim 1, wherein
the mode determination means determines all prediction modes of the intra prediction,
the phase shift means shifts the phase of the adjacent pixels in accordance with the offset directions corresponding to all the prediction modes determined by the mode determination means and with the candidate offset amounts, and
the offset amount determination means determines an optimum offset amount of the phase and an optimum prediction mode for the adjacent pixels by using the adjacent pixels and the adjacent pixels whose phase has been shifted by the phase shift means.

8. The image processing apparatus according to claim 1, further comprising:
motion prediction and compensation means for performing inter motion prediction on an inter motion prediction block of the image,
wherein the phase shift means shifts the phase of the adjacent pixels by using a filter that the motion prediction and compensation means uses for prediction with fractional pixel precision.

9. An image processing method, comprising the following steps, performed by an image processing apparatus:
determining a prediction mode of intra prediction on image data for an intra prediction block to be processed by the intra prediction;
shifting the phase of adjacent pixels, which are adjacent to the intra prediction block in a specific positional relationship, in accordance with an offset direction corresponding to the determined prediction mode and with candidate offset amounts;
determining an optimum offset amount of the phase for the adjacent pixels by using the adjacent pixels and the adjacent pixels whose phase has been shifted; and
generating a predicted image of the intra prediction block by using adjacent pixels whose phase has been shifted in accordance with the determined optimum offset amount.

10. An image processing apparatus, comprising:
receiving means for receiving prediction mode information indicating a prediction mode of intra prediction for an intra prediction block to be processed by the intra prediction, and offset amount information indicating an offset amount by which the phase of adjacent pixels, which are adjacent to the intra prediction block in a specific positional relationship, is shifted in accordance with the prediction mode indicated by the prediction mode information;
phase shift means for shifting the phase of the adjacent pixels in accordance with the offset direction and the offset amount corresponding to the prediction mode received by the receiving means; and
predicted image generation means for generating a predicted image of the intra prediction block by using adjacent pixels whose phase has been shifted by the phase shift means.

11. The image processing apparatus according to claim 10, wherein the receiving means receives, as the offset amount information, difference information indicating the difference between the offset amount for the intra prediction block and the offset amount for the block to which MostProbableMode is assigned.

12. The image processing apparatus according to claim 10, further comprising decoding means for decoding the intra prediction block by using the predicted image generated by the predicted image generation means.

13. The image processing apparatus according to claim 12, wherein the decoding means decodes the prediction mode information and the offset amount information received by the receiving means.

14. The image processing apparatus according to claim 10, wherein the phase shift means prohibits the shift of the phase of the adjacent pixels when the prediction mode decoded by the decoding means is the DC prediction mode.

15. The image processing apparatus according to claim 10, wherein, when the prediction mode decoded by the decoding means is the Vertical prediction mode, the Diagonal Down-Left prediction mode, or the Vertical-Left prediction mode, the phase shift means shifts, for the upper adjacent pixels among the adjacent pixels, the phase in the Horizontal direction in accordance with the offset amount decoded by the decoding means, and prohibits, for the left adjacent pixels among the adjacent pixels, the shift of the phase in the Vertical direction.

16. The image processing apparatus according to claim 10, wherein, when the prediction mode decoded by the decoding means is the Horizontal prediction mode or the Horizontal-Up prediction mode, the phase shift means shifts, for the left adjacent pixels among the adjacent pixels, the phase in the Vertical direction in accordance with the offset amount decoded by the decoding means, and prohibits, for the upper adjacent pixels among the adjacent pixels, the shift of the phase in the Horizontal direction.

17. The image processing apparatus according to claim 10, further comprising:
motion prediction and compensation means for performing inter motion prediction by using an encoded inter motion prediction block and a motion vector decoded by the decoding means,
wherein the phase shift means shifts the phase of the adjacent pixels by using a filter that the motion prediction and compensation means uses for prediction with fractional pixel precision.

18. An image processing method, comprising the following steps, performed by an image processing apparatus:
receiving prediction mode information indicating a prediction mode of intra prediction for an intra prediction block to be processed by the intra prediction, and offset amount information indicating an offset amount by which the phase of adjacent pixels, which are adjacent to the intra prediction block in a specific positional relationship, is shifted in accordance with the prediction mode indicated by the prediction mode information;
shifting the phase of the adjacent pixels in accordance with the offset direction and the offset amount corresponding to the received prediction mode; and
generating a predicted image of the intra prediction block by using adjacent pixels whose phase has been shifted.
An image processing apparatus, comprising:
a mode determination unit that determines, for an intra prediction block to be processed by intra prediction, a prediction mode of the intra prediction for image data;
a phase shift unit that shifts the phase of adjacent pixels adjoining the intra prediction block in a predetermined positional relationship, in accordance with a shift direction corresponding to the prediction mode determined by the mode determination unit and with candidate shift amounts;
a shift amount determination unit that determines an optimal phase shift amount for the adjacent pixels, using the adjacent pixels and the adjacent pixels whose phase has been shifted by the phase shift unit; and
a predicted image generation unit that generates a predicted image of the intra prediction block using the adjacent pixels whose phase has been shifted in accordance with the optimal shift amount determined by the shift amount determination unit.

2. The image processing apparatus of claim 1, further comprising:
an encoding unit that encodes difference information between the image of the intra prediction block and the predicted image generated by the predicted image generation unit to generate an encoded stream; and
a transmission unit that transmits, together with the encoded stream generated by the encoding unit, shift amount information indicating the optimal shift amount determined by the shift amount determination unit and prediction mode information indicating the prediction mode determined by the mode determination unit.

3. The image processing apparatus of claim 2, wherein
the encoding unit encodes, as the shift amount information, difference information indicating the difference between the optimal shift amount determined for the intra prediction block and the optimal shift amount determined for the block assigned MostProbableMode, and
the transmission unit transmits the encoded stream generated by the encoding unit together with the difference information.

4. The image processing apparatus of claim 1, wherein the phase shift unit prohibits the phase shift when the prediction mode determined by the mode determination unit is the DC prediction mode.

5. The image processing apparatus of claim 1, wherein, when the prediction mode determined by the mode determination unit is the Vertical prediction mode, the Diag_Down_Left prediction mode, or the Vertical_Left prediction mode, the phase shift unit shifts the phase of the upper adjacent pixels among the adjacent pixels in the Horizontal direction in accordance with the candidate shift amounts, and prohibits the phase shift of the left adjacent pixels among the adjacent pixels in the Vertical direction.

6. The image processing apparatus of claim 1, wherein, when the prediction mode determined by the mode determination unit is the Horizontal prediction mode or the Horizontal_Up prediction mode, the phase shift unit shifts the phase of the left adjacent pixels among the adjacent pixels in the Vertical direction in accordance with the candidate shift amounts, and prohibits the phase shift of the upper adjacent pixels among the adjacent pixels in the Horizontal direction.
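The mode-dependent shift rules of claims 4 to 6 can be summarized in a small sketch. This is not part of the patent text; the mode names follow the claims, and the fallback for the remaining diagonal modes is an assumption made only for illustration:

```python
# Illustrative mapping of intra prediction modes to the neighbour groups
# whose phase may be shifted (claims 4-6). Hypothetical glue code.

VERTICAL_FAMILY = {"Vertical", "Diag_Down_Left", "Vertical_Left"}
HORIZONTAL_FAMILY = {"Horizontal", "Horizontal_Up"}

def allowed_shift_directions(mode):
    """Return the neighbour groups whose phase may be shifted.

    'top'  -> upper adjacent pixels, shifted in the Horizontal direction
    'left' -> left adjacent pixels, shifted in the Vertical direction
    """
    if mode == "DC":                 # claim 4: no phase shift at all
        return set()
    if mode in VERTICAL_FAMILY:      # claim 5: shift the top row only
        return {"top"}
    if mode in HORIZONTAL_FAMILY:    # claim 6: shift the left column only
        return {"left"}
    return {"top", "left"}           # other diagonal modes: assumption
```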
7. The image processing apparatus of claim 1, wherein
the mode determination unit determines all the prediction modes of the intra prediction,
the phase shift unit shifts the phase of the adjacent pixels in accordance with the shift directions corresponding to all the prediction modes determined by the mode determination unit and with the candidate shift amounts, and
the shift amount determination unit determines, for the adjacent pixels, the optimal phase shift amount and the optimal prediction mode, using the adjacent pixels and the adjacent pixels whose phase has been shifted by the phase shift unit.

8. The image processing apparatus of claim 1, further comprising:
a motion prediction and compensation unit that performs inter motion prediction on an inter motion prediction block of the image,
wherein the phase shift unit shifts the phase of the adjacent pixels using the filter used by the motion prediction and compensation unit for prediction at fractional pixel precision.

9. An image processing method, comprising the following steps performed by an image processing apparatus:
determining, for an intra prediction block to be processed by intra prediction, a prediction mode of the intra prediction for image data;
shifting the phase of adjacent pixels adjoining the intra prediction block in a predetermined positional relationship, in accordance with a shift direction corresponding to the determined prediction mode and with candidate shift amounts;
determining an optimal phase shift amount for the adjacent pixels, using the adjacent pixels and the adjacent pixels whose phase has been shifted; and
generating a predicted image of the intra prediction block using the adjacent pixels whose phase has been shifted in accordance with the determined optimal shift amount.
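Claims 7 to 9 describe an encoder-side search over candidate phase shifts, and claim 8 states that the neighbouring pixels are shifted with the same interpolation filter used for fractional-pixel motion compensation. A minimal sketch of such a search, assuming the H.264 six-tap half-pel filter {1, -5, 20, 20, -5, 1} and a plain SAD cost against a flat Vertical prediction (both are assumptions for illustration, not the patent's actual cost function):

```python
import numpy as np

SIX_TAP = np.array([1, -5, 20, 20, -5, 1])  # H.264-style half-pel filter

def half_pel(row):
    """Interpolate a pixel row at half-pel phase with the 6-tap filter."""
    padded = np.pad(np.asarray(row, dtype=np.int64), (2, 3), mode="edge")
    vals = np.convolve(padded, SIX_TAP, mode="valid")  # one value per position
    return np.clip((vals + 16) >> 5, 0, 255)           # round and saturate

def best_offset(top_row, block):
    """Pick the candidate phase (0 = integer, 1 = half-pel) whose flat
    Vertical prediction has the smallest SAD against the block."""
    top_row = np.asarray(top_row, dtype=np.int64)
    candidates = {0: top_row, 1: half_pel(top_row)}
    return min(candidates,
               key=lambda k: int(np.abs(block - candidates[k][None, :]).sum()))
```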
10. An image processing apparatus, comprising:
a reception unit that receives prediction mode information indicating the intra prediction mode of an intra prediction block to be processed by intra prediction, and shift amount information indicating the amount by which the phase of adjacent pixels adjoining the intra prediction block in a predetermined positional relationship is shifted according to the prediction mode indicated by the prediction mode information;
a phase shift unit that shifts the phase of the adjacent pixels in accordance with the shift direction and shift amount corresponding to the prediction mode received by the reception unit; and
a predicted image generation unit that generates a predicted image of the intra prediction block using the adjacent pixels whose phase has been shifted by the phase shift unit.

11. The image processing apparatus of claim 10, wherein the reception unit receives, as the shift amount information, difference information indicating the difference between the shift amount for the intra prediction block and the shift amount for the block assigned MostProbableMode.

12. The image processing apparatus of claim 10, further comprising a decoding unit that decodes the intra prediction block using the predicted image generated by the predicted image generation unit.

13. The image processing apparatus of claim 12, wherein the decoding unit decodes the prediction mode information and the shift amount information received by the reception unit.

14. The image processing apparatus of claim 10, wherein the phase shift unit prohibits the phase shift of the adjacent pixels when the prediction mode decoded by the decoding unit is the DC prediction mode.

15.
The image processing apparatus of claim 10, wherein, when the prediction mode decoded by the decoding unit is the Vertical prediction mode, the Diag_Down_Left prediction mode, or the Vertical_Left prediction mode, the phase shift unit shifts the phase of the upper adjacent pixels among the adjacent pixels in the Horizontal direction in accordance with the shift amount decoded by the decoding unit, and prohibits the phase shift of the left adjacent pixels among the adjacent pixels in the Vertical direction.

16. The image processing apparatus of claim 10, wherein, when the prediction mode decoded by the decoding unit is the Horizontal prediction mode or the Horizontal_Up prediction mode, the phase shift unit shifts the phase of the left adjacent pixels among the adjacent pixels in the Vertical direction in accordance with the shift amount decoded by the decoding unit, and prohibits the phase shift of the upper adjacent pixels among the adjacent pixels in the Horizontal direction.

17. The image processing apparatus of claim 10, further comprising:
a motion prediction and compensation unit that performs inter motion prediction using an encoded inter motion prediction block and the motion vector decoded by the decoding unit,
wherein the phase shift unit shifts the phase of the adjacent pixels using the filter used by the motion prediction and compensation unit for prediction at fractional pixel precision.

18.
An image processing method, comprising the following steps performed by an image processing apparatus:
receiving prediction mode information indicating the intra prediction mode of an intra prediction block to be processed by intra prediction, and shift amount information indicating the amount by which the phase of adjacent pixels adjoining the intra prediction block in a predetermined positional relationship is shifted according to the prediction mode indicated by the prediction mode information;
shifting the phase of the adjacent pixels in accordance with the shift direction and shift amount corresponding to the received prediction mode; and
generating a predicted image of the intra prediction block using the adjacent pixels whose phase has been shifted.
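On the decoder side (claims 10 and 18), the received mode and shift amount are simply applied: the neighbour phase is shifted with the same filter the encoder used and the prediction is rebuilt. A toy sketch covering the Vertical mode only, under the same assumed six-tap filter as above; the function names are hypothetical:

```python
import numpy as np

SIX_TAP = np.array([1, -5, 20, 20, -5, 1])  # assumed H.264 half-pel filter

def shift_phase(row, offset):
    """Return the neighbour row at the signalled phase: 0 = as-is, 1 = half-pel."""
    row = np.asarray(row, dtype=np.int64)
    if offset == 0:
        return row
    padded = np.pad(row, (2, 3), mode="edge")
    vals = np.convolve(padded, SIX_TAP, mode="valid")
    return np.clip((vals + 16) >> 5, 0, 255)

def predict_block(mode, offset, top_row, size=4):
    """Build the Vertical intra prediction from the phase-shifted top neighbours."""
    if mode != "Vertical":
        raise NotImplementedError("toy decoder: Vertical mode only")
    ref = shift_phase(top_row, offset)[:size]
    return np.tile(ref, (size, 1))  # every row copies the reference row
```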
TW99108540A 2009-04-24 2010-03-23 Image processing apparatus and method TWI400960B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2009105937A JP5169978B2 (en) 2009-04-24 2009-04-24 Image processing apparatus and method

Publications (2)

Publication Number Publication Date
TW201105144A true TW201105144A (en) 2011-02-01
TWI400960B TWI400960B (en) 2013-07-01

Family

ID=43011172

Family Applications (5)

Application Number Title Priority Date Filing Date
TW102111791A TWI524743B (en) 2009-04-24 2010-03-23 An image processing apparatus, an image processing method, a program, and a recording medium
TW99108540A TWI400960B (en) 2009-04-24 2010-03-23 Image processing apparatus and method
TW102111788A TWI531216B (en) 2009-04-24 2010-03-23 Image processing apparatus and method
TW102111790A TWI528789B (en) 2009-04-24 2010-03-23 Image processing apparatus and method
TW102111789A TWI540901B (en) 2009-04-24 2010-03-23 Image processing apparatus and method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
TW102111791A TWI524743B (en) 2009-04-24 2010-03-23 An image processing apparatus, an image processing method, a program, and a recording medium

Family Applications After (3)

Application Number Title Priority Date Filing Date
TW102111788A TWI531216B (en) 2009-04-24 2010-03-23 Image processing apparatus and method
TW102111790A TWI528789B (en) 2009-04-24 2010-03-23 Image processing apparatus and method
TW102111789A TWI540901B (en) 2009-04-24 2010-03-23 Image processing apparatus and method

Country Status (12)

Country Link
US (4) US10755444B2 (en)
EP (3) EP3211896A1 (en)
JP (1) JP5169978B2 (en)
KR (5) KR101641400B1 (en)
CN (5) CN102396230B (en)
AU (1) AU2010240090B2 (en)
BR (1) BRPI1015330B1 (en)
CA (2) CA2755889C (en)
MX (1) MX2011010960A (en)
RU (3) RU2665876C2 (en)
TW (5) TWI524743B (en)
WO (1) WO2010123056A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI484442B (en) * 2012-09-20 2015-05-11 Univ Nat Taiwan Science Tech Optical illusion image generating device and the method thereof

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5169978B2 (en) * 2009-04-24 2013-03-27 ソニー株式会社 Image processing apparatus and method
CN101783957B (en) * 2010-03-12 2012-04-18 清华大学 Method and device for predictive encoding of video
KR101373814B1 (en) * 2010-07-31 2014-03-18 엠앤케이홀딩스 주식회사 Apparatus of generating prediction block
JP2012104945A (en) * 2010-11-08 2012-05-31 Sony Corp Image processing apparatus, image processing method, and program
CN103262539A (en) 2010-12-17 2013-08-21 三菱电机株式会社 Moving image encoding device, moving image decoding device, moving image encoding method and moving image decoding method
US9049455B2 (en) * 2010-12-28 2015-06-02 Panasonic Intellectual Property Corporation Of America Image coding method of coding a current picture with prediction using one or both of a first reference picture list including a first current reference picture for a current block and a second reference picture list including a second current reference picture for the current block
JP2012147332A (en) * 2011-01-13 2012-08-02 Sony Corp Encoding device, encoding method, decoding device, and decoding method
TW201235974A (en) * 2011-02-25 2012-09-01 Altek Corp Image processing apparatus and memory accessing method thereof
KR101383775B1 (en) 2011-05-20 2014-04-14 주식회사 케이티 Method And Apparatus For Intra Prediction
US9654785B2 (en) 2011-06-09 2017-05-16 Qualcomm Incorporated Enhanced intra-prediction mode signaling for video coding using neighboring mode
KR20120140181A (en) * 2011-06-20 2012-12-28 한국전자통신연구원 Method and apparatus for encoding and decoding using filtering for prediction block boundary
RU2627033C1 (en) 2011-06-28 2017-08-03 Самсунг Электроникс Ко., Лтд. Method and device for coding and decoding image using internal prediction
US20130016769A1 (en) 2011-07-17 2013-01-17 Qualcomm Incorporated Signaling picture size in video coding
GB2501535A (en) 2012-04-26 2013-10-30 Sony Corp Chrominance Processing in High Efficiency Video Codecs
US9420280B2 (en) * 2012-06-08 2016-08-16 Qualcomm Incorporated Adaptive upsampling filters
CN103152540B (en) * 2013-03-11 2016-01-20 深圳创维-Rgb电子有限公司 Resolution conversion method and device, ultra-high-definition television
US11463689B2 (en) * 2015-06-18 2022-10-04 Qualcomm Incorporated Intra prediction and intra mode coding
US10841593B2 (en) 2015-06-18 2020-11-17 Qualcomm Incorporated Intra prediction and intra mode coding
CN105338366B (en) * 2015-10-29 2018-01-19 北京工业大学 A kind of coding/decoding method of video sequence mid-score pixel
KR20170058837A (en) * 2015-11-19 2017-05-29 한국전자통신연구원 Method and apparatus for encoding/decoding of intra prediction mode signaling
CN109417637B (en) 2016-04-26 2021-12-07 英迪股份有限公司 Method and apparatus for encoding/decoding image
KR20230125341A (en) * 2016-10-11 2023-08-29 엘지전자 주식회사 Image decoding method and apparatus relying on intra prediction in image coding system
US11277644B2 (en) 2018-07-02 2022-03-15 Qualcomm Incorporated Combining mode dependent intra smoothing (MDIS) with intra interpolation filter switching
US11303885B2 (en) 2018-10-25 2022-04-12 Qualcomm Incorporated Wide-angle intra prediction smoothing and interpolation
US11431971B2 (en) 2019-06-24 2022-08-30 Industrial Technology Research Institute Method and image processing apparatus for video coding
US11846702B2 (en) * 2019-07-18 2023-12-19 Nec Corporation Image processing device and image processing method
CN113965764B (en) * 2020-07-21 2023-04-07 Oppo广东移动通信有限公司 Image encoding method, image decoding method and related device
WO2023192336A1 (en) * 2022-03-28 2023-10-05 Beijing Dajia Internet Information Technology Co., Ltd. Methods and devices for high precision intra prediction
WO2023212254A1 (en) * 2022-04-27 2023-11-02 Beijing Dajia Internet Information Technology Co., Ltd. Methods and devices for high precision intra prediction

Family Cites Families (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR970078657A (en) * 1996-05-20 1997-12-12 구자홍 Video data compression device
FR2756399B1 (en) * 1996-11-28 1999-06-25 Thomson Multimedia Sa VIDEO COMPRESSION METHOD AND DEVICE FOR SYNTHESIS IMAGES
JP2000041248A (en) * 1998-07-23 2000-02-08 Sony Corp Image decoder and image decoding method
US6418408B1 (en) * 1999-04-05 2002-07-09 Hughes Electronics Corporation Frequency domain interpolative speech codec system
US6987893B2 (en) * 2001-01-05 2006-01-17 Lg Electronics Inc. Image interpolation method and apparatus thereof
US7929610B2 (en) * 2001-03-26 2011-04-19 Sharp Kabushiki Kaisha Methods and systems for reducing blocking artifacts with reduced complexity for spatially-scalable video coding
JP2003037843A (en) * 2001-07-23 2003-02-07 Sony Corp Picture processor, method therefor, recording medium and program thereof
US7120196B2 (en) * 2002-04-29 2006-10-10 Ess Technology, Inc. Intra-prediction using intra-macroblock motion compensation
JP2003348595A (en) 2002-05-28 2003-12-05 Sony Corp Image processor and image processing method, recording medium and program
AU2003248913A1 (en) * 2003-01-10 2004-08-10 Thomson Licensing S.A. Defining interpolation filters for error concealment in a coded image
JP4144377B2 (en) * 2003-02-28 2008-09-03 ソニー株式会社 Image processing apparatus and method, recording medium, and program
KR101029762B1 (en) * 2003-03-03 2011-04-19 에이전시 포 사이언스, 테크놀로지 앤드 리서치 Fast mode decision algorithm for intra prediction for advanced video coding
JP3968712B2 (en) * 2003-04-28 2007-08-29 ソニー株式会社 Motion prediction compensation apparatus and method
US8417066B2 (en) * 2004-03-04 2013-04-09 Broadcom Corporation Method and system for polyphase filtering by combining IIR and FIR filters and its applications in video scaling
WO2005107267A1 (en) * 2004-04-28 2005-11-10 Hitachi, Ltd. Image encoding/decoding device, encoding/decoding program, and encoding/decoding method
JP4542447B2 (en) * 2005-02-18 2010-09-15 株式会社日立製作所 Image encoding / decoding device, encoding / decoding program, and encoding / decoding method
US8340177B2 (en) * 2004-07-12 2012-12-25 Microsoft Corporation Embedded base layer codec for 3D sub-band coding
CA2573990A1 (en) * 2004-07-15 2006-02-23 Qualcomm Incorporated H.264 spatial error concealment based on the intra-prediction direction
JP4501631B2 (en) * 2004-10-26 2010-07-14 日本電気株式会社 Image coding apparatus and method, computer program for image coding apparatus, and portable terminal
US7830960B2 (en) * 2005-01-13 2010-11-09 Qualcomm Incorporated Mode selection techniques for intra-prediction video encoding
EP2234405A3 (en) * 2005-04-13 2011-03-16 NTT DoCoMo, Inc. Image prediction on the basis of a frequency band analysis of the to be predicted image
KR100718135B1 (en) * 2005-08-24 2007-05-14 삼성전자주식회사 apparatus and method for video prediction for multi-formet codec and the video encoding/decoding apparatus and method thereof.
JP4650173B2 (en) * 2005-09-05 2011-03-16 ソニー株式会社 Encoding apparatus, encoding method, encoding method program, and recording medium recording the encoding method program
CN101385356B (en) * 2006-02-17 2011-01-19 汤姆森许可贸易公司 Process for coding images using intra prediction mode
US7929608B2 (en) * 2006-03-28 2011-04-19 Sony Corporation Method of reducing computations in intra-prediction and mode decision processes in a digital video encoder
US7545740B2 (en) 2006-04-07 2009-06-09 Corrigent Systems Ltd. Two-way link aggregation
US20070286277A1 (en) * 2006-06-13 2007-12-13 Chen Xuemin Sherman Method and system for video compression using an iterative encoding algorithm
US7840078B2 (en) * 2006-07-10 2010-11-23 Sharp Laboratories Of America, Inc. Methods and systems for image processing control based on adjacent block characteristics
CN102685496B (en) * 2006-07-10 2014-11-05 夏普株式会社 Methods and systems for combining layers in a multi-layer bitstream
EP2056606A1 (en) 2006-07-28 2009-05-06 Kabushiki Kaisha Toshiba Image encoding and decoding method and apparatus
JP4854019B2 (en) 2006-11-29 2012-01-11 独立行政法人情報通信研究機構 Opinion collection system, opinion collection method and opinion collection program
US8331448B2 (en) * 2006-12-22 2012-12-11 Qualcomm Incorporated Systems and methods for efficient spatial intra predictabilty determination (or assessment)
US8942505B2 (en) 2007-01-09 2015-01-27 Telefonaktiebolaget L M Ericsson (Publ) Adaptive filter representation
RU2472305C2 (en) 2007-02-23 2013-01-10 Ниппон Телеграф Энд Телефон Корпорейшн Method of video coding and method of video decoding, devices for this, programs for this, and storage carriers, where programs are stored
JP4762938B2 (en) * 2007-03-06 2011-08-31 三菱電機株式会社 Data embedding device, data extracting device, data embedding method, and data extracting method
JP2008271371A (en) * 2007-04-24 2008-11-06 Sharp Corp Moving image encoding apparatus, moving image decoding apparatus, moving image encoding method, moving image decoding method, and program
KR101362757B1 (en) 2007-06-11 2014-02-14 삼성전자주식회사 Method and apparatus for image encoding and decoding using inter color compensation
JP2009010492A (en) * 2007-06-26 2009-01-15 Hitachi Ltd Image decoder and image conversion circuit
CN101115207B (en) 2007-08-30 2010-07-21 上海交通大学 Method and device for implementing interframe forecast based on relativity between future positions
EP2210421A4 (en) * 2007-10-16 2013-12-04 Lg Electronics Inc A method and an apparatus for processing a video signal
EP2081386A1 (en) 2008-01-18 2009-07-22 Panasonic Corporation High precision edge prediction for intracoding
JP2009284275A (en) 2008-05-23 2009-12-03 Nippon Telegr & Teleph Corp <Ntt> Image encoding method, image decoding method, image encoder, image decoder, image encoding program, image decoding program, and recording medium recording programs and readable by computer
JP5169978B2 (en) * 2009-04-24 2013-03-27 ソニー株式会社 Image processing apparatus and method
CN102771125B (en) * 2009-12-10 2015-12-09 Sk电信有限公司 Use coding/decoding method and the device of tree structure
JP2012086745A (en) 2010-10-21 2012-05-10 Tenryu Kogyo Kk Headrest structure of passenger seat
JP5614233B2 (en) 2010-10-21 2014-10-29 トヨタ自動車株式会社 Heat insulation structure of exhaust parts
JP5488684B2 (en) 2012-12-28 2014-05-14 ソニー株式会社 Image processing apparatus and method, program, and recording medium
JP5488685B2 (en) 2012-12-28 2014-05-14 ソニー株式会社 Image processing apparatus and method, program, and recording medium


Also Published As

Publication number Publication date
CN104320664A (en) 2015-01-28
KR20160086420A (en) 2016-07-19
TW201347560A (en) 2013-11-16
RU2015103369A3 (en) 2018-07-27
TW201347561A (en) 2013-11-16
BRPI1015330B1 (en) 2021-04-13
CN104320665A (en) 2015-01-28
KR20170005158A (en) 2017-01-11
US11107251B2 (en) 2021-08-31
US20120033736A1 (en) 2012-02-09
US20200320746A1 (en) 2020-10-08
WO2010123056A1 (en) 2010-10-28
JP2010258740A (en) 2010-11-11
KR101641474B1 (en) 2016-07-20
CN102396230A (en) 2012-03-28
CN104320664B (en) 2018-09-14
EP3211895A1 (en) 2017-08-30
TWI400960B (en) 2013-07-01
RU2665876C2 (en) 2018-09-04
CA2755889A1 (en) 2010-10-28
CN104320666A (en) 2015-01-28
US10755444B2 (en) 2020-08-25
US9123109B2 (en) 2015-09-01
TW201347558A (en) 2013-11-16
CN102396230B (en) 2015-06-17
RU2015103369A (en) 2015-06-20
CN104363457B (en) 2018-09-04
CN104363457A (en) 2015-02-18
KR101697056B1 (en) 2017-01-16
TWI524743B (en) 2016-03-01
RU2015103222A3 (en) 2018-07-27
US10755445B2 (en) 2020-08-25
KR20150022027A (en) 2015-03-03
CN104320666B (en) 2018-11-02
BRPI1015330A2 (en) 2016-05-31
EP2424246A1 (en) 2012-02-29
MX2011010960A (en) 2011-11-02
RU2547634C2 (en) 2015-04-10
TWI531216B (en) 2016-04-21
CA2755889C (en) 2017-08-15
US20130301941A1 (en) 2013-11-14
CA2973288A1 (en) 2010-10-28
KR101641400B1 (en) 2016-07-20
EP3211896A1 (en) 2017-08-30
RU2011142001A (en) 2013-04-27
TWI540901B (en) 2016-07-01
CA2973288C (en) 2018-12-04
CN104320665B (en) 2018-08-31
TWI528789B (en) 2016-04-01
JP5169978B2 (en) 2013-03-27
KR101690471B1 (en) 2016-12-27
KR20120027145A (en) 2012-03-21
US20190221007A1 (en) 2019-07-18
RU2015103222A (en) 2016-08-20
TW201347559A (en) 2013-11-16
AU2010240090B2 (en) 2015-11-12
KR20160086419A (en) 2016-07-19
AU2010240090A1 (en) 2011-10-27
KR101786418B1 (en) 2017-10-17
EP2424246A4 (en) 2013-10-02
RU2665877C2 (en) 2018-09-04

Similar Documents

Publication Publication Date Title
TW201105144A (en) Image processing apparatus and method
JP7111859B2 (en) Video decoding method, video encoding method and recording medium
KR101452860B1 (en) Method and apparatus for image encoding, and method and apparatus for image decoding
TW201127066A (en) Image-processing device and method
TW201127069A (en) Image-processing device and method
JP5488685B2 (en) Image processing apparatus and method, program, and recording medium
JP5488684B2 (en) Image processing apparatus and method, program, and recording medium
KR101886259B1 (en) Method and apparatus for image encoding, and computer-readable medium including encoded bitstream
JP6102978B2 (en) Image processing apparatus and method, program, and recording medium
JP5776804B2 (en) Image processing apparatus and method, and recording medium
JP5776803B2 (en) Image processing apparatus and method, and recording medium
JP2015167386A (en) Image processor, method, program and recording medium

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees