TWI249354B - A method for de-interlacing an interlaced image - Google Patents

A method for de-interlacing an interlaced image

Info

Publication number
TWI249354B
TWI249354B
Authority
TW
Taiwan
Prior art keywords
pixel
picture
interlaced
block
unresolved
Prior art date
Application number
TW93139457A
Other languages
Chinese (zh)
Other versions
TW200623875A (en)
Inventor
Jia-Jin Wu
Shen-Chuan Tai
Original Assignee
Univ Nat Cheng Kung
Priority date
Filing date
Publication date
Application filed by Univ Nat Cheng Kung filed Critical Univ Nat Cheng Kung
Priority to TW93139457A priority Critical patent/TWI249354B/en
Application granted granted Critical
Publication of TWI249354B publication Critical patent/TWI249354B/en
Publication of TW200623875A publication Critical patent/TW200623875A/en

Landscapes

  • Television Systems (AREA)

Abstract

A method for de-interlacing an interlaced image. The method is motion-adaptive for better visual quality. First, motion detection divides a picture into static and moving regions. Moving regions are interpolated along the spatial direction, and static regions along the temporal direction. After the suitable interpolation is applied and the two regions are merged, a high-quality frame image is produced.

Description

1249354

IX. Description of the Invention

[Technical Field]

The present invention relates to a de-interlacing method for video images, and more particularly to a de-interlacing method for interlaced images.

[Prior Art]

Television systems have been developing for more than seventy years. To transmit more picture data within a limited transmission bandwidth, interlaced scanning was adopted: the odd-numbered fields carry only the odd-numbered scan lines, and the even-numbered fields carry only the even-numbered scan lines. Because the odd and even fields alternate rapidly and human vision exhibits persistence, viewers still perceive continuous motion, while nearly half the bandwidth of full-frame transmission is saved.

Although this scheme saves considerable transmission bandwidth, the missing picture data causes unavoidable artifacts such as edge flicker, interline flicker, and line crawling. These artifacts make objects in the picture look less natural and their motion less smooth, degrading the visual quality of the image.

Furthermore, with the rapid development of television systems, digital television technology has matured. Digital television is promoted for its high picture quality, so it must use a progressive, non-interlaced display mode, showing complete frames continuously, to meet that visual-quality requirement. On the other hand, to remain compatible with legacy television systems, interlaced transmission of picture data must still be supported.

For these two reasons, adding a de-interlacing mechanism to existing interlaced television systems is necessary. De-interlacing computes, from the received interlaced field data, the compensation needed to reconstruct complete frames; the original transmission lines can thus still carry interlaced data while the receiver displays progressive images. For conventional analog television systems this improves playback quality, and for advanced digital television systems it provides compatibility with conventional interlaced transmission.

Many existing de-interlacing techniques perform well only on static or only on dynamic pictures, so they cannot de-interlace every picture satisfactorily, or their computation is too slow for real-time video. A de-interlacing method is therefore needed that suits all kinds of television systems and can handle both static and dynamic picture data.

SUMMARY OF THE INVENTION

The main object of the present invention is to provide a de-interlacing method that improves the visual quality of conventional analog television systems.
A further object of the present invention is to provide a de-interlacing method that makes advanced digital television systems compatible with conventional interlaced transmission. Another object is to provide a de-interlacing method suitable for all kinds of static and dynamic pictures. Yet another object is to provide a de-interlacing method that solves television picture data quickly, in real time.

To achieve these objects, the de-interlacing procedure for a field takes four fields as input: the field itself, the field after it, and the two fields before it. A motion-detection step first uses these fields to separate the static regions of the field from its moving regions. For a static region, the method judges whether the region belongs to a follow shot, in which the camera tracks a moving object, or to a purely static image; a follow shot is processed with the line-average method, and a purely static segment with the temporal-mean method. For a moving region, block-based edge detection first separates the pixels lying exactly on object edges from those that are not on an edge. Pixels on an object edge are processed with block-based directional interpolation, and the remaining pixels with median interpolation. After these steps every missing pixel has been solved, and combining the solved pixels with the original pixels yields a complete de-interlaced frame.

[Embodiment]

FIG. 1 is a flowchart of a method according to one embodiment of the present invention.
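The dispatch just described can be sketched as follows. This is a minimal illustration, not the patent's implementation: fields are assumed to be lists of rows indexed as `field[y][x]`, and the classification steps are passed in as callbacks so the control flow itself stays visible. All function names are illustrative stand-ins, not identifiers from the patent.

```python
from statistics import median

def deinterlace_pixel(f_cur, f_prev, f_next, x, y,
                      is_moving, is_panning, edge_direction, directional):
    """Solve one missing pixel of field f_n following the FIG. 1 dispatch.

    f_cur, f_prev, f_next are fields as lists of rows (field[y][x]).
    The four callbacks stand in for steps 102-116 of the method and are
    hypothetical interfaces, not names from the patent.
    """
    if not is_moving(x, y):                                    # steps 102/104
        if is_panning(x, y):                                   # step 106: follow shot
            return (f_cur[y - 1][x] + f_cur[y + 1][x]) / 2.0   # step 108: line average
        return (f_prev[y][x] + f_next[y][x]) / 2.0             # step 110: temporal mean
    d = edge_direction(x, y)                                   # step 112: edge detection
    if d is not None:
        return directional(x, y, d)                            # step 116: directional
    line_avg = (f_cur[y - 1][x] + f_cur[y + 1][x]) / 2.0       # step 118: median of both
    return median([line_avg, directional(x, y, 0),             # estimates and the
                   f_prev[y][x], f_next[y][x]])                # temporal neighbours
```

Each branch returns one interpolated value, so a full frame is obtained by running this over every missing pixel and merging with the original field.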
Suppose the field currently being de-interlaced is field fn; the field after it is field fn+1, and the two fields before it are fields fn-1 and fn-2. Fields fn+1, fn, fn-1, and fn-2 are all fed into step 102, where motion detection is performed on field fn to determine which of its regions are static and which are moving. This embodiment uses the BPPD (brightness profile pattern difference) method for motion detection; the BPPD method uses brightness values as the basis of detection.

A brief explanation of the BPPD method follows. Referring to FIG. 2, to decide whether an unsolved pixel w of field fn belongs to a static or a moving region, the pixels near w must be taken into account: pixels a-f in fields fn-2 and fn, and pixels g-i in fields fn-1 and fn+1, where pixel g occupies the same position as pixel w. If the coordinates of pixel w are (x, y), then the coordinates of pixels a, b, and i are (x, y-z), (x-z, y-z), and (x+z, y) respectively, and so on for the other pixels, where z is a preset distance. Writing fn(a) for the value of pixel a in field fn, and so on, the following six parameters decide whether pixel w belongs to a moving region:

diff_fn(a) = |fn(a) - fn-2(a)|
diff_fn(d) = |fn(d) - fn-2(d)|
diff_fn+1(g) = |fn+1(g) - fn-1(g)|
Pn(a) = |[fn(a) - fn(b)] - [fn-2(a) - fn-2(b)]| + |[fn(a) - fn(c)] - [fn-2(a) - fn-2(c)]|
Pn(d) = |[fn(d) - fn(e)] - [fn-2(d) - fn-2(e)]| + |[fn(d) - fn(f)] - [fn-2(d) - fn-2(f)]|
Pn+1(g) = |[fn+1(g) - fn+1(h)] - [fn-1(g) - fn-1(h)]| + |[fn+1(g) - fn+1(i)] - [fn-1(g) - fn-1(i)]|

If one or more of diff_fn(a), diff_fn(d), and diff_fn+1(g) exceeds a preset threshold, or one or more of Pn(a), Pn(d), and Pn+1(g) exceeds another preset threshold, then step 104 judges that pixel w of field fn lies in a moving region; otherwise pixel w is judged to lie in a static region.

Once step 104 has separated the static and moving regions of field fn, consider first how static regions are handled. Static regions fall into two types: a follow shot, in which the camera tracks a moving object, so that objects in the image still move; and a purely static image, in which objects are completely still and there is almost no motion. Step 106 therefore judges which type a static region of field fn belongs to, since each type is handled differently.

Referring again to FIG. 2, suppose pixel w is an unsolved pixel in a static region of field fn, pixels a and d are the pixels vertically adjacent to w above and below, and pixel g is the pixel at the same position as w in fields fn-1 and fn+1. If the static region containing w is a follow shot, the motion of the objects makes w more strongly correlated with pixels a and d. Step 108 therefore takes the values of a and d as parameters and solves the value of w with the line-average method; the line-average formula used in this embodiment is:

fn(w) = [fn(a) + fn(d)] / 2

If, on the other hand, the static region containing w is a purely static image, the absence of motion makes w more strongly correlated with pixel g of fields fn-1 and fn+1.
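The six BPPD quantities above can be sketched in a few lines. Fields are assumed to be lists of rows indexed `field[y][x]` with plain numeric brightness values; the offset z and the two thresholds are parameters. The function name and interface are illustrative, not from the patent.

```python
def bppd_is_moving(f_cur, f_prev2, f_prev, f_next, x, y, z, t_diff, t_profile):
    """BPPD motion test for the missing pixel w at (x, y) of field f_n.

    f_cur is f_n, f_prev2 is f_{n-2} (same parity, contains lines y-z and
    y+z); f_prev/f_next are f_{n-1}/f_{n+1} (opposite parity, contain line y).
    """
    diffs = [
        abs(f_cur[y - z][x] - f_prev2[y - z][x]),   # diff_fn(a): pixel above w
        abs(f_cur[y + z][x] - f_prev2[y + z][x]),   # diff_fn(d): pixel below w
        abs(f_next[y][x] - f_prev[y][x]),           # diff_fn+1(g): same position
    ]

    def profile(img, ref, px, py):
        # brightness-profile pattern difference against the neighbours at
        # horizontal distance z, compared between the two same-parity fields
        left = abs((img[py][px] - img[py][px - z]) - (ref[py][px] - ref[py][px - z]))
        right = abs((img[py][px] - img[py][px + z]) - (ref[py][px] - ref[py][px + z]))
        return left + right

    profiles = [
        profile(f_cur, f_prev2, x, y - z),   # Pn(a)
        profile(f_cur, f_prev2, x, y + z),   # Pn(d)
        profile(f_next, f_prev, x, y),       # Pn+1(g)
    ]
    return any(d > t_diff for d in diffs) or any(p > t_profile for p in profiles)
```

A single `True` from either group is enough to classify the pixel as moving, matching the "one or more exceeds its threshold" rule in the text.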
Therefore, step 110 takes the values of pixel g in fields fn-1 and fn+1 as parameters and solves the value of w with the temporal-mean method; the temporal-mean formula used in this embodiment is:

fn(w) = [fn-1(g) + fn+1(g)] / 2

Next, consider how moving regions are handled. Within a moving region, pixels on object edges and pixels off object edges are processed differently, so step 112 first separates the pixels lying on object edges from the rest. This embodiment uses block-based edge detection as the implementation of step 112. Referring to FIG. 3, to determine whether an unsolved pixel w on scan line Ln lies on the edge of an object, the known pixels around w are examined jointly. In this embodiment the pixels are grouped into blocks, such as blocks U1-U3 and blocks L1-L3 of FIG. 3 and the block Y containing pixel w; the block size can be changed as needed. An object edge may run along direction 302, direction 304, direction 306, and so on, so every direction must be tested. Taking the test for direction 302 as an example:

min_AD = min{ |L - Y| + |Y - U| }

If the result min_AD is smaller than a preset threshold, pixel w is judged to lie on the edge of an object, and that edge runs along direction 302. The same test applies to directions 304 and 306.
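The min_AD search can be sketched as below. How block Y is built around the unsolved pixel, and how each candidate direction maps to block offsets, are simplified assumptions here: blocks are one-dimensional runs of k pixels, and a direction d shifts block U along the upper line by d·k pixels while shifting block L the opposite way on the lower line.

```python
def min_ad_direction(upper, lower, center, k, x, directions=(-1, 0, 1)):
    """Evaluate |L - Y| + |Y - U| for each candidate edge direction.

    upper/lower are the known scan lines above and below the missing pixel,
    center is the k-pixel block Y around the pixel, and x is the block's
    starting column. Returns (best_direction, min_AD); compare min_AD
    against the preset threshold to decide whether the pixel is on an edge.
    """
    def sad(a, b):
        # sum of absolute differences between two k-pixel blocks
        return sum(abs(p - q) for p, q in zip(a, b))

    best_dir, best_score = None, None
    for d in directions:
        u = upper[x + d * k : x + d * k + k]   # block U shifted along direction d
        l = lower[x - d * k : x - d * k + k]   # block L shifted the opposite way
        score = sad(l, center) + sad(center, u)
        if best_score is None or score < best_score:
            best_dir, best_score = d, score
    return best_dir, best_score
```

A small min_AD means the three blocks match well along that direction, which is exactly the "edge along direction 302" condition in the text.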
Continuing the example, suppose the unsolved pixel w is known to lie on an object edge whose direction is direction 302. The method then proceeds to step 116 to solve the value of w, in this embodiment by block-based directional interpolation. First define k as the size of blocks U1, L1, and Y, and let x and y be the coordinates of pixel w on the horizontal and vertical axes of field fn, so that fn(x, y) denotes pixel w and fn(x+1, y) denotes the pixel horizontally adjacent to w. Because the edge runs along direction 302, when k is even the value of pixel w is computed as:

fn(x, y) = [fn(x + k/2, y-1) + fn(x - k/2, y+1)] / 2

and when k is odd it is computed as:

fn(x, y) = [fn(x + (k-1)/2, y-1) + fn(x + (k+1)/2, y-1) + fn(x - (k-1)/2, y+1) + fn(x - (k+1)/2, y+1)] / 4

This solves the value of a pixel w lying on an object edge. If the edge on which pixel w lies runs along direction 304 or 306 instead, the same principle applies.

Next, consider the case in which pixel w does not lie on an object edge; the method then enters step 118. This embodiment solves pixel w by median interpolation:

fn(w) = median(A, B, C, D)

where A is the result of solving pixel w with step 108, the line-average method; B is the result of solving pixel w with step 116, the block-based directional interpolation method; C is fn-1(g) of FIG. 2, the value of the pixel at the same position as w in the previous field; and D is fn+1(g) of FIG. 2, the value of the pixel at the same position as w in the next field.
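The even/odd-k formulas and the median fallback above can be sketched as follows. The field is assumed to be a list of rows indexed `field[y][x]`, only the direction-302 case is shown, and the reconstructed offsets follow the formulas as given; function names are illustrative.

```python
from statistics import median

def directional_interpolate(field, x, y, k):
    """Block-based directional interpolation along direction 302 (step 116)."""
    up, dn = field[y - 1], field[y + 1]            # known lines above and below
    if k % 2 == 0:                                 # even block size: two taps
        return (up[x + k // 2] + dn[x - k // 2]) / 2.0
    return (up[x + (k - 1) // 2] + up[x + (k + 1) // 2]   # odd block size:
            + dn[x - (k - 1) // 2] + dn[x - (k + 1) // 2]) / 4.0  # four taps

def median_interpolate(f_cur, f_prev, f_next, x, y, directional_value):
    """Median fallback for moving pixels not on an object edge (step 118)."""
    a = (f_cur[y - 1][x] + f_cur[y + 1][x]) / 2.0   # A: line-average estimate
    b = directional_value                            # B: directional estimate
    c = f_prev[y][x]                                 # C: fn-1(g)
    d = f_next[y][x]                                 # D: fn+1(g)
    return median([a, b, c, d])
```

Note that `statistics.median` of four values returns the mean of the middle two, which is the usual reading of a four-input median filter.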
After the steps above have solved every pixel missing from field fn, step 120 finally combines the solved pixels with the original pixels of field fn to obtain a complete de-interlaced frame.

Although the present invention has been disclosed above by way of a preferred embodiment, the embodiment is not intended to limit the invention. Those skilled in the art may make various changes and refinements without departing from the spirit and scope of the invention; the scope of protection is therefore defined by the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

To make the above and other objects, features, advantages, and embodiments of the present invention more apparent, the accompanying drawings are described as follows:

FIG. 1 is a flowchart of the de-interlacing method of this embodiment.
FIG. 2 is a schematic diagram of pixel positions in interlaced fields.
FIG. 3 is a schematic diagram of block-based determination of an object's edge direction.

[Description of Main Reference Numerals]

102-120: steps
302-306: directions
fn-2: the field two before the current field
fn-1: the field before the current field
fn: the current field
fn+1: the field after the current field
a-i: pixels
w: pixel
U1-U3: blocks
L1-L3: blocks
Y: block
Ln-3 - Ln+3: scan lines


Claims (1)

1249354

X. Claims:

1. A de-interlacing method for de-interlacing an interlaced field, the method comprising:

(a) determining whether the interlaced field contains an unsolved pixel having no pixel value; if so, performing step (b), and if not, performing step (i);
(b) performing a BPPD (brightness profile pattern difference) method to determine whether the unsolved pixel lies in a static region or a moving region; if in a static region, performing step (c), and if in a moving region, performing step (f);
(c) determining whether the unsolved pixel lies in an image containing motion; if so, performing step (d), and if not, performing step (e);
(d) performing a line-average method to solve the pixel value of the unsolved pixel, then returning to step (a);
(e) performing a temporal-mean method to solve the pixel value of the unsolved pixel, then returning to step (a);
(f) performing a block-based edge detection method to determine whether the unsolved pixel lies on an object edge; if on an object edge, performing step (g), and if not, performing step (h);
(g) performing a block-based directional interpolation method to solve the pixel value of the unsolved pixel, then returning to step (a);
(h) performing median interpolation to solve the pixel value of the unsolved pixel, then returning to step (a); and
(i) combining the interlaced field with all the solved pixel values of the unsolved pixels to produce a de-interlaced frame.

2. The method of claim 1, wherein the BPPD (brightness profile pattern difference) method performed in step (b) comprises:

computing values diff_fn(x,y-z), diff_fn(x,y+z), diff_fn+1(x,y), Pn(x,y-z), Pn(x,y+z), and Pn+1(x,y), where:

diff_fn(x,y-z) = |fn(x,y-z) - fn-2(x,y-z)|,
diff_fn(x,y+z) = |fn(x,y+z) - fn-2(x,y+z)|,
diff_fn+1(x,y) = |fn+1(x,y) - fn-1(x,y)|,
Pn(x,y-z) = |[fn(x,y-z) - fn(x-z,y-z)] - [fn-2(x,y-z) - fn-2(x-z,y-z)]| + |[fn(x,y-z) - fn(x+z,y-z)] - [fn-2(x,y-z) - fn-2(x+z,y-z)]|,
Pn(x,y+z) = |[fn(x,y+z) - fn(x-z,y+z)] - [fn-2(x,y+z) - fn-2(x-z,y+z)]| + |[fn(x,y+z) - fn(x+z,y+z)] - [fn-2(x,y+z) - fn-2(x+z,y+z)]|, and
Pn+1(x,y) = |[fn+1(x,y) - fn+1(x-z,y)] - [fn-1(x,y) - fn-1(x-z,y)]| + |[fn+1(x,y) - fn+1(x+z,y)] - [fn-1(x,y) - fn-1(x+z,y)]|,

where (x, y) are the coordinates of the unsolved pixel in the interlaced field, z is a preset distance, and fn, fn-1, fn-2, and fn+1 denote the interlaced field, the field before it, the field two before it, and the field after it respectively; and

judging the computed results: if diff_fn(x,y-z), diff_fn(x,y+z), or diff_fn+1(x,y) exceeds a preset first threshold, or Pn(x,y-z), Pn(x,y+z), or Pn+1(x,y) exceeds a preset second threshold, the unsolved pixel lies in a moving region; otherwise the unsolved pixel lies in a static region.

3. The method of claim 1, wherein the line-average method performed in step (d) solves the pixel value fn(x,y) of the unsolved pixel as:

fn(x,y) = [fn(x,y-1) + fn(x,y+1)] / 2,

where (x, y) are the coordinates of the unsolved pixel in the interlaced field and fn denotes the interlaced field.

4. The method of claim 1, wherein the temporal-mean method performed in step (e) solves the pixel value fn(x,y) of the unsolved pixel as:

fn(x,y) = [fn-1(x,y) + fn+1(x,y)] / 2,

where (x, y) are the coordinates of the unsolved pixel in the interlaced field, and fn, fn-1, and fn+1 denote the interlaced field, the field before it, and the field after it respectively.

5. The method of claim 1, wherein the block-based edge detection method performed in step (f) comprises:

computing a min_AD parameter, where:

min_AD = min{ |L - Y| + |Y - U| },

where Y denotes a first block formed by the unsolved pixel and its neighboring pixels in the interlaced field, and L and U denote a second block and a third block formed by pixels of the interlaced field, the second and third blocks having the same size as the first block and lying adjacent to the first block on a common straight line, which may run in any direction; and

judging the computed min_AD parameter: if min_AD is smaller than a preset third threshold, the unsolved pixel lies on an object edge; otherwise the unsolved pixel does not lie on an object edge.

6. The method of claim 5, wherein the block-based directional interpolation method performed in step (g) solves the pixel value fn(x,y) of the unsolved pixel as follows:

(a) judging a parameter k, where k is the size of the first block, the second block, and the third block; if k is even, performing step (b), and if k is odd, performing step (c);

(b) computing the pixel value fn(x,y) as

fn(x,y) = [fn(x + k/2, y-1) + fn(x - k/2, y+1)] / 2,

where (x, y) are the coordinates of the unsolved pixel in the interlaced field and fn denotes the interlaced field; and

(c) computing the pixel value fn(x,y) as

fn(x,y) = [fn(x + (k-1)/2, y-1) + fn(x + (k+1)/2, y-1) + fn(x - (k-1)/2, y+1) + fn(x - (k+1)/2, y+1)] / 4,

where (x, y) are the coordinates of the unsolved pixel in the interlaced field and fn denotes the interlaced field.

7. The method of claim 5, wherein the median interpolation performed in step (h) solves the pixel value fn(x,y) of the unsolved pixel as:

fn(x,y) = median(A, B, C, D),

where (x, y) are the coordinates of the unsolved pixel in the interlaced field, fn denotes the interlaced field, A is the result of computing the unsolved pixel by the line-average method, B is the result of computing the unsolved pixel by the block-based directional interpolation method, C is the value of the pixel at coordinates (x, y) in the field before the interlaced field, and D is the value of the pixel at coordinates (x, y) in the field after the interlaced field.
TW93139457A 2004-12-17 2004-12-17 A method for de-interlacing an interlaced image TWI249354B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW93139457A TWI249354B (en) 2004-12-17 2004-12-17 A method for de-interlacing an interlaced image


Publications (2)

Publication Number Publication Date
TWI249354B true TWI249354B (en) 2006-02-11
TW200623875A TW200623875A (en) 2006-07-01

Family

ID=37429544

Family Applications (1)

Application Number Title Priority Date Filing Date
TW93139457A TWI249354B (en) 2004-12-17 2004-12-17 A method for de-interlacing an interlaced image

Country Status (1)

Country Link
TW (1) TWI249354B (en)

Also Published As

Publication number Publication date
TW200623875A (en) 2006-07-01


Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees