TW201227602A - Method and computer-readable medium for calculating depth of image - Google Patents

Method and computer-readable medium for calculating depth of image

Info

Publication number
TW201227602A
TW201227602A (application TW099145298A)
Authority
TW
Taiwan
Prior art keywords
block
image
value
target
depth
Prior art date
Application number
TW099145298A
Other languages
Chinese (zh)
Inventor
Chen-Jhy Wu
Chien-Ming Huang
Original Assignee
Service & Quality Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Service & Quality Technology Co Ltd filed Critical Service & Quality Technology Co Ltd
Priority to TW099145298A priority Critical patent/TW201227602A/en
Publication of TW201227602A publication Critical patent/TW201227602A/en

Landscapes

  • Image Analysis (AREA)

Abstract

A method for calculating the depth of an image includes: selecting a target block in a target image according to a defined block size, and determining whether the target block is smooth. When the target block is not smooth, a reference block is selected in a reference image according to the defined block size, and the target block is compared with the reference block to determine a characteristic, such as the displacement between the two blocks. A check value corresponding to the displacement is filtered via a threshold: when the check value fails to pass the threshold, the block size is reduced and the target block is reselected according to the reduced block size until the check value passes. The depth of the target block is then determined according to the characteristic whose check value passed the threshold. When the target block is determined to be smooth, a lookup table is consulted instead to obtain the corresponding depth information.

Description

VI. Description of the Invention

[Technical Field]
The present invention relates to a method and system for calculating image depth, and in particular to an image depth calculation method and system for converting two-dimensional images into three-dimensional images.

[Prior Art]
A known technique converts two two-dimensional images, captured from two origins a distance apart, into a three-dimensional stereoscopic image by judging the depth of each object from its displacement between the two two-dimensional images: the larger an object's displacement between the two images, the shallower its depth, and vice versa. Among image-displacement comparison means, pixel matching compares the images one pixel at a time, searching a range of one image for the pixel corresponding to each pixel of the other before calculating the displacement. The higher the image resolution, the more computation the comparison requires, easily placing a heavy load on the computing system.

Another approach is block matching, in which pixels form groups and comparison proceeds one pixel group at a time, effectively reducing the number of comparisons relative to single-pixel matching. However, when the pixel group of one block encompasses objects at different depths in the image, those objects are assigned the same depth merely because they lie in the same block, blurring the converted three-dimensional image. Moreover, with either single-pixel or block matching, when the two-dimensional images contain regions of the same color or similar appearance, the corresponding pixels or pixel groups cannot be identified reliably, so such regions cannot be presented with the correct depth of field.

[Summary of the Invention]
The invention discloses an image depth calculation method for obtaining depth information quickly and correctly, whereby a two-dimensional planar image can be converted into a three-dimensional stereoscopic image. One aspect of the invention provides an image depth calculation method comprising: defining a block size and selecting a target block of a target image according to the block size; judging from the image features of the target block whether it is a uniform block; when the target block is not uniform, selecting a reference block of a reference image according to the defined block size and comparing the image features of the target block and the reference block to obtain an image difference value of the two blocks; screening a detection value derived from the image difference value with a threshold; when the detection value fails the threshold, reducing the block size and reselecting the target block until the detection value passes the screening; and calculating the depth of the target block according to the image difference value whose detection value passed the screening. When the target block is a uniform block, its depth is obtained instead by comparing its image features against a lookup table.

Another aspect of the invention provides a computer-readable recording medium storing a computer-executable program that performs the above image depth calculation method when loaded into a processing unit of a computer. Detailed means of implementing the invention are set forth below.

[Embodiments]
The first figure is a flowchart of an embodiment of the image depth calculation method provided by the invention; for clarity, the following description also refers to the image schematic of the second figure. This embodiment calculates image depth from the disparity between a plurality of two-dimensional planar images. The target image 20 and reference image 24 shown in the second figure are two-dimensional images of the same scene captured by two image capture units simulating the viewing angles of the two human eyes. Because the two image capture units occupy different positions, the resulting target image 20 and reference image 24 exhibit disparity: the larger the disparity of an object, the shallower its depth (the closer it is to the image capture units); the smaller the disparity, the deeper the object (the farther from the units).

A processing unit receiving target image 20 and reference image 24 calculates the disparity between the two images and derives the depth of the target image from it, thereby converting the two-dimensional content into a three-dimensional image with a stereoscopic effect.

It is worth mentioning that, although in this embodiment the pixel data of the images are acquired by image capture units for the processing unit to operate on, implementations are not limited to acquiring image information optically. Those of ordinary skill in the art may also sense a space with other types of sensing elements, for example a temperature sensor sensing temperature variations in the space, or a pressure sensor sensing pressure variations, converting the measurements into quantifiable electrical quantities or values from whose differences the processing unit analyzes the depths of different scenes in the space.

As described in the flow of the first figure, this embodiment first applies known image-analysis techniques to identify feature regions in target image 20, extract an index value for each feature region together with its relative depth information, and build a lookup table. A feature region is a large area of target image 20 with identical or similar appearance, such as the sky or a wall contained in the image; because such a region has uniform content, a common index value, such as luminance or chroma — that is, a pixel value — can be extracted from the whole region, and the depth information of the region can be judged from the spatial relations between feature regions, for example by using the convergence direction of lines in the image to judge relative distance.

The steps further include defining a block size (S103) for block matching between target image 20 and reference image 24, so that depth information can be obtained from their disparity. The block size defined here is a 16×16-pixel matrix (256 pixels). A target block 200 is then selected from target image 20 according to the defined block size (S105).

Next, it is judged from the image features of target block 200 whether it is a uniform block (S107), that is, whether the content of the block is uniform or smooth, such as a single-color, textureless wall. The image features are, for example, the color, texture or color-patch shapes of the content of target block 200. One way to judge uniformity is to compare the luminance and chroma of every pixel in target block 200: if all pixel values have identical or very similar luminance and chroma, the selected target block 200 is a uniform block; if the differences in luminance or chroma are large, it is a non-uniform block, whose content may include different color patches representing different objects, or object edges. This embodiment feeds the block into different processing stages according to whether target block 200 is uniform, to obtain the depth of block 200.

If target block 200 is not a uniform block, the method goes a step further: within a comparison region 242 of reference image 24, multiple comparison blocks (such as 240) are selected according to the block size, and each is compared with target block 200 in terms of image features, to obtain the image difference values between each comparison block and target block 200 as well as a detection value corresponding to those image difference values (S111). Comparison region 242 serves the processing unit in finding, within region 242, the reference block closest to target block 200.
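A rough illustration of the uniformity test of step S107 follows: a block is treated as uniform when its pixel values all fall within a small tolerance. The tolerance value and the plain-Python block layout are assumptions of this sketch, not taken from the patent.

```python
def is_uniform_block(block, tolerance=8):
    """Return True when every pixel value in the block is close to the
    others, i.e. the block covers smooth, featureless content.

    block: 2D list of luminance (or chroma) values.
    tolerance: assumed maximum spread for a "uniform" block.
    """
    values = [v for row in block for v in row]
    return max(values) - min(values) <= tolerance

sky_patch = [[200, 201], [199, 200]]    # nearly constant values -> uniform
object_edge = [[200, 40], [35, 210]]    # strong contrast -> non-uniform
```

A fuller implementation would apply this test to luminance and chroma channels separately, since the text compares both.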
Image features may be compared by performing operations on the pixels of target block 200 and each comparison block. Taking the full search algorithm as an example, the processing unit sequentially selects every comparison block within comparison region 242 according to the block size and performs pixel comparison with target block 200, to find the block in region 242 that most closely matches target block 200. For example, if comparison region 242 spans 66 pixels along both the x-axis and the y-axis, forming a matrix of 4356 pixels, then 2500 comparison blocks can be delimited within region 242 according to the above block size (the origins of adjacent comparison blocks spaced one pixel apart), each compared one by one with target block 200.

Taking target block 200 and comparison block 240 as an example, the processing unit compares the two blocks pixel by pixel, calculates 256 image difference values, and further derives from them a detection value for blocks 200 and 240. The detection value may be the sum of absolute differences (SAD) or the sum of squared differences (SSD) of the image difference values, or the cross-correlation coefficient between the pixels of target block 200 and comparison block 240.

Refer to the schematic of target block 200' and comparison block 240' in the third figure, which concretely illustrates how the detection value is calculated. For ease of explanation, the blocks shown there use a block size of only a 4×4 pixel matrix: target block 200' comprises 16 pixels T1 through T16, and comparison block 240' correspondingly comprises 16 pixels R1 through R16. With SAD or SSD, pixel comparison of blocks 200' and 240' computes the differences between the pixel values of T1 and R1, T2 and R2, and so on through T16 and R16, yielding 16 image difference values. Summing the absolute values of these 16 differences gives the sum of absolute differences; squaring each difference before summing gives the sum of squared differences; either serves as the detection value of blocks 200' and 240'. In another example, when the detection value is calculated with NCC, the image difference values of blocks 200' and 240' are weighted to obtain the cross-correlation coefficient.

The other comparison blocks in comparison region 242 each undergo the same operation with target block 200 to obtain their corresponding detection values.
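The SAD/SSD metrics and the full search over the comparison region can be sketched as below; the flattened block layout, the tiny candidate list and the helper names are illustrative assumptions, not the patent's implementation.

```python
def sad(block_a, block_b):
    # sum of absolute differences between corresponding pixels
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def ssd(block_a, block_b):
    # sum of squared differences between corresponding pixels
    return sum((a - b) ** 2 for a, b in zip(block_a, block_b))

def full_search(target, region_blocks, metric=sad):
    # exhaustively score every candidate block in the comparison region;
    # for SAD/SSD the best match is the *minimum* detection value
    scores = [metric(target, candidate) for candidate in region_blocks]
    best = min(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

T = [10, 12, 10, 12]  # flattened target block (toy size)
candidates = [[9, 13, 9, 13], [10, 12, 10, 12], [0, 0, 0, 0]]
```

With a correlation-based metric such as NCC, the selection in `full_search` would use `max` instead of `min`, matching the extremum rule described in the text.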
To find the comparison block most similar to target block 200, an extreme value is then sought among the calculated detection values (S113), together with the comparison block to which that extreme value corresponds; which extreme is sought depends on the chosen metric. When the sum of absolute differences or the sum of squared differences serves as the detection value, a smaller value means a smaller image difference between the target block and a comparison block — the two are more alike — so the extreme value to select is the minimum of the detection values. Conversely, when the cross-correlation coefficient serves as the detection value, a larger coefficient means a higher degree of linear correlation, so the extreme value to select is the maximum. For instance, with the sum of absolute or squared differences as the detection value, if comparison block 244 of the second figure yields the minimum detection value over the whole comparison region 242 after pixel comparison with target block 200, block 244 is determined to be the reference block corresponding to target block 200.

When target block 200 is delimited by the block size, an overly large block size may cause block 200 to cover content of different depths of field, for example when the selected block happens to contain objects at different depths in target image 20. In that case, even if the detection value of target block 200 and the reference block is the extreme among all comparison blocks of region 242, this only shows that the corresponding comparison block is more similar to target block 200 than the others are; it does not guarantee that the block is truly similar or identical, and an excessive error may remain between them. If content of different depths is lumped into the same block for depth calculation, near and far scenery in the converted three-dimensional image will share the same, erroneous depth of field, blurring the image and degrading the stereoscopic effect.

This embodiment therefore further compares the extreme detection value against a preset threshold to judge whether the extreme value passes the screening (S115). When the extreme value is the minimum of the detection values, it is judged whether it is less than or equal to the threshold; if so, the minimum detection value passes the threshold screening. When the extreme value is the maximum, it is judged whether it is greater than or equal to the threshold; if so, it passes.

If the comparison shows that the detection value of target block 200 and the reference block passes the threshold screening, reference block 244 is indeed the block in comparison region 242 closest to target block 200: the content the two cover is most similar and contains no mixture of depths of field. The image difference between the two — here, the disparity between the two blocks — is then calculated from the coordinates of target block 200 and the reference block, and the depth of target block 200 is calculated (S117) from the distance between the image capture units that captured target image 20 and reference image 24 and the angle at which the focal axes of the capture units converge; the calculated depth is recorded.

Conversely, if the extreme value fails the threshold screening, the image-feature difference between target block 200 and the reference block exceeds the tolerable range, and target block 200 or the reference block may cover content of different depths of field. To locate a corresponding block from which depth can be calculated, the processing unit reduces the block size, for example from 256 pixels down to a 64-pixel matrix (an 8×8 block), and returns to step S105: according to the adjusted block size, a target block 202 of 64 pixels is selected from target image 20, and the comparison and judgment are performed again.

If target block 200 is instead judged at step S107 to be a uniform block, pixel comparison with the reference image cannot discriminate between comparison blocks: a uniform target block matches many equally uniform candidates, so the processing unit cannot judge among the detection values, and the depth of that portion is obtained by consulting the lookup table instead. Since at least one feature region of target image 20 has already been identified by the image-analysis techniques described above and recorded in the lookup table, when target block 200 is identified as uniform, a feature value of target block 200 is obtained, for example from its pixel values (S119).

For instance, when the content covered by target block 200 is part of the sky in target image 20, all 256 pixels of block 200 have identical or very close luminance and chroma, corresponding to identical or nearly identical pixel values, which can serve as the feature value of target block 200. The feature value of block 200 is compared against the index values of the feature regions recorded in the lookup table (S121). For example, the lookup table 30 of the fifth figure, stored in the storage unit, records index values 302 of feature regions 300 A, B and C, such as the pixel value common to each region, together with the depth information 304 of those regions: a depth of 100 for A, "B is covered by A" for B, and "C is covered by B" for C. Regions A, B and C may respectively be, for example, the sky, mountains or roads in target image 20, and the depth information may be a relative depth, that is, a spatial relation between one feature region and other feature regions. The depth information 304 of some feature regions may instead include an absolute depth (a numerical value): for example, if the content of region A is judged from its texture to be a basketball, the absolute depth of A in target image 20 can be determined from the size of a standard basketball relative to the area of region A. After comparing the feature value of target block 200 with each index value 302 of table 30, it is judged whether an index value identical or close to the feature value is found in table 30 (S123).
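A minimal sketch of the table-lookup branch (steps S121 through S129) follows. The list-of-dicts table layout, the closeness tolerance and the concrete index values are assumptions; the depth entries only loosely mirror the fifth-figure example.

```python
lookup_table = [
    {"region": "A", "index": 210, "depth": 100},             # absolute depth
    {"region": "B", "index": 140, "depth": "covered by A"},  # relative depth
    {"region": "C", "index": 90,  "depth": "covered by B"},
]

def depth_from_table(feature_value, table, tolerance=5):
    """Return the depth info of the entry whose index value is identical
    or close to the block's feature value, or None when no entry matches,
    in which case the block would be marked unknown (S129)."""
    for entry in table:
        if abs(entry["index"] - feature_value) <= tolerance:
            return entry["depth"]
    return None
```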
If an identical or close index value is found, the depth information corresponding to that index value is selected (S125) and assigned to target block 200 (S127), so that the selected depth information becomes the depth of target block 200.

The depth information that target block 200 obtains from table 30 may be absolute or relative. Relative depth information concerns the spatial relations between different feature regions; one feature region may carry one or several distinct relative depths, that is, different spatial relations with one or more other feature regions. A target block 200 that obtains relative depth information may therefore correspond to one or several spatial relations at once.

If no identical or close index value is found, so that the depth information of target block 200 cannot be determined, block 200 is marked and recorded as an unknown block (S129).

After a depth has been obtained for a non-uniform block through block comparison and detection-value screening (step S117), or for a uniform block through the lookup table (step S127), or after a uniform block has been marked unknown (step S129), it is judged whether every target block delimited in target image 20 has been compared (S131). Since image comparison generally proceeds in raster order — left to right, top to bottom — it suffices to check whether the most recently compared target block is the bottom-right block of target image 20. If the judgment shows that the target blocks of target image 20 have not all been compared, the next target block is selected (S133) and the flow returns to step S107.

Because every target block 200 belongs to part of at least one feature region of target image 20, once every target block of the whole image has undergone block comparison against the reference image or lookup in table 30, most feature regions of target image 20 obtain a corresponding absolute depth through their target blocks 200. At the same time, the spatial relations between different feature regions can be judged from the depths and edge shapes of adjacent target blocks 200, so that even feature regions for which table 30 provides no depth information acquire relative depth information. A feature region that has obtained only a relative depth can then have its absolute depth inferred from its spatial relations with neighboring feature regions, yielding depth information for the entire target image 20.

Refer to the depth-table schematics of the sixth A and sixth B figures. As shown in the sixth A figure, depth table 60 records the depth information of each feature region 600 of the target image, including absolute depth 602 and relative depth 604, each obtained by comparing the region's target blocks in turn against the reference image or by consulting table 30. For example, regions A, B and C obtain, through table lookups by their respective target blocks, an absolute depth of 100 units for A, the relative depth "B is covered by A" for B, and "C is covered by B" for C; region C additionally obtains an absolute depth, calculated as 95 units, through block feature comparison and detection-value screening. Region E obtains, from the block comparison of its blocks with other blocks, the relative depth "E touches D".

Once all target blocks 200 have been compared, the processing unit can infer the absolute depths of the remaining feature regions from the depth information already recorded in depth table 60. As shown in the sixth B figure, the absolute depth of feature region D can be derived from its spatial relations with feature regions whose absolute depths are known, and the absolute depth of region E, which touches D, is likewise inferred to be 95 units.

In this way — comparing image differences for non-uniform blocks, and comparing uniform blocks against the preset lookup table — the depth of each block of target image 20 is obtained. The coordinates and depth information of each target block 200 are recorded in the storage unit, and the next target block is selected according to the block size, until the depth of the entire target image 20 has been obtained (as in the depth table of the sixth B figure) and the target image can be converted into a three-dimensional image.

It is worth noting that when a target block is identified as a uniform block, this embodiment further provides a means of reducing table lookups in order to raise the chance of obtaining depth information quickly; see the flowchart of the seventh figure. After a block is identified as uniform, the processing unit may still determine its feature value from the image features of the block (S701).
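The inference step that fills in missing absolute depths from relative relations might look like the following sketch. The rule that a related region simply inherits its neighbour's absolute depth is a simplifying assumption of mine; the patent's sixth-figure example infers E = 95 units through its relation to D.

```python
def resolve_depths(absolute, relations):
    """absolute:  {region: absolute_depth} for regions already resolved.
    relations: (unknown, known) pairs such as ("E", "D") for "E touches D".
    Repeatedly propagate known depths until no more regions resolve."""
    changed = True
    while changed:
        changed = False
        for unknown, known in relations:
            if unknown not in absolute and known in absolute:
                absolute[unknown] = absolute[known]  # assumed: inherit depth
                changed = True
    return absolute
```

The loop runs until a fixed point, so chains of relations (E touches D, D related to C) resolve even when listed out of order.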

The processing unit then compares the feature value of the target block with the feature values of the other blocks that are adjacent to the target block and have already obtained depth information (S703), and determines whether the feature value of the target block matches that of any such adjacent block (S705). If it does, the depth obtained by that adjacent block can be assigned directly as the depth of the target block (S707). If, after comparison with the adjacent blocks, no matching feature value is found, or the adjacent blocks are not uniform blocks and therefore have no feature values, the lookup-table comparison step is carried out instead (S709); that is, the procedure from step S119 of FIG. 1 onward is continued, and the corresponding index value and its depth information are sought in the lookup table.
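The decision flow S701-S709 can be condensed into a small helper. A hedged sketch: the representation of a feature value and of the lookup table are assumptions made here, and blocks that are not uniform are represented by a `None` feature value.

```python
def depth_of_uniform_block(feature, neighbors, lookup_table):
    """S701-S709 in miniature.

    feature      -- feature value of the uniform target block (S701)
    neighbors    -- (feature, depth) pairs of adjacent blocks whose depth
                    is already known; feature is None for non-uniform ones
    lookup_table -- maps feature value -> depth information (the table
                    consulted from step S119 onward)
    """
    for n_feature, n_depth in neighbors:                     # S703: compare neighbors
        if n_feature is not None and n_feature == feature:   # S705: match found?
            return n_depth                                   # S707: reuse its depth
    return lookup_table.get(feature)                         # S709: fall back to table
```

If neither a matching neighbor nor a table entry exists, the sketch returns `None`, leaving the block as an unknown block to be resolved later from its spatial relationships.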

For a concrete example, refer to the schematic diagram of the target image shown in FIG. 8. Suppose the resolution of the target image is 1024×768 pixels. If target blocks are selected with a 16×16 block size, the target image 20 can be divided evenly into 3072 blocks, each row containing 64 blocks (64 columns), for a total of 48 rows.

Image comparison is commonly carried out in raster order: it starts from the top-left block of the target image 20 and proceeds block by block to the right until the first row is finished, then continues from the leftmost block of the next row and again proceeds to the right until the second row is finished, then moves on to the following row, and so on.

Therefore, when the target block 204 located at column N, row M of FIG. 8 is judged to be a uniform block, the adjacent blocks that have already obtained depth information are its left block (column N-1, row M), upper-left block (column N-1, row M-1), upper block (column N, row M-1) and upper-right block (column N+1, row M-1). The information of every block whose depth has been calculated through comparison is recorded in the storage unit, including the block's coordinates, its corresponding feature value and its depth information.
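The partitioning and traversal just described are easy to make concrete. The following is a sketch under the stated numbers (a 1024×768 image and 16×16 blocks); the helper names are invented for the example:

```python
def raster_blocks(width, height, block_size):
    """Yield (row, col) block coordinates in raster order:
    left to right within a row, rows from top to bottom."""
    cols, rows = width // block_size, height // block_size
    for m in range(rows):
        for n in range(cols):
            yield m, n

def known_neighbors(m, n, cols):
    """Blocks already visited in raster order around block (m, n):
    left, upper-left, upper and upper-right, clipped at the borders."""
    candidates = [(m, n - 1), (m - 1, n - 1), (m - 1, n), (m - 1, n + 1)]
    return [(r, c) for r, c in candidates if r >= 0 and 0 <= c < cols]
```

For 1024×768 with 16×16 blocks this yields 48 rows of 64 blocks, i.e. 3072 blocks, matching the example above; for a block in the interior of the image, exactly the four neighbors named in the text have known depths when it is reached.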

Since the adjacent blocks surrounding a uniform block are themselves very likely to be uniform blocks, if the same feature value can first be found among the adjacent blocks, the time of comparing the feature value of target block 204 one by one against the numerous index values in the lookup table is saved, which speeds up the acquisition of depth information for target block 204. Therefore, after the feature value of the target block is determined, the processing unit reads, according to the coordinates of the target block, the feature values of its adjacent blocks from the storage unit and compares them with the feature value of the target block; when a matching feature value is found, the depth information of that adjacent block is read and applied to target block 204 as its depth information. For example, the target block 204 of FIG. 8 is compared with the feature values of the adjacent blocks L, LT, T and RT; when its feature value is judged to match that of the left block L, the depth information of block L recorded in the storage unit is read and assigned as the depth of target block 204. Conversely, if no matching feature value is found after comparison with all the adjacent blocks, the lookup-table comparison procedure is still carried out, and the depth information corresponding to target block 204 is found from the lookup table.

In another embodiment, if the disparity direction between the target image and the reference image can be determined in advance, the blocks used for comparison can be selected only along the disparity direction, and the blocks in the opposite direction can be skipped, to raise computational efficiency. For example, if the target image is predefined as the left-view image relative to the reference image and the reference image is the right-view image, then within the reference image the coordinates of the reference block corresponding to the target block necessarily lie to the right of the coordinates of the target block. The processing unit therefore selects, within the comparison area, only the comparison blocks to the right of the target block's coordinates for comparison, which reduces unnecessary computation and time.

From the description of the above embodiments it can be seen that the image depth calculation method provided by the present invention frames a plurality of pixels at a time, with a given block size, in the target image and the reference image; when the block comparison is implemented in hardware, it is equivalent to comparing the pixel values of all the pixels in parallel through multiple processing units.
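The whole matching loop (block comparison, threshold filtering, block-size reduction, and the one-sided search used when the disparity direction is known) can be sketched together. The sum of absolute differences (SAD) as the detection value, halving as the size-reduction rule, and the non-negative offset range are assumptions chosen for illustration; the text above leaves these concrete choices open.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized pixel blocks
    (lists of rows), one common choice of 'detection value'."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def crop(image, y, x, size):
    """Cut a size x size block out of a row-major image."""
    return [row[x:x + size] for row in image[y:y + size]]

def block_depth(target, reference, y, x, size, max_disp, threshold, min_size=4):
    """Find the horizontal offset of the best-matching reference block.

    Only offsets >= 0 are tried, i.e. the reference block is assumed to
    lie to the right of the target block, as in the left-view/right-view
    example above. If even the best match fails the threshold, the block
    size is halved and the search repeated (the size-reduction step).
    Returns (offset, block_size) on success, None otherwise.
    """
    while size >= min_size:
        tgt = crop(target, y, x, size)
        best = min(range(max_disp + 1),
                   key=lambda d: sad(tgt, crop(reference, y, x + d, size)))
        if sad(tgt, crop(reference, y, x + best, size)) <= threshold:
            return best, size          # detection value passed the filter
        size //= 2                     # reduce the block size and retry
    return None
```

The returned offset is the disparity from which the depth of the block would then be computed, a larger disparity corresponding to a smaller depth for a rectified stereo pair.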
The method further uses the threshold to filter the detection value of the target block against its corresponding reference block and, by reducing the block size according to the threshold-filtering result, corrects the problem that a single block comparison may span picture content at different depths and thereby blur the stereoscopic image. Compared with single-pixel point matching, the method of the present invention saves a large amount of computation time while still keeping the stereoscopic image sharp.

In addition, besides the pixel comparison between the target image and the reference image, the means by which the present invention obtains image depth further provides a pre-established lookup table for flat, uniform blocks, so that such blocks can obtain their corresponding depth information from the table; this overcomes the problem that regions of identical color or identical brightness cannot obtain depth information through pixel comparison alone.

The elements and steps recited in the above embodiments are given only to illustrate the spirit of the present invention and the scope of the claimed protection; any minor modification or change made while following the spirit of the present invention and the technical means it discloses likewise falls within the scope protected by the present invention.

[Brief Description of the Drawings]
FIG. 1 is a flowchart of an embodiment of the image depth calculation method provided by the present invention;
FIG. 2 is a schematic diagram of the target image and the reference image of the method embodiment provided by the present invention;
FIG. 3 is a schematic diagram of the target block and the comparison blocks of the method embodiment provided by the present invention;
FIG. 4 is a schematic diagram of the target image and the adjusted block size of the method embodiment provided by the present invention;
FIG. 5 is a schematic diagram of the lookup table provided by the present invention;
FIGS. 6A and 6B are schematic diagrams of the depth table of the target image provided by the present invention;
FIG. 7 is a flowchart of the uniform-block comparison of another embodiment of the image depth calculation method provided by the present invention; and
FIG. 8 is a schematic diagram of the target block and its adjacent blocks provided by the present invention.

[Description of Main Reference Numerals]
20 target image; 200, 200', 202, 204 target block; 24 reference image; 240, 240', 244 comparison block; 242 comparison area; 30 lookup table; 300 featured region; 302 index value; 304 depth information; 60 depth table; 600 featured region; 602 absolute depth; 604 relative depth; T1-T16 pixels; R1-R16 pixels; L, LT, T, RT adjacent blocks; S101-S135 process steps; S701-S709 process steps


Claims (17)

201227602
VII. Claims:

1. An image depth calculation method, comprising:
determining a block size;
selecting a target block in a target image according to the block size;
determining, according to an image feature of the target block, whether the target block is a uniform block;
when the target block is not a uniform block, selecting a reference block in a reference image according to the block size, and performing image-feature comparison on the target block and the reference block to obtain an image difference value of the target block and the reference block;
filtering a detection value corresponding to the image difference value with a threshold, and when the detection value fails the filtering, reducing the block size to re-select the target block until the detection value passes the filtering;
calculating the depth of the target block according to the image difference value corresponding to the detection value; and
when the target block is a uniform block, comparing the image feature of the target block against a lookup table to obtain the depth corresponding to the target block.

2. The image depth calculation method of claim 1, wherein the step of selecting the reference block according to the block size and performing image-feature comparison on the target block and the reference block to obtain the image difference value comprises:
selecting the reference block from a comparison area in the reference image according to the block size.

3. The image depth calculation method of claim 2, wherein the step of selecting the reference block according to the block size and performing image-feature comparison on the target block and the reference block to obtain the image difference value comprises:
selecting a plurality of comparison blocks from the comparison area according to the block size, one of the comparison blocks being the reference block; and
performing image-feature comparison between each of the comparison blocks and the target block, and obtaining the corresponding image difference values and the detection value corresponding to each of the image difference values;
wherein the detection value corresponding to the reference block is an extreme value among the detection values corresponding to the comparison blocks.

4. The image depth calculation method of claim 3, wherein the step of filtering the detection value with the threshold comprises:
determining whether the detection value corresponding to the reference block is less than or equal to the threshold, and when the detection value is neither less than nor equal to the threshold, reducing the block size to re-select the target block until the detection value is less than or equal to the threshold;
wherein the extreme value is the minimum of the detection values corresponding to the comparison blocks.

5. The image depth calculation method of claim 3, wherein the step of filtering the detection value with the threshold comprises:
determining whether the detection value corresponding to the reference block is greater than or equal to the threshold, and when the detection value is neither greater than nor equal to the threshold, reducing the block size to re-select the target block until the detection value is greater than or equal to the threshold;
wherein the extreme value is the maximum of the detection values corresponding to the comparison blocks.

6. The image depth calculation method of claim 1, further comprising:
pre-establishing the lookup table according to a plurality of featured regions of the target image, the lookup table comprising an index value and depth information for each featured region;
wherein comparing the image feature of the target block against the lookup table comprises comparing the image feature of the target block with the index values, and when a matching index value is found, taking the depth information corresponding to the matching index value as the depth of the target block.

7. The image depth calculation method of claim 6, wherein before the step of comparing the image feature of the target block against the lookup table to obtain the depth corresponding to the target block, the method further comprises:
searching for one or more adjacent blocks that are adjacent to the target block and have already obtained depth, and performing the image-feature comparison between each of the adjacent blocks and the target block;
when an adjacent block matches the image feature of the target block, selecting the depth of that adjacent block as the depth of the target block; and
when no adjacent block matches the image feature of the target block, performing the step of comparing against the lookup table.

8. The image depth calculation method of claim 6, further comprising:
after every target block of the target image has obtained a depth or has been determined to be an unknown block, deriving the depth of each unknown block from the spatial relationships between the target blocks that have obtained depths and the unknown blocks.

9. An image depth calculation method, comprising:
determining a block size;
performing image-feature comparison on a target block of a target image and a reference block of a reference image according to the block size, to obtain an image difference value of the target block and the reference block;
filtering a detection value corresponding to the image difference value with a threshold, and when the detection value fails the filtering, reducing the block size and performing the image-feature comparison on the target image and the reference image until the detection value passes the filtering of the threshold; and
performing image-feature comparison on the target image according to the adjusted block size to obtain the depth of the target block.

10. The image depth calculation method of claim 9, wherein the step of performing image-feature comparison on the target block and the reference block according to the block size comprises:
selecting the reference block from a comparison area in the reference image according to the block size.

11. The image depth calculation method of claim 10, wherein the step of performing image-feature comparison on the target block according to the block size to obtain the image difference value comprises:
selecting a plurality of comparison blocks from the comparison area according to the block size, one of the comparison blocks being the reference block; and
performing image-feature comparison between each of the comparison blocks and the target block, and obtaining the corresponding image difference values and the detection value corresponding to each of the image difference values;
wherein the detection value corresponding to the reference block is an extreme value among the detection values corresponding to the comparison blocks.

12. The image depth calculation method of claim 11, wherein the step of filtering the detection value corresponding to the image difference value with the threshold comprises:
determining whether the detection value corresponding to the reference block is less than or equal to the threshold, and when the detection value is neither less than nor equal to the threshold, reducing the block size to re-select the target block until the detection value is less than or equal to the threshold;
wherein the extreme value is the minimum of the detection values corresponding to the comparison blocks.

13. The image depth calculation method of claim 11, wherein the step of filtering the detection value corresponding to the image difference value with the threshold comprises:
determining whether the detection value corresponding to the reference block is greater than or equal to the threshold, and when the detection value is neither greater than nor equal to the threshold, reducing the block size to re-select the target block until the detection value is greater than or equal to the threshold;
wherein the extreme value is the maximum of the detection values corresponding to the comparison blocks.

14. An image depth calculation method, comprising:
determining a block size;
selecting a target block in a target image according to the block size, and selecting a plurality of comparison blocks from a comparison area of a reference image;
performing image-feature comparison between each of the comparison blocks and the target block, to obtain an image difference value of the target block and each of the comparison blocks and a detection value corresponding to each of the image difference values;
when the extreme value among the detection values fails the filtering of a threshold, reducing the block size and performing image-feature comparison on the target image and the reference image until the extreme value of the detection values passes the filtering of the threshold; and
performing image-feature comparison on the target image and the reference image according to the adjusted block size to obtain the depth of the target block.

15. The image depth calculation method of claim 14, wherein the step of performing image-feature comparison on the target image and the reference image according to the adjusted block size to obtain the depth of the target block comprises:
performing image-feature comparison on the target block of the target image and a reference block of the reference image according to the adjusted block size, to obtain the corresponding image difference value; and
calculating the depth of the target block according to the image difference value;
wherein the detection value corresponding to the reference block is the extreme value among the detection values corresponding to the comparison blocks.

16. A computer-readable recording medium storing a computer-executable program for calculating image depth, the program, when loaded into a processor of a computer, performing the method of claim 1.

17. A computer-readable recording medium storing a computer-executable program for calculating image depth, the program, when loaded into a processor of a computer, performing the method of claim 9.
TW099145298A 2010-12-22 2010-12-22 Method and computer-readable medium for calculating depth of image TW201227602A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW099145298A TW201227602A (en) 2010-12-22 2010-12-22 Method and computer-readable medium for calculating depth of image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW099145298A TW201227602A (en) 2010-12-22 2010-12-22 Method and computer-readable medium for calculating depth of image

Publications (1)

Publication Number Publication Date
TW201227602A true TW201227602A (en) 2012-07-01

Family

ID=46933335

Family Applications (1)

Application Number Title Priority Date Filing Date
TW099145298A TW201227602A (en) 2010-12-22 2010-12-22 Method and computer-readable medium for calculating depth of image

Country Status (1)

Country Link
TW (1) TW201227602A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9323782B2 2013-07-16 2016-04-26 Novatek Microelectronics Corp. Matching search method and system
CN105025193A * 2014-04-29 2015-11-04 钰创科技股份有限公司 Portable stereo scanner and method for generating stereo scanning result of corresponding object
TWI589149B * 2014-04-29 2017-06-21 鈺立微電子股份有限公司 Portable three-dimensional scanner and method of generating a three-dimensional scan result corresponding to an object
US9955141B2 2014-04-29 2018-04-24 Eys3D Microelectronics, Co. Portable three-dimensional scanner and method of generating a three-dimensional scan result corresponding to an object
CN105025193B 2014-04-29 2020-02-07 钰立微电子股份有限公司 Portable stereo scanner and method for generating stereo scanning result of corresponding object
CN105282375A * 2014-07-24 2016-01-27 钰创科技股份有限公司 Attached Stereo Scanning Module
CN105282375B 2014-07-24 2019-12-31 钰立微电子股份有限公司 Attached stereo scanning module
CN106231240A * 2015-06-02 2016-12-14 钰立微电子股份有限公司 Monitoring system and operational approach thereof
CN106231240B 2015-06-02 2020-03-10 钰立微电子股份有限公司 Monitoring system and method of operation thereof
CN108765480A * 2017-04-10 2018-11-06 钰立微电子股份有限公司 Advanced treatment device
TWI622022B * 2017-07-13 2018-04-21 鴻海精密工業股份有限公司 Depth calculating method and device
