TW200937350A - Three-dimensional finger motion analysis system and method - Google Patents

Three-dimensional finger motion analysis system and method

Info

Publication number
TW200937350A
TW200937350A TW97106361A
Authority
TW
Taiwan
Prior art keywords
dimensional
points
finger
image
point
Prior art date
Application number
TW97106361A
Other languages
Chinese (zh)
Other versions
TWI346311B (en)
Inventor
Yung-Nien Sun
Cheung-Wen Chang
Yen-Ting Chen
Sheng-Pin Ho
Original Assignee
Univ Nat Cheng Kung
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Nat Cheng Kung
Priority to TW097106361A
Publication of TW200937350A
Application granted
Publication of TWI346311B

Landscapes

  • Image Analysis (AREA)

Abstract

A system and a method for three-dimensional (3D) finger motion analysis are disclosed. The system comprises a plurality of markers, a plurality of cameras, a 3D virtual hand model, a module for extracting image features, a module for initializing model parameters, a module for predicting the 3D information of the markers, and a module for predicting the 3D model parameters. The method comprises: performing a step of capturing a series of images; performing a step of extracting image features; performing a step of initializing the model parameters; performing a step of detecting the markers and establishing their correspondences; performing a step of tracking the markers; performing a step of reconstructing the 3D information of the markers; and performing a step of predicting the 3D model parameters.

Description

Hand-motion measurement has been discussed extensively in the prior art. A conventional X-ray image provides only two-dimensional projection information at the moment of exposure, and all such experiments are static: the patient must hold several fixed functional postures during the examination, which is both inaccurate and time-consuming. Conventional fluoroscopy is dynamic and operates in real time, but the information it provides is still confined to a two-dimensional plane. Moreover, subjects examined with either of these techniques are exposed to the risk of X-ray radiation. A conventional motion analysis system can track the three-dimensional positions of markers placed on the skin of the patient's hand, but the information such markers provide is limited: it cannot represent the true degrees of freedom of the fingers, the relative displacement between adjacent phalanges, or the local joint coordinate axes of each phalanx.

SUMMARY OF THE INVENTION

Therefore, there is a need for a three-dimensional finger image motion analysis system and method that provides three-dimensional information about hand motion and thereby overcomes the problems of the prior art.

One aspect of the present invention is to provide a three-dimensional finger image motion analysis system and method that measures the various motion parameters of the individual fingers, thereby giving physicians more information for diagnosis and a basis for evaluating a patient's degree of rehabilitation.

Another aspect of the present invention is to provide a three-dimensional finger image motion analysis system and method that uses multiple cameras to reduce the occlusion of the fingers and to increase the accuracy and practicality of measuring the finger motion parameters.

According to an embodiment of the present invention, a three-dimensional finger image motion analysis system is provided. The system comprises at least: a plurality of markers, a plurality of cameras, a virtual three-dimensional finger model, an image feature extraction module, a model parameter initialization module, a 3D marker information prediction module, and a 3D model parameter prediction module. The markers are placed on a target hand. The cameras are placed at different positions near the target hand and have different views; they capture images of the target hand at a sequence of time points, so that a plurality of frame groups is obtained. Each frame group corresponds to one time point and contains a plurality of frames, one from each camera.

The virtual three-dimensional finger model is used to simulate the motion of the fingers of the target hand. The image feature extraction module extracts a plurality of image features from every frame of every frame group according to Cr-chrominance information, the image features including at least the foreground region and the contour information of each finger, and the foreground region of each marker, in each frame. The model parameter initialization module initializes a plurality of position parameter sets and a plurality of direction parameter sets of the virtual three-dimensional finger model at the first time point according to the image features, and obtains the segment length of every phalanx of every finger.

The 3D marker information prediction module comprises at least a marker detection and correspondence module, a marker tracking module, and a marker 3D reconstruction module. The marker detection and correspondence module establishes the correspondences of the markers among the multiple views of each frame group according to the position parameter sets and the direction parameter sets, using the epipolar constraint and the segment length constraint. The marker tracking module tracks the position of the markers in every frame of every frame group using a modified mean-shift algorithm. The marker 3D reconstruction module reconstructs the three-dimensional information of the markers of every frame using a least-squares method, so that the 3D marker information of each frame group is obtained. The 3D model parameter prediction module defines the joint coordinate systems and the model parameters of the virtual three-dimensional finger model at each time point according to the 3D marker information of each frame group.

According to another embodiment of the present invention, a three-dimensional finger image motion analysis method is provided. In this method, preparatory steps are first performed to place a plurality of markers on a target hand, to place a plurality of cameras at different positions near the target hand, and to provide a virtual three-dimensional finger model. Next, a step of capturing an image sequence is performed, in which images of the target hand are captured at a sequence of time points, so that a plurality of frame groups is obtained. Then, a step of extracting image features is performed, in which a plurality of image features of every frame of every frame group is extracted according to the Cr-chrominance information. Next, a step of initializing the model parameters is performed, in which a plurality of position parameter sets and a plurality of direction parameter sets of the virtual three-dimensional finger model at the first time point are initialized according to the image features, and the segment length of every phalanx of every finger is obtained. Next, a step of predicting the 3D marker information is performed: the correspondences of the markers among the multiple views of each frame group are established according to the position and direction parameter sets, using the epipolar constraint and the segment length constraint; the marker positions in every frame of every frame group are tracked with a modified mean-shift algorithm; the three-dimensional information of the markers of every frame group is reconstructed with a least-squares method; and erroneously tracked marker positions are compensated. Finally, a step of predicting the 3D model parameters is performed, in which the joint coordinate systems and the model parameters of the virtual three-dimensional finger model at each time point are defined according to the 3D marker information of each frame group; this step further uses a particle filter that integrates the 3D marker information of each frame group when estimating the model parameters of the virtual three-dimensional finger model.

DESCRIPTION OF THE EMBODIMENTS

The embodiments of the present invention use a tracking system based on a three-dimensional virtual model and integrate the marker information with information from the images to compensate for the limited information provided by skin markers, so as to represent the motion of all the fingers as well as the interactions between fingers. When the hand is moving, the fingers are easily occluded because of different hand postures and different camera angles; that is, one finger may be blocked by another and fail to be captured. To reduce such occlusion, the present invention uses a tracking system with a plurality of cameras (for example, four). However, to obtain the three-dimensional information of an object, a stereo vision system with multiple cameras usually has to overcome the correspondence problem among the multiple views and the problem of reconstructing the three-dimensional information. To overcome these problems, the present invention applies the epipolar

constraint and spatial geometric constraints within the vision techniques to solve the correspondence problem, and proposes a reconstruction method that automatically selects the views. Furthermore, to increase the stability of the tracking results, the present invention provides a marker tracking method and a model parameter prediction method, so that the predicted finger model parameters better approximate the motion of the real hand bones.

Please refer to FIG. 1, which is a block diagram of a three-dimensional finger image motion analysis system according to an embodiment of the present invention. The system of this embodiment comprises at least: a plurality of markers 22, 24 and 26; a plurality of cameras 20 (for example, four); a virtual three-dimensional finger model 30; an image feature extraction module 40; a model parameter initialization module 50; a 3D marker information prediction module 60; and a 3D model parameter prediction module 70. The image feature extraction module 40, the model parameter initialization module 50, the 3D marker information prediction module 60, and the 3D model parameter prediction module 70 are installed on a computer 80 (for example, a personal computer). Since the hardware of this system requires only at least two cameras, one personal computer, and a few markers, it has the advantages of low cost and a small footprint.

Please refer to FIG. 1 and FIG. 2, where FIG. 2 illustrates the relationship between the markers and the bones according to an embodiment of the present invention. This embodiment provides three-dimensional information about the motion of a target hand 10, which has a palm 12 and a plurality of fingers 14. Each finger 14 has at least one phalanx 28 (for example, three), and the two ends of each phalanx 28, ordered toward the palm 12, are a first end and a second end. A plurality of markers 22 (for example, three) is placed on the palm 12, preferably at positions where the palm skin slides relatively little; two of the lines connecting the three markers 22 must be perpendicular to each other, and they are used to represent the coordinate system of the palm 12. The first end of every phalanx 28 of every finger 14 carries a marker 26, and the second end carries a marker 24. Markers 24 and 26
are used to represent the spatial motion of a bone segment. They are attached along the central axis of the finger 14 (as shown in FIG. 2) and placed above each phalanx, in principle about 5 to 6 mm from the joint center, so that they stay away from the region near the joint where the skin slides most, and so that the two markers representing one bone segment keep a certain distance from each other. In this way, joint motion produces less skin sliding and displacement, which improves the reliability of the marker-based measurement of hand motion parameters and makes it easier for the algorithm of this embodiment to compensate.

In this embodiment, four cameras 20 are placed at different positions near the target hand 10 and roughly equidistant from it. The four cameras 20 capture images of the target hand at a sequence of time points, so that a plurality of frame groups is obtained. The frame groups correspond to the time points, and each frame group has a plurality of frames (for example, four), one per camera. Please refer to FIG. 3, which shows the frames captured by the cameras at one time point according to an embodiment of the present invention.

The virtual three-dimensional finger model 30, the image feature extraction module 40, the model parameter initialization module 50, the 3D marker information prediction module 60, and the 3D model parameter prediction module 70 of this embodiment are described below in turn.

Virtual three-dimensional finger model 30

The virtual three-dimensional finger model 30 is used to simulate the motion of the fingers 14 of the target hand 10. Human fingers are structures with a high degree of freedom (DOF), and every joint of the hand has a different number of degrees of freedom. Please refer to FIG. 4A, which illustrates the ball-and-socket joint adopted in the embodiment of the present invention. Human joints fall roughly into three classes: immovable joints, slightly movable joints, and freely movable joints. Most joints are freely movable, and these can be further divided into five types: hinge joints, pivot joints, ball-and-socket joints, gliding joints, and saddle-shaped joints. Among them, the ball-and-socket joint has the greatest freedom: it allows three rotations and three translations along the x-, y- and z-axes. This embodiment therefore models the finger joints as ball-and-socket joints. As shown in FIG. 4A, a ball-and-socket joint is formed by the spherical surface (the joint head) of one bone and the concave surface (the joint socket) of another bone.

Please refer to FIG. 4B, which illustrates the virtual three-dimensional finger model of an embodiment of the present invention. The virtual hand model 30 is composed of cylinders, spheres, and hemispheres. Because the human finger bones are independent of each other, a relative displacement is still produced between adjacent phalanges when a finger bends; this embodiment therefore gives every phalanx its own endpoints to reflect the displacement between bone segments. As shown in FIG. 4B, the virtual three-dimensional finger model 30 preferably divides each finger into three phalanges 28a, 28b and 28c, each modeled by a cylinder, while each joint is modeled by a sphere and a hemisphere, imitating the structure of the ball-and-socket joint described above. In the virtual three-dimensional finger model 30, each finger has three joints, and each joint has six degrees of freedom. As for the palm, this embodiment does not treat it as a single rigid body; instead, the influence of palm motion on the fingers is integrated into six additional degrees of freedom, intended to capture the variation, caused by the palm motion, of the global transform from the finger-root positions to the world coordinates. The virtual three-dimensional finger model 30 therefore has 120 degrees of freedom in total. Although the model has a very high number of degrees of freedom, when actually tracking the parameters this embodiment takes the natural constraints of human fingers into account and does not attempt to predict all of them.

The model parameters of the virtual three-dimensional finger model 30 are described below.

Please refer to FIG. 4C, which illustrates the joint coordinate systems of the virtual three-dimensional finger model of an embodiment of the present invention. The joint coordinate systems of the model must be defined first. As shown in FIG. 4C, the coordinate systems of the whole system consist of the world coordinate system 1, the hand (palm) coordinate system 2, and the joint coordinate systems 3, 4 and 5 of the individual phalanges.

Please refer to FIG. 4D, which illustrates the model parameters of the virtual three-dimensional finger model of an embodiment of the present invention. After the joint coordinate systems have been defined, the model parameters of the virtual three-dimensional finger model 30 can be defined as follows:

(a) theta1x, theta1y, theta1z, t1x, t1y, t1z: six parameters for the rotation about and translation along the x-, y- and z-axes from the world coordinate system 1 to the palm coordinate system 2;
(b) theta2x, theta2y, theta2z, t2x, t2y, t2z: six parameters for the rotation and translation from the palm coordinate system 2 to the joint coordinate system 3;
(c) theta3x, theta3y, theta3z, t3x, t3y, t3z: six parameters for the rotation and translation from the joint coordinate system 3 to the joint coordinate system 4;
(d) theta4x, theta4y, theta4z, t4x, t4y, t4z: six parameters for the rotation and translation from the joint coordinate system 4 to the joint coordinate system 5.
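To make this chain of parameters concrete, the following is a minimal sketch that composes the per-joint rotations and translations into 4x4 homogeneous transforms, one straightforward way to realize the kinematic chain described above; the function and variable names, and the chosen rotation order, are illustrative rather than taken from the patent:

    import numpy as np

    def transform(theta_x, theta_y, theta_z, tx, ty, tz):
        """4x4 homogeneous transform: rotations about x, y, z, then a translation."""
        cx, sx = np.cos(theta_x), np.sin(theta_x)
        cy, sy = np.cos(theta_y), np.sin(theta_y)
        cz, sz = np.cos(theta_z), np.sin(theta_z)
        rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        t = np.eye(4)
        t[:3, :3] = rz @ ry @ rx
        t[:3, 3] = (tx, ty, tz)
        return t

    def finger_chain(params):
        """Compose world->palm->joint3->joint4->joint5 transforms for one finger.

        params is a list of four 6-tuples, one per parameter set (a)-(d) above.
        Returns the frame of each joint expressed in world coordinates.
        """
        frames, current = [], np.eye(4)
        for p in params:
            current = current @ transform(*p)
            frames.append(current)
        return frames

Each returned frame gives the pose of one joint coordinate system in the world, so marker positions defined in a local frame can be mapped to world coordinates by a single matrix product.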
Image feature extraction module 40

The image feature extraction module 40 extracts a plurality of image features from every frame of every frame group captured by the cameras 20 (as shown in FIG. 3), mainly according to the Cr-chrominance information. The image features include at least the foreground region of every finger, the foreground region of every marker, and the contour information of every finger in the frame. The extracted image features assist the subsequent prediction of the 3D marker information and of the model parameters.

The finger foreground usually refers to the skin-colored region, which occupies a large portion of the frame. If the finger foreground were segmented in the RGB color space, lighting variations would easily prevent the skin color from being segmented completely. This embodiment therefore adopts a chrominance-based color space (such as YUV, YCbCr or LMS), in which the skin-colored portion can easily be separated along a single dimension. The colors of the markers 22, 24 and 26 are chosen to have a hue close to skin color but a high saturation, so that the markers remain distinguishable from ordinary skin in the chrominance distribution. To reduce the influence of lighting, the image feature extraction module 40 converts the RGB color space into the YCbCr (REC 601-2, CCIR 601) color space, where Y represents luminance and Cb and Cr represent chrominance, as defined in Equation (1):

    Y  =  (77/256) R + (150/256) G +  (29/256) B
    Cb = -(44/256) R -  (87/256) G + (131/256) B + 128        (1)
    Cr = (131/256) R - (110/256) G -  (21/256) B + 128

First, the image feature extraction module 40 converts the image of the whole frame (as shown in FIG. 3) into the YCbCr space. Observing the distribution of the skin-colored regions in the YCbCr space shows that the skin pixels are mostly concentrated within a certain range of the CbCr plane, so the module further extracts the Cr-chrominance component. Meanwhile, to make the skin appearance of the images captured by the different cameras 20 consistent, which helps the stability of the feature extraction, the image feature extraction module 40 normalizes the Cr images of the four frames captured by the four cameras so that their Cr distributions agree. The normalization formula is known to those of ordinary skill in the art and is not repeated here.

Then, the distribution range of the skin color is found from the Cr histogram. After the two peaks of the Cr histogram have been located, an appropriate threshold is applied to the chrominance image to segment the finger foreground and the marker foreground, as shown in FIG. 5A and FIG. 5B, which respectively show the segmented finger foreground and the segmented marker foreground of an embodiment of the present invention.
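A minimal sketch of this Cr-based segmentation is given below, assuming 8-bit RGB input frames; the mean/std normalization and the threshold ranges are illustrative choices, since the text leaves their exact form to the practitioner:

    import numpy as np

    def rgb_to_cr(frame):
        """Cr-chrominance plane of an 8-bit RGB frame, following Equation (1)."""
        r, g, b = (frame[..., i].astype(np.float32) for i in range(3))
        return (131.0 * r - 110.0 * g - 21.0 * b) / 256.0 + 128.0

    def normalize_cr(cr, mean=128.0, std=16.0):
        """Bring the Cr distribution of one camera to a common mean and std.

        The patent only states that the Cr distributions of the cameras are
        made to agree; this normalization is one plausible realization.
        """
        return (cr - cr.mean()) / (cr.std() + 1e-6) * std + mean

    def segment_foregrounds(cr, skin_range=(135, 155), marker_range=(165, 255)):
        """Threshold the Cr plane into finger and marker foreground masks.

        The two ranges stand for the two peaks of the Cr histogram described
        in the text: ordinary skin, and the more saturated marker color.
        """
        finger = (cr >= skin_range[0]) & (cr <= skin_range[1])
        marker = (cr >= marker_range[0]) & (cr <= marker_range[1])
        return finger, marker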
Applying the same threshold to the different views shown in FIG. 3 successfully segments the foreground binary image of each view, as shown in FIG. 5C, which shows the segmented finger foregrounds of the different views of FIG. 3 according to an embodiment of the present invention.

Then, the whole image is first converted into a grayscale image, and the grayscale image is convolved with a Laplacian filter kernel to enhance the edge portions.

Please refer to FIG. 5D and FIG. 5E, which respectively show the segmented hand contour image and the hand contour image after background removal according to an embodiment of the present invention. To keep the segmented finger contours (as shown in FIG. 5D) unaffected by the background, this embodiment uses the finger foreground of FIG. 5A as a hand silhouette mask: the pixel values of the contour image that fall outside the finger foreground region are set to 0, while the others are left unchanged. With this mask, most of the background region is filtered out, as shown in FIG. 5E.
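The following is a minimal sketch of this edge-enhancement and masking step, assuming the standard 3x3 Laplacian mask; the specific kernel is an assumption of this sketch:

    import numpy as np
    from scipy.ndimage import convolve

    # A standard 3x3 Laplacian mask, assumed here as the edge-enhancement kernel.
    LAPLACIAN = np.array([[0,  1, 0],
                          [1, -4, 1],
                          [0,  1, 0]], dtype=np.float32)

    def masked_hand_contour(gray, finger_mask):
        """Enhance edges of a grayscale frame and keep only the hand silhouette.

        gray: 2-D grayscale image; finger_mask: boolean finger-foreground mask
        (e.g., from segment_foregrounds above). Pixels outside the mask are set
        to zero, which removes most of the background from the contour image.
        """
        edges = np.abs(convolve(gray, LAPLACIAN, mode="nearest"))
        edges[~finger_mask] = 0.0
        return edges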
Model parameter initialization module 50

The model parameter initialization module 50 initializes the position parameter sets and the direction parameter sets of the virtual three-dimensional finger model at the first capture time point, according to the image features obtained by the image feature extraction module 40. So that all the markers can be captured clearly at the beginning, without any marker being occluded by crossing fingers, every motion sequence in this embodiment starts from full extension of the hand. Before the hand starts to move, the model parameters are initialized from the information in the images, so that the virtual three-dimensional finger model 30 corresponds to the actual shape of the target hand 10. The initialization consists of two parts: the initialization of the position parameters and the initialization of the direction parameters. The parameters to be initialized are, for each of the five fingers (i = 1..5), the transform from the world coordinate system to the finger root and the extension direction of the finger.

Please refer to FIG. 4C, FIG. 4D and FIG. 6A, where FIG. 6A is a flowchart of the initialization of the position parameters by the model parameter initialization module according to an embodiment of the present invention. First, step 110 is performed to set the coordinate-system relationships from the marker binary image. The position parameters defined here represent the transform between the world coordinate system 1 and the palm coordinate system 2; what is ultimately needed is the transform between the world coordinate system 1 and each finger root. To obtain this transform, the palm coordinate system 2 and the positions of the root markers (that is, the joint coordinate system 3) must be found first. As shown in FIG. 4C, the palm coordinate system of this system is determined by the three markers M1, M2 and M3 attached to the palm at positions that move relatively little: one of the two perpendicular marker connecting lines is taken as the z-axis direction, the other as the x-axis direction, and the y-axis direction is the cross product of the z-axis and the x-axis. The positions of M1, M2 and M3 thus define the palm coordinate system 2.

Next, the marker center detection step 112 is performed to search for the root marker position of every finger. In the image feature extraction module 40, the marker foreground has already been segmented by the image feature extraction (as shown in FIG. 5B). The contours of the markers are then extracted with the Sobel operator, and a least-squares ellipse fitting is applied to the contour of every marker to find the center positions of all the markers, as shown in FIG. 7, which shows the result of the least-squares ellipse fitting according to an embodiment of the present invention.
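A sketch of this marker-center detection is given below using OpenCV, whose Sobel-based contour extraction and least-squares ellipse fit cover the two operations named above; treating each connected contour of the marker mask as one marker is an assumption of this sketch:

    import cv2
    import numpy as np

    def marker_centers(marker_mask):
        """Find the center of every marker by least-squares ellipse fitting.

        marker_mask: binary image of the marker foreground (cf. FIG. 5B).
        Each sufficiently large contour is treated as one marker; cv2.fitEllipse
        performs the least-squares ellipse fit mentioned in the text.
        """
        contours, _ = cv2.findContours(marker_mask.astype(np.uint8),
                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        centers = []
        for c in contours:
            if len(c) >= 5:                  # fitEllipse needs at least 5 points
                (cx, cy), _axes, _angle = cv2.fitEllipse(c)
                centers.append((cx, cy))
        return centers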
Next, step 114 is performed to search for the initial position of every finger, using the detected marker centers to locate the root marker of each finger. So that the same procedure applies to all fingers, two prerequisites are defined: (1) the palm coordinate system has already been determined; (2) the hand is placed at the same position every time, on the left side of the image plane with the fingertips pointing to the right. The root points are searched in the first and second image planes, while the third and fourth image planes obtain the root positions by projection. The search for the root marker positions proceeds as follows (a sketch in code is given after this list):

(a) Thumb root: please refer to FIG. 8A, which illustrates the search for the thumb root according to an embodiment of the present invention. First, the auxiliary line L1 through M2 and M3 is computed, and the set S of all points with L1 < 0 is collected; the point of S with the smallest x-coordinate is the thumb root point P1.

(b) Little-finger root: please refer to FIG. 8B, which illustrates the search for the little-finger root according to an embodiment of the present invention. First, the auxiliary line L2 through M1 and M2 is computed; the point closest to L2 is the little-finger root point P2.

(c) Index-finger root: please refer to FIG. 8C, which illustrates the search for the index-finger root according to an embodiment of the present invention. The point closest to M3 is the index-finger root point P3.

(d) Middle-finger and ring-finger roots: please refer to FIG. 8D, which illustrates the search for the middle-finger and ring-finger roots according to an embodiment of the present invention. First, an auxiliary line L3 through the index-finger root point P3 and the little-finger root point P2 is computed; the two points closest to L3 are then found, of which the point with the higher y-coordinate is the ring-finger root point P4 and the lower one is the middle-finger root point P5.

After the root marker positions have been found, the position parameters can be defined, which completes step 114. FIG. 8E shows the result of finding all the root markers according to an embodiment of the present invention, where the red circles represent the root marker positions.

Please refer to FIG. 4C, FIG. 4D and FIG. 6B, where FIG. 6B is a flowchart of the initialization of the direction parameters by the model parameter initialization module according to an embodiment of the present invention. First, step 120 is performed to set the initial direction parameters. After the transform between the world coordinate system and the finger roots has been found, the extension direction of every finger is still needed so that the virtual three-dimensional model 30 corresponds better to the real hand shape. This embodiment uses the hand foreground and hand contour information of the images to find the extension direction of each finger, which defines the direction parameters. Because the initial hand posture is defined as full extension, the extension directions can be found once these parameters are determined.

The model parameters of this embodiment comprise 120 parameters in total for the five fingers, of which 30 have been initialized by the method described above. This embodiment assigns initial values to the remaining 90 parameters and then performs step 122 to optimize the direction parameters, using the known particle filtering algorithm. The results before and after the initialization are shown in FIG. 9A, FIG. 9B and FIG. 9C, where FIG. 9A shows the result before the initialization of the model parameters, FIG. 9B shows the result after the initialization of the position parameters, and FIG. 9C shows the result after the initialization of the direction parameters.

3D marker information prediction module 60

To predict the three-dimensional model parameters more accurately, the system of this embodiment integrates the reconstructed 3D marker information when predicting and tracking the parameters of the virtual three-dimensional finger model 30. As mentioned above, this embodiment is a stereo vision system with a plurality of cameras (for example, four). A stereo vision system faces two classic problems. One is the correspondence problem, that is, how to establish the correspondences between points across the multiple views. The other is the reconstruction problem, that is, how to reconstruct the three-dimensional information of a point from its known corresponding points in the other views.

As shown in FIG. 1, the 3D marker information prediction module 60 of this embodiment comprises at least a marker detection and correspondence module 62, a marker tracking module 64, and a marker 3D reconstruction module 66. The marker detection and correspondence module 62, the marker tracking module 64, and the marker 3D reconstruction module 66 are described below in turn.
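The sketch referenced above condenses the root-search geometry of FIGS. 8A-8D; the signed-line convention and the helper names are illustrative assumptions:

    import numpy as np

    def line_through(p, q):
        """Implicit line (a, b, c) with ax + by + c = 0 through points p and q."""
        a, b = q[1] - p[1], p[0] - q[0]
        return np.array([a, b, -(a * p[0] + b * p[1])])

    def signed_value(line, pts):
        """Signed value of the line equation at each point (side test / distance)."""
        pts = np.asarray(pts, dtype=np.float64)
        return pts @ line[:2] + line[2]

    def find_roots(centers, m1, m2, m3):
        """Locate the five finger-root markers among the detected centers.

        Thumb: on the negative side of line M2M3, smallest x; little finger:
        nearest line M1M2; index: nearest M3; middle and ring: the two points
        nearest the line through the index and little-finger roots.
        """
        pts = np.asarray(centers, dtype=np.float64)
        l1, l2 = line_through(m2, m3), line_through(m1, m2)
        side = pts[signed_value(l1, pts) < 0]
        thumb = side[np.argmin(side[:, 0])]
        little = pts[np.argmin(np.abs(signed_value(l2, pts)))]
        index = pts[np.argmin(np.linalg.norm(pts - np.asarray(m3), axis=1))]
        l3 = line_through(index, little)
        near = pts[np.argsort(np.abs(signed_value(l3, pts)))[:2]]
        ring, middle = near[np.argmax(near[:, 1])], near[np.argmin(near[:, 1])]
        return thumb, index, middle, ring, little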
Marker detection and correspondence module 62

The marker detection and correspondence module 62 establishes the correspondences of the markers among the multiple views of each frame group according to the position and direction parameters, using the epipolar constraint and the segment length constraint. The module builds correspondences for all the markers of each finger. In the virtual three-dimensional finger model 30 the fingers are independent of one another, and all fingers follow the same correspondence rule: in every view, the markers of each finger are numbered from the proximal end toward the distal end. So that every finger obeys the same rule, all markers are first categorized according to their spatial relationships, to decide to which finger each marker structurally belongs. Because the structure of the hand is fixed, the spatial distribution of the markers depends on the position of each finger; this embodiment therefore uses the structure of the hand model to classify all markers. The hand model, following the human hand, distinguishes the thumb, index finger, middle finger, ring finger and little finger, and the 3D finger positions obtained in the initialization are used to build a finger model on each image plane with which the markers of the different views are classified.

After the model parameter initialization described above, the 3D coordinates of the cylinder endpoints (the joint centers) of the three-dimensional model are available. The finger model of this embodiment divides each finger into three segments, so six 3D endpoints are obtained per finger. Projecting these six endpoints onto an image plane yields six 2D endpoints in that plane, and connecting the first endpoint with the last one yields the joint center line of the finger, which can be parameterized as a straight line in the plane and serves as the finger model on that image, as shown in FIG. 10A, which illustrates the joint center lines according to an embodiment of the present invention. Obviously, this initial projection deviates slightly from the currently segmented markers; however, given the structural ordering of the markers, they can be classified by a simple distance evaluation. This embodiment uses the distance from each marker to the joint center lines to decide to which class the marker belongs: a marker is assigned to the class whose joint center line is closest, as expressed in Equation (2):

    l_i = argmin_k  |a_k x_i + b_k y_i + c_k| / sqrt(a_k^2 + b_k^2)        (2)

where l_i denotes the class of the i-th marker, (x_i, y_i) denotes the position of the i-th marker in the image plane, and a_k x + b_k y + c_k = 0 denotes the joint center line of the k-th class. The classification result is shown in FIG. 10B, which shows the marker classification result according to an embodiment of the present invention.
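A short sketch of this nearest-line classification follows; it assumes each joint center line is given in the implicit form (a, b, c) used in Equation (2):

    import numpy as np

    def classify_markers(markers, lines):
        """Assign each marker to the finger whose joint center line is nearest.

        markers: (N, 2) array of marker positions (x, y) in one image plane.
        lines:   (K, 3) array of joint center lines (a, b, c), ax + by + c = 0.
        Returns an array of N class indices, per Equation (2).
        """
        markers = np.asarray(markers, dtype=np.float64)
        lines = np.asarray(lines, dtype=np.float64)
        # Point-to-line distances, shape (N, K).
        num = np.abs(markers[:, 0:1] * lines[:, 0] +
                     markers[:, 1:2] * lines[:, 1] + lines[:, 2])
        dist = num / np.linalg.norm(lines[:, :2], axis=1)
        return np.argmin(dist, axis=1)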
Next, for every marker of every class, the module searches the other views for the marker that corresponds to it, that is, it establishes the correspondence relations. The search relies on two constraints. The first is a constraint of vision techniques derived from the coplanarity of the two camera centers and the 3D point: the epipolar constraint. The second is a spatial constraint derived from experimental values: the segment length constraint. They are introduced below.

This embodiment first designates one of the four image planes as the primary view; this plane must detect all the marker positions. In the system of this embodiment, the second image plane detects all the markers in most cases, so the second image plane is defined as the primary view. The goal is then to find the correspondences between the markers of the other image planes and the markers of the primary view. Here this embodiment adopts the concept of epipolar geometry, which is known to those of ordinary skill in the art and is therefore not elaborated. When searching another image plane for the marker corresponding to a marker of the primary view, the corresponding marker necessarily lies on the epipolar line; this embodiment applies this concept to find the epipolar lines in every image plane. Please refer to FIG. 10C, which shows the epipolar lines according to an embodiment of the present invention: the primary view is the second image plane, all the markers of the primary view are numbered 1, 2, 3, 4, 5, 6 from right to left along each finger, and the remaining image planes show the epipolar lines corresponding to the markers of the primary view.

When the correspondences between the markers of the other image planes and the markers of the primary view are sought, this embodiment uses, in addition to the epipolar constraint, a spatial segment length constraint. The segment length here refers to the length of every phalanx defined on the model; these lengths were already defined when the model was initialized, as indicated by the dark blue line segments in FIG. 9C.

Please refer to Table 1 for the algorithm that finds the correspondences between the markers of the other image planes and the markers of the primary view. In the table, S_i denotes the i-th marker of the primary view; C_i denotes the corresponding i-th marker in the other image plane; E denotes an epipolar line; ED denotes the distance from a candidate marker to E; P denotes the set of candidate points that pass the epipolar-constraint screening; J denotes the projected point of a joint center of the three-dimensional model; SD denotes the segment length constraint; PD denotes the segment length implied by a candidate point; and N denotes the number of markers, with i = 1..N and j = 1..N.
    Algorithm: Correspondence Finding   {C_i} = CF({S_i}, i = 1..N)

    FOR i = 1..N
        compute the epipolar line E of S_i in the other image plane
        P <- the candidate markers whose distance ED to E is below a threshold
        FOR each candidate p_j in P
            PD_j <- the segment length implied by p_j and the previously
                    corresponded marker (or the joint projection J)
        END FOR
        C_i <- the candidate p* = argmin_j |PD_j - SD|
    END FOR

Table 1
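The sketch below is one way to realize Table 1 in code, assuming the fundamental matrix F between the primary view and the other view is known from calibration; the distance threshold is an illustrative value:

    import numpy as np

    def find_correspondences(primary_pts, other_pts, F, seg_lengths, prev_pts,
                             epi_thresh=3.0):
        """Match primary-view markers to markers of another view (cf. Table 1).

        primary_pts: (N, 2) marker positions in the primary view.
        other_pts:   (M, 2) candidate marker positions in the other view.
        F:           3x3 fundamental matrix mapping primary points to epipolar lines.
        seg_lengths: (N,) model segment lengths SD, one per marker to match.
        prev_pts:    (N, 2) previously corresponded points (finger root first),
                     against which the candidate segment length PD is measured.
        """
        matches = []
        for i, s in enumerate(primary_pts):
            # Epipolar line of s in the other view: l = F @ (x, y, 1).
            a, b, c = F @ np.array([s[0], s[1], 1.0])
            d = np.abs(other_pts @ np.array([a, b]) + c) / np.hypot(a, b)
            candidates = np.where(d < epi_thresh)[0]       # epipolar constraint
            if len(candidates) == 0:
                matches.append(None)
                continue
            # Segment length constraint: pick PD closest to the model length SD.
            pd = np.linalg.norm(other_pts[candidates] - prev_pts[i], axis=1)
            matches.append(candidates[np.argmin(np.abs(pd - seg_lengths[i]))])
        return matches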
Table 1 can also be understood with a simple illustration. Please refer to FIG. 11A, which illustrates the search for marker correspondences according to an embodiment of the present invention, where the second corresponding point C2 is to be found in another image plane. The first corresponding point C1 is the known root marker. The epipolar constraint is applied to find all candidate points P, and the segment length constraint SD2 is then applied to find, among the candidates, the point whose implied segment length matches SD2; that point is C2. Please refer to FIG. 11B, which shows the resulting marker correspondences according to an embodiment of the present invention.

Marker tracking module 64

The marker tracking module 64 uses a modified mean-shift algorithm to track the position of the markers in every frame of every frame group. Once the marker correspondences among the multiple views have been found, the new marker position in every frame is tracked continuously, and the corresponding markers are used to reconstruct the three-dimensional information of the markers.

Please refer to FIG. 12, which is a flowchart of the modified algorithm according to an embodiment of the present invention. The tracking uses the known mean-shift algorithm to follow the new positions of the markers in the frames of the image sequence. First, step 150 is performed to determine the reliability of each marker: after the new position has been predicted in a frame, the system judges whether this position is reliable, the view is classified as reliable or unreliable accordingly, and only reliable views are used subsequently, to prevent incorrect tracking results from affecting the markers. Then, step 152 is performed to judge whether the number of reliable image planes is greater than or equal to 2. If at least two of the four image planes are correct, step 154 of reconstructing the 3D marker coordinates is performed for this marker. Conversely, if fewer than two image planes are correct, this embodiment compensates the new marker position with the information of the images: step 156 of compensating the marker position with image information is performed, and the 3D coordinates of the marker are reconstructed after the compensation. After the 3D information of the marker has been reconstructed, step 158 of compensating the marker positions by back-projection is performed, in which the 3D coordinates are projected onto all unreliable image planes, giving the initial positions of the mean-shift algorithm for the next frame.

This embodiment tracks the new marker position of every frame with a modified mean-shift algorithm. If the input of the conventional mean-shift algorithm is a binary image, the tracking is highly susceptible to lighting and to the chosen threshold, which affect the segmented size of the markers and make the tracking unstable. This embodiment therefore does not use the binary image but takes the Cr image as input. The Cr image obtained by the image feature extraction module 40 is further enhanced so that the difference between the Cr values of the markers and of the surrounding background becomes more pronounced, as given in Equation (3):

    NCr(k) = (Cr(k) - 125) * 4        (3)

The modified mean-shift algorithm of this embodiment is shown in Table 2.
    Input: the marker center positions of the previous time point t-1, and
           the Cr image after the enhancement of Equation (3).
    1. Initialize a tracking window (TW) with the marker center of the
       previous frame (t-1) as the window center.
    2. Compute the cost of every pixel inside the window, over the defined
       marker region, from the cost function described below.
    3. Compute the cost centroid of the window and move the window center
       to it.
    4. Repeat steps 2 and 3 until the window center converges; the converged
       center is the new marker position.

Table 2

Because the traditional technique tracks only the intensity centroid, it is easily disturbed by lighting and noise, which leads to unstable motion; this embodiment instead tracks a cost centroid, to obtain more stable tracking results. The cost function consists of three parts:

(1) Kernel part: it increases the weight of pixels near the centroid; the closer to the centroid, the larger the weight, and vice versa, as given in Equation (4):

    Kernel(x, y) = exp( -((x - x_c)^2 + (y - y_c)^2) / sigma^2 )        (4)

where (x_c, y_c) is the previous centroid; according to tests, the value of sigma is taken as about 3 to 5.

(2) Correlation part: it evaluates the similarity of the color (the Cr value) to that found in the previous frame. Under the assumption that the lighting does not change instantaneously during the motion, this term helps find the position in the tracking window (TW) whose lighting distribution is the same as in the previous frame, as given in Equation (5):

    Corr(I_t, I_{t-1})        (5)

where I_t and I_{t-1} denote the Cr distributions of the current candidate window and of the window around the previous marker position, respectively.

(3) Damping part: it is an evaluation function of the differences of the centroid velocity and of the acceleration with respect to the previous time point, intended to avoid the unstable motion caused when image noise leads to a misjudged centroid, as given in Equation (6):

    Damping = exp( -k (alpha ||v_t - v_{t-1}|| + beta ||a_t - a_{t-1}||) )        (6)

where v and a denote the centroid velocity and acceleration. In this system, the tracking window (TW) is set to 12 x 12, the marker region size to 8 x 8, the value of k to about 0.001, and alpha and beta are both taken as 0.5; because k is very small, it can be judged that the temporal response does not oscillate.

When the markers are tracked, duplicate tracking of a marker may occur because the markers are densely placed and because of the angles of the camera planes; that is, the marker center predicted by the algorithm of this embodiment may come too close to the center of another marker, which can cause misjudgment. The strategy adopted in this embodiment is that, when the predicted centers of neighboring markers are too close, both neighboring markers are set as unreliable, so that they do not participate in the subsequent reconstruction step.
Module for reconstructing marker three-dimensional information

The module 66 for reconstructing marker three-dimensional information uses the least-squares method to reconstruct the three-dimensional information of the markers in every frame of every frame group, thereby obtaining the marker three-dimensional information of each frame group.

When this embodiment reconstructs the three-dimensional point coordinates, the reconstruction is performed from the image points estimated by the tracking algorithm. While tracking a marker, the tracking algorithm may lock onto a wrong position because the object moves too fast; such a point is called an outlier. This kind of misjudgment cannot be removed by the duplicate-tracking strategy above, so this embodiment proposes a mechanism that rejects outliers and automatically selects reliable views for the reconstruction. The reconstruction is performed in two stages: the first stage serves to reject the outliers, and only the second stage reconstructs the final three-dimensional point coordinates. The three-dimensional point reconstructed in the first stage is projected onto each image plane, and the absolute error between the projected point and the image point is compared. The projection error is computed by Equation (7):

E_i = ‖x̂_i − x_i‖,  where x̂_i = F_{P_i}(X) = P_i X    (7)

where E_i denotes the projection error; x̂_i denotes the projected point coordinates; X denotes the three-dimensional point coordinates; x_i denotes the image point on the i-th image plane; and P_i is the projection matrix of the i-th image plane.

When E_i > θ, this embodiment regards the i-th image plane as an outlier and sets it as unreliable, so that it does not participate in the second reconstruction. The threshold θ can be, for example, 5, roughly half the marker size.

There are many ways to reconstruct the three-dimensional information of a marker; this embodiment adopts, for example, the least-squares method. Let x_i = (u_i, v_i) be the image point estimated on the i-th image plane, i = 1…N. If the back-projected rays intersect at a three-dimensional point X in space, Equation (8) follows:

w_i x̃_i = P_i X,  where x̃_i = (u_i, v_i, 1)^T    (8)

Here P_i is the projection matrix of the i-th image plane and w_i is an unknown scale factor. Writing the j-th row of P_i as p_i^{jT}, Equation (8) can be expanded into Equation (9):

(u_i p_i^{3T} − p_i^{1T}) X = 0,
(v_i p_i^{3T} − p_i^{2T}) X = 0    (9)

With N image planes there are 2N such linear equations, so Equation (9) can be written as ÃX = 0, where Ã is a 2N × 4 matrix:

$$\tilde{A}X = \begin{bmatrix} u_1 p_1^{3\top} - p_1^{1\top} \\ v_1 p_1^{3\top} - p_1^{2\top} \\ \vdots \\ u_N p_N^{3\top} - p_N^{1\top} \\ v_N p_N^{3\top} - p_N^{2\top} \end{bmatrix} X = 0 \qquad (10)$$

This embodiment estimates X by minimizing ‖AX − B‖, where A consists of the first three columns of the 2N × 4 matrix above and B of its last column moved to the right-hand side, so Equation (11) gives the solution:

X = (A^T A)^{-1} A^T B    (11)
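The two-stage reconstruction of Equations (7) to (11) can be sketched in Python as follows; the function names and the way the outlier threshold is applied per view are illustrative assumptions, while the linear system itself follows Equations (9) and (10).

import numpy as np

def triangulate(points_2d, proj_mats):
    """Least-squares triangulation (Equations (8) to (11)).

    points_2d: list of (u_i, v_i) image points, one per image plane.
    proj_mats: list of 3x4 projection matrices P_i.
    """
    rows = []
    for (u, v), P in zip(points_2d, proj_mats):
        rows.append(u * P[2] - P[0])  # (u_i p_i^3T - p_i^1T) X = 0
        rows.append(v * P[2] - P[1])  # (v_i p_i^3T - p_i^2T) X = 0
    M = np.asarray(rows)              # the 2N x 4 matrix of Equation (10)
    A, B = M[:, :3], -M[:, 3]         # first three columns / last column
    X, *_ = np.linalg.lstsq(A, B, rcond=None)  # X = (A^T A)^-1 A^T B, Eq. (11)
    return X

def two_stage_reconstruct(points_2d, proj_mats, theta=5.0):
    """First solve with all views, then drop outlier views (Equation (7))."""
    X = triangulate(points_2d, proj_mats)
    kept_pts, kept_mats = [], []
    for (u, v), P in zip(points_2d, proj_mats):
        x_hat = P @ np.append(X, 1.0)
        proj = x_hat[:2] / x_hat[2]
        if np.linalg.norm(proj - np.array([u, v])) <= theta:
            kept_pts.append((u, v))
            kept_mats.append(P)
    if len(kept_pts) < 2:  # no depth information: not reconstructable
        return None
    return triangulate(kept_pts, kept_mats)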

As shown in FIG. 12, when fewer than two image planes participate in the reconstruction, there is no depth information and the point is not reconstructable. In that case the information on the images is used to compensate a new position for the marker, and the three-dimensional coordinates of the marker are reconstructed after the compensation. This embodiment exploits the spatial structure to compensate the marker position: when a marker cannot be reconstructed, the system can still recover it as long as a neighbouring marker is reconstructable. The main steps are as follows:

(1) Detect all markers: the centre positions of all markers are located again by least-squares ellipse fitting.

(2) Predict the possible marker positions: referring to FIG. 13, which illustrates the prediction of possible marker positions according to an embodiment of the present invention, this embodiment predicts the possible position of a marker at time t from the structure at the previous time point t−1, using Equation (12):

p̂_i^t = p_j^t + Δp_ij^{t−1},  where Δp_ij^{t−1} = p_i^{t−1} − p_j^{t−1}    (12)

where p_i^{t−1} is the position vector of the i-th marker at the previous time point t−1; Δp_ij^{t−1} is the difference between the position vectors of the i-th marker and of its neighbouring, reconstructable marker j at time t−1; and p̂_i^t is the marker position vector predicted for the current time point.

(3) Find the nearest ellipse centre: if there is an ellipse centre on the image extremely close to the predicted position, the predicted marker centre is set to that ellipse centre; otherwise the predicted centre is kept.

Finally, after the marker three-dimensional information has been reconstructed, all image points that did not participate in the reconstruction still have to be compensated, because they must serve as the initial positions of the modified mean-shift algorithm at the next time point. The compensation follows Equation (13):

x̂_i = F_{P_i}(X) = P_i X    (13)

where x̂_i is the projected point coordinates, X the three-dimensional point coordinates, and P_i the projection matrix of the i-th image plane.
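A brief sketch of the compensation steps above is given next; the neighbour choice, the snapping radius and the function names are assumptions for illustration.

import numpy as np

def predict_from_structure(p_i_prev, p_j_prev, p_j_curr):
    """Equation (12): carry the t-1 offset between marker i and its
    reconstructable neighbour j over to time t."""
    return np.asarray(p_j_curr, float) + (np.asarray(p_i_prev, float)
                                          - np.asarray(p_j_prev, float))

def snap_to_ellipse_centre(prediction, ellipse_centres, radius=4.0):
    """Step (3): adopt a detected ellipse centre only if it lies very
    close to the prediction; the search radius is an assumption."""
    if len(ellipse_centres) == 0:
        return prediction
    centres = np.asarray(ellipse_centres, float)
    d = np.linalg.norm(centres - prediction, axis=1)
    k = int(np.argmin(d))
    return centres[k] if d[k] < radius else prediction

def compensate_image_point(P, X):
    """Equation (13): project the reconstructed 3D point back onto an
    image plane, giving the initial position for the modified mean-shift
    tracker at the next time point."""
    x_hat = P @ np.append(X, 1.0)
    return x_hat[:2] / x_hat[2]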

Module 70 for predicting three-dimensional model parameters

The module 70 for predicting three-dimensional model parameters defines, from the marker three-dimensional information of each frame group, the joint coordinate systems and the model parameters of the virtual three-dimensional finger model 30 at each time point.

The tracking and reconstruction above already provide partial hand-motion information from the markers; however, this information still cannot satisfy an overall observation of hand motion, so the following explains how the model and the observed image information are applied to optimally track the moving fingers. The relationship between the markers and the model must be found first; then, exploiting temporal coherence, the best model parameters tracked at the previous time point and the model parameters determined from the markers serve as the basis, and a stochastic filter performs the tracking of the optimal parameters.

FIG. 4C and FIG. 4D have illustrated the joint coordinate systems and the model parameters of the system of this embodiment. Once the three-dimensional information of the markers has been reconstructed, the markers can be used to define the joint coordinate systems and the initial model parameters of this embodiment.

First, consider how the joint coordinate systems are defined from the markers. The definition of the palm coordinate system has been described above; next come the definitions of the local coordinate systems. The local (joint) coordinate system of every finger of this embodiment is defined in the same way, and each local (joint) coordinate system has an X axis, a Y axis and a Z axis, as shown in FIG. 14A, which is a schematic view of the coordinate axes according to an embodiment of the present invention.

As shown in Equation (14), the X axis of each local (joint) coordinate system is obtained from the markers 24:

X^t = M_i^t − M_{i−1}^t    (14)

where X^t is the X-axis vector at time t and M_i^t is the position vector of the i-th marker at time t.

The Z axis of each local (joint) coordinate system is obtained from the Z axis of the first frame after marker detection, predicted by the tracking method described above, and from its difference with respect to the palm Z axis. The difference between the local Z axis and the palm Z axis is assumed to stay fixed at its value at t₀, so in every later frame the local Z axis follows the Z axis of the palm coordinate system, as shown in Equation (15):

Z^t = Z_palm^t + ΔZ^{t₀},  where ΔZ^{t₀} = Z^{t₀} − Z_palm^{t₀}    (15)

where ΔZ^{t₀} is the difference between the local Z-axis vector and the palm Z-axis vector, t₀ denotes the first frame after marker detection, Z^t is the Z-axis vector at time t, and Z_palm^t is the Z-axis vector of the palm coordinate system at time t.

The Y axis of each local (joint) coordinate system is then obtained as the cross product of the X axis and the Z axis, as shown in Equation (16):

Y^t = Z^t × X^t    (16)

where Y^t is the Y-axis vector at time t.
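For illustration, the axis construction of Equations (14) to (16) might be written as the following sketch; the normalization and the orthogonalization safeguard are assumptions added for numerical robustness and are not spelled out in the text.

import numpy as np

def local_axes(m_prev, m_next, z_palm_t, delta_z_t0):
    """Local (joint) coordinate axes from markers (Equations (14)-(16)).

    m_prev, m_next: the two markers spanning the knuckle at time t.
    z_palm_t: palm Z axis at time t.
    delta_z_t0: local-minus-palm Z-axis offset fixed at frame t0.
    """
    x_axis = np.asarray(m_next, float) - np.asarray(m_prev, float)  # Eq. (14)
    x_axis /= np.linalg.norm(x_axis)

    z_axis = np.asarray(z_palm_t, float) + np.asarray(delta_z_t0, float)  # Eq. (15)
    z_axis -= (z_axis @ x_axis) * x_axis  # keep the frame orthogonal
    z_axis /= np.linalg.norm(z_axis)

    y_axis = np.cross(z_axis, x_axis)     # Eq. (16)
    return x_axis, y_axis, z_axis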

座標系’而這兩個座標系的三個軸向量亦已知。如第14B 圖所不,丨)、z(心知:2)為座標系J的3個轴, ^,0,y〇,z,o)' R求為2個座標系之間的旋轉矩陣。如公式(i 7)所示:The coordinate system 'and the three axis vectors of the two coordinate systems are also known. As shown in Fig. 14B, 丨), z (know: 2) are the three axes of the coordinate system J, ^, 0, y 〇, z, o) 'R is the rotation matrix between the two coordinate systems. . As shown in equation (i 7):

X0 X,X0 X,

\Z〇 Z\ η 芯、 y% _l_ wmm y,〇 y\ y,2 Z2j Z,1 Z,2J (17) 0座標系依序對X轴旋轉6、對γ軸旋轉&、對z軸旋 轉之’則轉換為座標系2β㊣轉矩冑R便是由這三個旋轉角 度:”所組成。對X軸旋轉之的旋轉矩陣為、、對γ轴凌 轉哟旋轉矩陣0”Siz轴旋轉、旋轉轉,,r為這 34 200937350 三個矩陣的乘積 。如公式叫所示:\Z〇Z\ η core, y% _l_ wmm y, 〇y\ y, 2 Z2j Z,1 Z,2J (17) 0 coordinate system rotates the X axis in sequence, rotates the γ axis & The axis rotation is converted to the coordinate system 2β positive torque 胄R is composed of these three rotation angles:". The rotation matrix for the X-axis rotation is, and the γ-axis is rotated 哟 rotation matrix 0" Siz The axis rotates, rotates, and r is the product of the three matrixes of this 34 200937350. As the formula is called:
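Assuming the axis vectors of both coordinate systems are stacked as matrix columns, Equations (17) and (18) lead to the following sketch for recovering the rotation matrix and its three rotation angles; the closed-form angle extraction assumes the decomposition order R = R_z R_y R_x stated above and no gimbal lock.

import numpy as np

def rotation_between(axes1, axes2):
    """Equation (17): R maps the axes of system 1 onto those of system 2.
    axes1, axes2: 3x3 matrices whose columns are the unit axis vectors."""
    return axes2 @ np.linalg.inv(axes1)

def euler_angles_zyx(R):
    """Recover (theta_x, theta_y, theta_z) from R = R_z R_y R_x
    (Equation (18)), assuming cos(theta_y) != 0."""
    theta_y = np.arcsin(-R[2, 0])
    theta_x = np.arctan2(R[2, 1], R[2, 2])
    theta_z = np.arctan2(R[1, 0], R[0, 0])
    return theta_x, theta_y, theta_z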

Therefore, once the two coordinate systems are available, the rotation matrices of all local joints are obtained, and from them the rotation angles can be recovered. The model parameters defined by the markers at time t are denoted M_t, and at the first time point the marker-defined parameters initialize the model. Referring to FIG. 15A and FIG. 15B, they show the three-dimensional model defined by the markers according to an embodiment of the present invention and its projection result.

In addition, the module for predicting three-dimensional model parameters at least includes a particle filter 72, which integrates the marker three-dimensional information of each frame group and the model parameters of the virtual three-dimensional finger model 30 at the previous time point, to track the model parameters of the virtual three-dimensional finger model 30 at each time point.

In general, a traditional particle filter is mostly used to estimate the parameter values of a highly nonlinear system; its key idea follows the sampling theorem. When the number of samples is large enough, the final sample distribution approximates the probability distribution of the random variable. When Bayesian estimation is incorporated, the problem can be solved by using samples of successive prior probabilities to estimate the posterior probability distribution. The problem solved by a traditional particle filter is therefore as stated in Table 3.

Table 3. Suppose the image information from time 1 to time t−1 is already available, Z_{t−1} = {z₁, …, z_{t−1}}; when the observation of the current time (the information on the images) z_t is available, the posterior probability can be expressed as:

p(x_t | Z_t) ∝ p(z_t | x_t) p(x_t | Z_{t−1})

The idea of the traditional particle filter is to use N particles x_t^i, i = 1…N, and the weight w_t^i corresponding to each particle, i = 1…N, to represent the current posterior probability p(x_t | Z_t); the larger N is, the better the representation. At a single time point a traditional particle filter usually runs through three steps: resampling, which removes the particles whose weights are too small according to the weight magnitudes and keeps the particles with larger weights; predicting the new state of the particles; and giving each particle a new weight according to the weighting function. This embodiment integrates the marker information into the traditional particle filter to predict and track the model parameters.

As shown in Table 4, the problem this embodiment solves is the following:

Table 4. Given the marker-defined model parameters M_t (how they are obtained was introduced above) and the current observation z_t, express the maximum of the posterior probability.

Here x_t is the current state vector, expressed as a vector of parameters that contains the model parameters of the hand. This embodiment uses N particles x_t^i and the weight corresponding to each particle to represent the current posterior probability. The algorithm of the particle filter 72 of this embodiment is shown in Table 5, where the weight w_t^i of each particle follows the conventional importance-sampling principle, i.e. the particles are sampled from an importance density q(x_t | x_{t−1}^i, z_t, M_t), from which particles can be drawn more easily, as in Equation (19):

w_t^i ∝ p(x_t^i | Z_t) / q(x_t^i | x_{t−1}^i, z_t, M_t)    (19)

Table 5. Algorithm: Adaptive Marker-Guided Particle Filter
1. FOR i = 1 : N (N is the number of particles):
   – sample x_t^i from q(x_t | x_{t−1}^i, z_t, M_t);
   – compute the weight w_t^i of each particle according to the weighting function p(z_t | x_t^i);
   END FOR
2. Normalize the weights w_t^i so that Σ_i w_t^i = 1.
3. Output the weighted sum of the particles.

In the implementation, this embodiment sets the importance density q(x_t | x_{t−1}, M_t) to a mixture of p(x_t | x_{t−1}) and p(x_t | M_t), as shown in Equation (20):

q(x_t | x_{t−1}, M_t) = (1 − s) p(x_t | x_{t−1}) + s p(x_t | M_t)    (20)

Both p(x_t | x_{t−1}) and p(x_t | M_t) are expressed as Gaussian distributions, as shown in Equation (21):

p(x_t | x_{t−1}) = N(x_{t−1}, Σ₁),  p(x_t | M_t) = N(M_t, Σ₂)    (21)

where Σ₁ and Σ₂ are diagonal covariance matrices.

Since q(x_t | x_{t−1}, M_t) is expressed as a mixture of the two probabilities p(x_t | x_{t−1}) and p(x_t | M_t), when the particles are generated a part of them is drawn from p(x_t | x_{t−1}) and a part from p(x_t | M_t), as shown in Equation (22):

{x_t^i}, i = 1…N₁ ~ p(x_t | x_{t−1}),  {x_t^i}, i = N₁+1…N₁+N₂ ~ p(x_t | M_t),  N₁ + N₂ = N    (22)

where N₁ particles are sampled from p(x_t | x_{t−1}) and N₂ particles from p(x_t | M_t).
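A minimal sketch of drawing particles from the mixture importance density of Equations (20) to (22) follows; treating Σ₁ and Σ₂ as vectors of per-parameter standard deviations and rounding N₂ = sN are assumptions of the sketch.

import numpy as np

def sample_particles(x_prev, m_t, sigma1, sigma2, s, n_particles, rng):
    """Draw N particles from the mixture importance density of
    Equation (20): (1-s) N(x_{t-1}, Sigma1) + s N(M_t, Sigma2)."""
    x_prev = np.asarray(x_prev, float)
    m_t = np.asarray(m_t, float)
    n2 = int(round(s * n_particles))     # N2 particles from p(x_t | M_t)
    n1 = n_particles - n2                # N1 particles from p(x_t | x_{t-1})
    part1 = x_prev + sigma1 * rng.standard_normal((n1, x_prev.size))
    part2 = m_t + sigma2 * rng.standard_normal((n2, m_t.size))
    return np.vstack([part1, part2])     # N1 + N2 = N, Equation (22)

# e.g. particles = sample_particles(x_prev, m_t, 0.05, 0.05, s, 200,
#                                   np.random.default_rng(0))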

This embodiment evaluates the value of s from the degree of error of x̂_{t−1} and M_t on the images, with the image error of a parameter vector X defined as in Equation (23):

E(X) = E_edge(X) + E_region(X)    (23)

where E_edge and E_region denote the errors of the model parameters with respect to the edges and the regions of the image, described with Equations (24) and (25) below. The purpose of evaluating s in this way is that, when the model parameters defined by the markers deviate too much from the best model parameters of the previous time point, the proportion of particles sampled from M_t can be lowered, so that the correct parameters can still be tracked even when the three-dimensional marker information is inaccurate.

Because hand motion is highly constrained and coupled, this embodiment also exploits this property to track the correct parameters, as summarized in Table 6.

Table 6. Joint abbreviations: MCP: metacarpophalangeal joint; DIP: distal interphalangeal joint; PIP: proximal interphalangeal joint; IP: interphalangeal joint; CMC: carpometacarpal joint.

For the coupled joints, the PIP angle of a particle is generated first (from p(x_t | x_{t−1}) or p(x_t | M_t)), and the DIP angle of the particle is then determined from the PIP angle. This practice keeps the PIP and DIP angles consistent with the kinematic coupling of the human hand.
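The coupled sampling described above might look like the following sketch; the concrete coupling function is a placeholder assumption (a fixed ratio is shown only as an example), since the exact relation used in Table 6 is not reproduced here.

def sample_coupled_angles(pip_prev, pip_marker, sigma, s, rng,
                          coupling=lambda pip: (2.0 / 3.0) * pip):
    """Sample the PIP angle from the mixture proposal, then derive the
    DIP angle through a coupling function, so both angles respect the
    kinematic coupling of the hand."""
    centre = pip_marker if rng.random() < s else pip_prev
    theta_pip = rng.normal(centre, sigma)
    theta_dip = coupling(theta_pip)  # DIP follows PIP, not sampled freely
    return theta_pip, theta_dip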

Resampling then removes the particles whose weights are too small, according to the weight magnitudes, and keeps the particles with larger weights, as shown in Table 7.

Table 7. Algorithm: Resampling Algorithm [{x_t^{j*}, w_t^j} = RESAMPLE({x_t^i, w_t^i})]
• Initialize the CDF (cumulated distribution function): c₁ = 0
• FOR i = 2 : N
  – Construct the CDF: c_i = c_{i−1} + w_t^i
  END FOR
• Start at the bottom of the CDF: i = 1
• Draw a starting point u₁ from the uniform distribution on the interval [0, N⁻¹]
• FOR j = 1 : N
  – Move along the CDF: u_j = u₁ + N⁻¹(j − 1)
  – WHILE u_j > c_i: i = i + 1; END WHILE
  – Assign sample: x_t^{j*} = x_t^i
  – Assign weight: w_t^j = N⁻¹
  END FOR
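The resampling of Table 7 can be implemented compactly; the vectorized search below replaces the WHILE loop of the table but performs the same systematic sweep of the cumulative distribution.

import numpy as np

def resample(particles, weights, rng):
    """Systematic resampling as in Table 7: sweep the cumulative
    distribution of the normalized weights with N evenly spaced
    pointers starting from a random offset in [0, 1/N)."""
    n = len(weights)
    cdf = np.cumsum(weights)
    cdf[-1] = 1.0                      # guard against round-off
    u = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(cdf, u)      # replaces the WHILE loop of Table 7
    return np.asarray(particles)[idx], np.full(n, 1.0 / n)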

Next, the computation of the particle weights w_t^i is described. The weights are computed from a weighting function: the closer a particle is to the optimal solution, the larger the weight it should receive; conversely, the further a particle deviates from the optimal solution, the smaller its weight should be. The function has three parts, (a) the finger contour, (b) the finger foreground and (c) the marker centres, introduced below.

(a) Weighting by the finger contour

Along the two finger edge lines obtained by projecting the phalanx cylinder model onto the image, points are sampled at regular intervals, L points in total. The pixel values of the contour map I_e range from 0 to 1. With the coordinates of the k-th edge point written as (u_k, v_k), k = 1…L, the pixel value on the hand contour map is I_e(u_k, v_k), denoted p_k^e. Define E_edge(x_t^i, z_t) as the function to be minimized: the average of the differences between the edge-point pixel values and 1. The closer the finger model represented by the particle is to the hand contour, the smaller E_edge is; the further the model deviates from the contour, the larger E_edge is. E_edge(x_t^i, z_t) thus represents the degree of error between the particle x_t^i and the correct solution, as shown in Equation (24):

E_edge(x_t^i, z_t) = (1/L) Σ_{k=1…L} (1 − p_k^e)    (24)

(b) Weighting by the finger foreground

The four end points of the phalanx cylinder model projected onto the image are taken as the four corners of a box, and points are sampled uniformly inside the box, M points in total. The pixel values of the hand foreground map I_f range from 0 to 1. With the sampled points written as (u_k, v_k), k = 1…M, the pixel value on the foreground map is I_f(u_k, v_k), denoted p_k^f. Define E_region(x_t^i, z_t) as the function to be minimized: the average of the differences between the sampled pixel values and 1. The more the finger model represented by the particle lies inside the hand foreground, the smaller E_region is; the more it lies outside, the larger E_region is. E_region(x_t^i, z_t) represents the degree of error between the particle x_t^i and the correct solution, as shown in Equation (25):

E_region(x_t^i, z_t) = (1/M) Σ_{k=1…M} (1 − p_k^f)    (25)

(c) Weighting by the marker centres

Because the skin markers are placed above the two ends of the bones, the line connecting two markers is roughly parallel to the bone centre line, so the weighting function considers how parallel the marker connecting line is to the centre line of the model cylinder generated by the particle: the more parallel the two lines, the larger the weight, and vice versa. Considering parallelism alone, however, could leave the marker line parallel to the bone centre line yet too far away from it; the weighting function of this embodiment therefore also includes the maximum distance between the reconstructed marker centres and the model marker centres generated by the particle: the smaller the maximum distance, the larger the weight, and vice versa. Define E_marker as the function to be minimized, where θ_k is the angle between the k-th marker connecting line and the cylinder centre line, k = 1…6; the magnitude of |sin θ_k|² expresses how parallel the two lines are (the larger |sin θ_k|, the less parallel the two lines; the smaller, the more parallel). D_k denotes the distance between the reconstructed marker centre and the model marker centre generated by the particle, k = 1…6; the magnitude of e^{max_k(D_k)} expresses the maximum such distance (the larger the value, the larger the maximum distance, and vice versa), and c is a constant, as shown in Equation (26):

E_marker(x_t^i) = Σ_k |sin θ_k|² + c · e^{max_k(D_k)}    (26)
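The three energy terms can be sketched in Python as follows; the array layout of the maps and sampled points, and the use of unit direction vectors in E_marker, are assumptions, and the exact combination inside E_marker is reconstructed from the description of Equation (26), so its form and the constant c should also be treated as assumptions.

import numpy as np

def edge_energy(edge_map, edge_points):
    """E_edge, Equation (24): mean of (1 - pixel value) over the L points
    sampled along the projected cylinder edge lines."""
    vals = np.array([edge_map[v, u] for u, v in edge_points], float)
    return float(np.mean(1.0 - vals))

def region_energy(foreground_map, box_points):
    """E_region, Equation (25): mean of (1 - pixel value) over the M
    points sampled inside the box of the four projected end points."""
    vals = np.array([foreground_map[v, u] for u, v in box_points], float)
    return float(np.mean(1.0 - vals))

def marker_energy(marker_dirs, axis_dirs, dists, c=1.0):
    """E_marker, Equation (26): squared sine of the angle between each
    marker line and the model cylinder axis (unit vectors assumed), plus
    an exponential penalty on the largest marker-to-model distance."""
    sins = [np.linalg.norm(np.cross(m, a))
            for m, a in zip(marker_dirs, axis_dirs)]
    return float(np.sum(np.square(sins)) + c * np.exp(np.max(dists)))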

This embodiment expresses the weight magnitude exponentially: the larger the values of E_edge, E_region and E_marker, the smaller the weight; the smaller their values, the larger the weight, as shown in Equation (27).

p_k = exp(−(w_e E_edge + w_r E_region + w_m E_marker))    (27)

Here this embodiment further adds a limiting term δ that multiplies the weight, in order to discard the less probable particles, namely the particles whose model marker centres lie too far from the reconstructed markers; δ is defined as in Equation (28):

δ = 1 if max_k(D_k) < τ,  δ = 0 otherwise    (28)

where τ is a preset distance threshold. Since this embodiment preferably uses four synchronized cameras, the information of the four images is integrated as shown in Equation (29):

p(z_t | x_t^i) = Π_{k=1…Camera_num} p_k    (29)

where w_r, w_e and w_m are the influence weights of the respective kinds of image information and can be set, for example, to 2.5, 1.5 and 0.5, and the number of cameras (Camera_num) is 4.
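Combining Equations (27) to (29) for one particle might look like the sketch below; the distance threshold of the limiting term is an assumption, while the default weights follow the example values given above.

import numpy as np

def particle_weight(energies_per_camera, max_marker_dist, dist_threshold,
                    w_e=1.5, w_r=2.5, w_m=0.5):
    """Combine the energies into one particle weight (Equations (27)-(29)).

    energies_per_camera: one (E_edge, E_region, E_marker) tuple per
    synchronized camera (Camera_num = 4 in this embodiment)."""
    if max_marker_dist >= dist_threshold:   # limiting term, Equation (28)
        return 0.0
    p = 1.0
    for e_edge, e_region, e_marker in energies_per_camera:
        p *= np.exp(-(w_e * e_edge + w_r * e_region + w_m * e_marker))
    return float(p)                         # product over cameras, Eq. (29)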

The three-dimensional finger motion analysis method of the present invention is described below according to the functions of the modules above; each step is carried out in the same way as the corresponding module described earlier, so the details are not repeated here.

Referring to FIG. 1 and FIG. 16, FIG. 16 is a flow chart of the three-dimensional finger motion analysis method according to an embodiment of the present invention. In the three-dimensional finger motion analysis method of this embodiment, a pre-step 200 is performed first. The pre-step 200 at least includes: performing step 202 to dispose a plurality of markers 22, 24 and 26 on the target hand 10; performing step 204 to dispose a plurality of cameras 20 at different positions near the target hand; and performing step 206 to provide the virtual three-dimensional finger model 30 for simulating the motion of the fingers 14 of the target hand 10.
Next, step 210 of capturing an image sequence is performed: the cameras capture images of the target hand 10 sequentially at a plurality of time points, so that a plurality of frame groups are obtained in sequence, each frame group having a plurality of frames respectively corresponding to the cameras 20. Then, step 220 of image feature retrieving is performed, in which a plurality of image features of each frame of each frame group are retrieved according to the Cr-chrominance information. Next, step 230 of model parameter initialization is performed, in which a plurality of position parameter sets and a plurality of direction parameter sets of the virtual three-dimensional finger model at the first time point are initialized according to the image features, and the knuckle length of every knuckle of every finger is obtained.

Then, the steps of predicting marker three-dimensional information are performed (not labelled as a whole). First, step 240 of marker detection and correspondence is performed, in which the marker correspondences among the multiple views of the frames of each frame group are established according to the aforementioned position parameter sets and direction parameter sets, using the epipolar constraint and the knuckle-length constraint. Next, step 250 of marker tracking is performed, in which the modified mean-shift algorithm is used to track the positions of the markers in every frame of every frame group. Then, step 260 of reconstructing marker three-dimensional information is performed, in which the least-squares method is used to reconstruct the three-dimensional information of the markers in every frame of every frame group, thereby obtaining the marker three-dimensional information of each frame group; step 260 also removes the erroneous marker positions tracked by the modified mean-shift algorithm.

Then, step 270 of predicting three-dimensional model parameters is performed, in which the joint coordinate systems and the model parameters of the virtual three-dimensional finger model are defined at each time point according to the marker three-dimensional information of each frame group. Step 270 further uses the particle filter to integrate the marker three-dimensional information of each frame group with the model parameters of the virtual three-dimensional finger model at the previous time point, so as to track the model parameters of the virtual three-dimensional finger model at each time point.

As the above embodiment shows, the present invention can measure the motion parameters of the individual fingers, such as the rotation angles between bone segments, the interlaced displacements between segments, the segment lengths, the local coordinate axes of the segments, the motion trajectories, and the angular velocities and angular accelerations; at the same time, the multi-camera system greatly reduces occlusion. The invention can therefore give physicians more information for diagnosis and treatment and a basis for evaluating a patient's degree of rehabilitation, and it increases the accuracy and practicality of measuring finger motion parameters. In addition, the invention has the advantages of low cost and a small space requirement.

While the invention has been disclosed above by way of preferred embodiments, they are not intended to limit the invention. Anyone skilled in the art may make various changes and refinements without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the present invention and its advantages, refer to the description above in conjunction with the following drawings, wherein:
FIG. 1 is a block diagram of a three-dimensional finger motion analysis system according to an embodiment of the present invention.
FIG. 2 is a schematic view of the relationship between the markers and the bones according to an embodiment of the present invention.
FIG. 3 shows the frames captured by the cameras at one time point according to an embodiment of the present invention.
FIG. 4A is a schematic view of the ball-and-socket joint adopted by an embodiment of the present invention.
FIG. 4B is a schematic view of the virtual three-dimensional finger model of an embodiment of the present invention.
FIG. 4C is a schematic view of the joint coordinate systems of the virtual three-dimensional finger model of an embodiment of the present invention.
FIG. 4D is a schematic view of the model parameters of the virtual three-dimensional finger model of an embodiment of the present invention.
FIG. 5A shows the segmented finger foreground of an embodiment of the present invention.
FIG. 5B shows the segmented marker foreground of an embodiment of the present invention.
FIG. 5C shows the finger foregrounds segmented from the different views of FIG. 3 according to an embodiment of the present invention.
FIG. 5D shows the segmented hand contour according to an embodiment of the present invention.
FIG. 5E shows the segmented hand contour after the background has been filtered out according to an embodiment of the present invention.
FIG. 6A is a flow chart of initializing the position parameters in the model parameter initializing module according to an embodiment of the present invention.
FIG. 6B is a flow chart of initializing the direction parameters in the model parameter initializing module according to an embodiment of the present invention.
FIG. 7 shows the result of the least-squares ellipse fitting according to an embodiment of the present invention.
FIG. 8A is a schematic view of locating the thumb root according to an embodiment of the present invention.
FIG. 8B is a schematic view of locating the little-finger root according to an embodiment of the present invention.
FIG. 8C is a schematic view of locating the index-finger root according to an embodiment of the present invention.
FIG. 8D is a schematic view of locating the middle-finger and ring-finger roots according to an embodiment of the present invention.
FIG. 8E shows the result after all finger roots have been located according to an embodiment of the present invention.
FIG. 9A shows the model parameters of an embodiment of the present invention before initialization.
FIG. 9B shows the result after the position parameters of an embodiment of the present invention have been initialized.
FIG. 9C shows the result after the direction parameters of an embodiment of the present invention have been initialized.
FIG. 10A is a schematic view of the joint-centre connecting lines according to an embodiment of the present invention.
FIG. 10B shows the marker classification results according to an embodiment of the present invention.
FIG. 10C is a schematic view of the epipolar lines according to an embodiment of the present invention.
FIG. 11A is a schematic view of finding the marker correspondences according to an embodiment of the present invention.
FIG. 11B shows the marker correspondence results according to an embodiment of the present invention.
FIG. 12 is a flow chart of the modified mean-shift algorithm according to an embodiment of the present invention.
FIG. 13 is a schematic view of predicting the possible marker positions according to an embodiment of the present invention.
FIG. 14A is a schematic view of the coordinate axes according to an embodiment of the present invention.
FIG. 14B is a schematic view of the conversion relationship between two coordinate systems according to an embodiment of the present invention.
FIG. 15A shows the three-dimensional model defined by the markers according to an embodiment of the present invention.
FIG. 15B shows the projection result of the three-dimensional model defined by the markers according to an embodiment of the present invention.
FIG. 16 is a flow chart of the three-dimensional finger motion analysis method according to an embodiment of the present invention.

DESCRIPTION OF THE REFERENCE NUMERALS

10 target hand; 12 palm; 14 finger; 20 camera; 22, 24, 26 markers; 28 knuckle; 30 virtual three-dimensional finger model; 40 image feature retrieving module; 50 model parameter initializing module; 60 module for predicting marker three-dimensional information; 70 module for predicting three-dimensional model parameters; 80 computer; 110 coordinate relationship of the marker binary images; 112 marker centre detection; 114 search for the initial position of each finger; 120 set the initial direction parameters; 122 optimize the direction parameters; 150 judge the reliability of the markers; 152 determine whether the number of reliable image planes is greater than or equal to 2; 154 reconstruct the marker three-dimensional coordinates; 156 compensate the marker positions with image information; 158 compensate the marker positions by back-projection; 200 pre-step; 202 set the markers; 204 set the cameras; 206 provide the virtual three-dimensional finger model; 210 capture the image sequence; 220 image feature retrieving; 230 model parameter initialization; 240 marker detection and correspondence; 250 marker tracking; 260 reconstruct marker three-dimensional information; 270 predict three-dimensional model parameters

Claims (1)

… the correspondence of the markers among the multiple views of the frames of each of the frame groups; a marker tracking module, used to track the positions of the markers in each of the frames of each of the frame groups with a modified mean-shift algorithm; and a module for reconstructing marker three-dimensional information, used to reconstruct the three-dimensional information of the markers of each of the frames of each of the frame groups with a least-squares method, thereby obtaining the marker three-dimensional information of each of the frame groups; and a module for predicting three-dimensional model parameters, used to define the joint coordinate systems and the model parameters of the virtual three-dimensional finger model at each of the time points according to the marker three-dimensional information of each of the frame groups.

2. The three-dimensional finger motion analysis system of claim 1, wherein the markers at least include: a plurality of first markers disposed on the palm; a plurality of second markers respectively disposed at the first end of each of the at least one knuckle of each of the fingers; and a plurality of third markers respectively disposed at the second end of each of the at least one knuckle of each of the fingers.

3. The three-dimensional finger motion analysis system of claim 2, wherein the virtual three-dimensional finger model uses a cylinder to simulate each of the at least one knuckle of each of the fingers, and uses a ball-and-socket structure composed of a sphere and a hemisphere to simulate the joint between every two adjacent knuckles of each of the fingers and the joint between a knuckle and the palm.

4. The three-dimensional finger motion analysis system of claim 2, wherein the number of the first markers is 3.

5. The three-dimensional finger motion analysis system of claim 1, wherein the image features at least include: the foreground region and contour information of each of the fingers in each of the frames, and the foreground regions of the markers.

6. The three-dimensional finger motion analysis system of claim 1, wherein the module for reconstructing marker three-dimensional information removes erroneous marker positions tracked by the modified mean-shift algorithm.

7. The three-dimensional finger motion analysis system of claim 1, wherein the module for predicting three-dimensional model parameters at least includes: a particle filter for integrating the marker three-dimensional information of each of the frame groups and the model parameters of the virtual three-dimensional finger model at the previous time point, to track the model parameters of the virtual three-dimensional finger model at each of the time points.

8. The three-dimensional finger motion analysis system of claim 1, wherein the image feature retrieving module, the model parameter initializing module, the module for predicting marker three-dimensional information and the module for predicting three-dimensional model parameters are installed in a personal computer.

9. The three-dimensional finger motion analysis system of claim 1, wherein two of the connecting lines between the markers are perpendicular to each other and are used to represent the coordinate system of the palm.

10. The three-dimensional finger motion analysis system of claim 1, wherein at the first of the time points the hand posture of the target hand is defined as full finger extension.

11. A three-dimensional finger motion analysis method, at least comprising: disposing a plurality of markers on a target hand, wherein the target hand has a palm and a plurality of fingers, each of the fingers has at least one knuckle, and the two ends of each of the at least one knuckle are, in order toward the palm, a first end and a second end; disposing a plurality of cameras respectively at different positions near the target hand, wherein the cameras have different fields of view; providing a virtual three-dimensional finger model to simulate the motion of the fingers of the target hand; using the cameras to capture images of the target hand sequentially at a plurality of time points, thereby sequentially obtaining a plurality of frame groups, each of the frame groups having a plurality of frames respectively corresponding to the cameras; performing an image feature retrieving step to retrieve a plurality of image features of each of the frames of each of the frame groups according to Cr-chrominance information; performing a model parameter initializing step to initialize a plurality of position parameter sets and a plurality of direction parameter sets of the virtual three-dimensional finger model at the first of the time points according to the image features, and to obtain the knuckle length of each of the at least one knuckle of each of the fingers; performing a step of predicting marker three-dimensional information, at least including: performing a marker detection and correspondence step to establish the correspondence of the markers among the multiple views of the frames of each of the frame groups according to the position parameter sets and the direction parameter sets and using an epipolar constraint and a knuckle-length constraint; performing a marker tracking step to track the positions of the markers in each of the frames of each of the frame groups with a modified mean-shift algorithm; and performing a step of reconstructing marker three-dimensional information to reconstruct the three-dimensional information of the markers of each of the frames of each of the frame groups with a least-squares method, thereby obtaining the marker three-dimensional information of each of the frame groups; and performing a step of predicting three-dimensional model parameters to define the joint coordinate systems and the model parameters of the virtual three-dimensional finger model at each of the time points according to the marker three-dimensional information of each of the frame groups.

12. The three-dimensional finger motion analysis method of claim 11, wherein disposing the markers at least includes: disposing a plurality of first markers on the palm; disposing a plurality of second markers respectively at the first end of each of the at least one knuckle of each of the fingers; and disposing a plurality of third markers respectively at the second end of each of the at least one knuckle of each of the fingers.

13. The three-dimensional finger motion analysis method of claim 12, wherein the virtual three-dimensional finger model uses a cylinder to simulate each of the at least one knuckle of each of the fingers, and uses a ball-and-socket structure to simulate the joint between every two adjacent knuckles of each of the fingers and the joint between a knuckle and the palm.

14. The three-dimensional finger motion analysis method of claim 12, wherein the number of the first markers is 3.

15. The three-dimensional finger motion analysis method of claim 11, wherein the image features at least include: the foreground region and contour information of each of the fingers in each of the frames, and the foreground regions of the markers.

16. The three-dimensional finger motion analysis method of claim 11, wherein the step of reconstructing marker three-dimensional information at least includes: removing erroneous marker positions tracked by the modified mean-shift algorithm.

17. The three-dimensional finger motion analysis method of claim 11, wherein the step of predicting three-dimensional model parameters at least includes: using a particle filter to integrate the marker three-dimensional information of each of the frame groups and the model parameters of the virtual three-dimensional finger model at the previous time point, to track the model parameters of the virtual three-dimensional finger model at each of the time points.

18. The three-dimensional finger motion analysis method of claim 11, wherein two of the connecting lines between the markers are perpendicular to each other and are used to represent the coordinate system of the palm.

19. The three-dimensional finger motion analysis method of claim 11, wherein at the first of the time points the hand posture of the target hand is defined as full finger extension.
TW097106361A 2008-02-22 2008-02-22 Three-dimensional finger motion analysis system and method TWI346311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW097106361A TWI346311B (en) 2008-02-22 2008-02-22 Three-dimensional finger motion analysis system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW097106361A TWI346311B (en) 2008-02-22 2008-02-22 Three-dimensional finger motion analysis system and method

Publications (2)

Publication Number Publication Date
TW200937350A true TW200937350A (en) 2009-09-01
TWI346311B TWI346311B (en) 2011-08-01

Family

ID=44867038

Family Applications (1)

Application Number Title Priority Date Filing Date
TW097106361A TWI346311B (en) 2008-02-22 2008-02-22 Three-dimensional finger motion analysis system and method

Country Status (1)

Country Link
TW (1) TWI346311B (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201412297A (en) * 2012-09-28 2014-04-01 zhi-zhen Chen Three-dimensional recording system of upper limb disorder rehabilitation process of and recording method thereof
TWI736083B (en) 2019-12-27 2021-08-11 財團法人工業技術研究院 Method and system for motion prediction

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI421807B (en) * 2010-05-12 2014-01-01 Nat Univ Chin Yi Technology Mechanics analyzing device for ancient animal
TWI427558B (en) * 2010-12-06 2014-02-21 Ind Tech Res Inst System for estimating location of occluded skeleton, method for estimating location of occluded skeleton and method for reconstructing occluded skeleton
US9117138B2 (en) 2012-09-05 2015-08-25 Industrial Technology Research Institute Method and apparatus for object positioning by using depth images
TWI562099B (en) * 2015-12-23 2016-12-11 Univ Nat Yunlin Sci & Tech Markers Based 3D Position Estimation for Rod Shaped Object Using 2D Image and Its Application In Endoscopic MIS Instrument Tracking Positioning and Tracking
TWI594208B (en) * 2016-11-01 2017-08-01 國立雲林科技大學 The Method Of Complete Endoscopic MIS Instrument 3D Position Estimation Using A Single 2D Image
TWI685817B (en) * 2018-01-26 2020-02-21 輔仁大學學校財團法人輔仁大學 The manufacturing process of the three-dimensional parametric model of the hand and the auxiliary equipment made with the model
CN111353355A (en) * 2018-12-24 2020-06-30 财团法人工业技术研究院 Motion tracking system and method
CN111353355B (en) * 2018-12-24 2023-09-19 财团法人工业技术研究院 Motion tracking system and method
US20220198682A1 (en) * 2020-12-17 2022-06-23 Samsung Electronics Co., Ltd. Method and apparatus for tracking hand joints

Also Published As

Publication number Publication date
TWI346311B (en) 2011-08-01

Similar Documents

Publication Publication Date Title
TW200937350A (en) Three-dimensional finger motion analysis system and method
US9330307B2 (en) Learning based estimation of hand and finger pose
JP6369534B2 (en) Image processing apparatus, image processing method, and image processing program
CN108717531B (en) Human body posture estimation method based on Faster R-CNN
Gupta et al. Texas 3D face recognition database
Bogo et al. FAUST: Dataset and evaluation for 3D mesh registration
JP5820366B2 (en) Posture estimation apparatus and posture estimation method
US20150347833A1 (en) Noncontact Biometrics with Small Footprint
Tulyakov et al. Robust real-time extreme head pose estimation
CN110287772A (en) Plane palm centre of the palm method for extracting region and device
JP2019096113A (en) Processing device, method and program relating to keypoint data
CN107949851A (en) The quick and robust control policy of the endpoint of object in scene
Yan et al. Cimi4d: A large multimodal climbing motion dataset under human-scene interactions
Su et al. Virtualpose: Learning generalizable 3d human pose models from virtual data
Desai et al. Combining skeletal poses for 3D human model generation using multiple Kinects
WO2021235440A1 (en) Method and device for acquiring movement feature amount using skin information
Nguyen et al. Vision-based global localization of points of gaze in sport climbing
CN113570535A (en) Visual positioning method and related device and equipment
Moreno et al. Marker-less feature and gesture detection for an interactive mixed reality avatar
Jiang et al. Real-time multiple people hand localization in 4d point clouds
Asad et al. Learning marginalization through regression for hand orientation inference
Ding et al. Weakly structured information aggregation for upper-body posture assessment using ConvNets
Ascenso Development of a non-invasive motion capture system for swimming biomechanics
Mehmood et al. Ghost pruning for people localization in overlapping multicamera systems
WO2023152973A1 (en) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees