201241781

VI. Description of the Invention:

[Technical Field]

The present invention relates to a three-dimensional (3D) eyeglasses virtual try-on interactive service system and method operating on live images of the user, and more particularly to an eyeglasses 3D try-on interactive service system and method for use on an e-commerce interactive platform.

[Prior Art]

With the vigorous growth of e-commerce, more and more consumers rely on e-commerce interactive platforms to select their favorite goods and merchandise. Pictures of products displayed on models, and electronic try-on systems and software, increasingly attract consumers' attention and in turn stimulate their desire to purchase. Try-on systems that operate on live images of the user are among the most welcomed: simply by supplying a photograph of his or her own face, a user can pick suitable eyeglasses from tens of thousands of eyeglass products.

However, conventional eyeglasses try-on systems are mostly two-dimensional (2D): the user can only view a frontal image of himself or herself wearing the glasses, and cannot view the result from the left or right side. Moreover, conventional try-on systems cannot properly composite the glasses onto the user's face as the face moves and turns, so the composite image often looks abrupt or unreal.

Since conventional try-on systems and methods lack an effective means of solving these problems, there is an urgent need for a novel eyeglasses virtual try-on interactive service system and method that resolves them through accurate simulation and computation.

SUMMARY OF THE INVENTION

To solve the above problems, the present invention provides an eyeglasses virtual try-on interactive service system and method operating on live images. Through accurate simulation and computation, it addresses the problem that, when the consumer's face moves or turns, the glasses cannot be properly composited onto the face, so that the composite image looks abrupt or unreal.
According to one embodiment, the present invention provides an eyeglasses virtual try-on interactive service method operating on live images, comprising: positioning a face through a frame within a picture and obtaining a first face image; obtaining a plurality of first feature points at the eyes of the first face image; sampling a plurality of pieces of first feature information around each first feature point and storing them; judging, by picture comparison, whether the face has moved, and obtaining a second face image from the next picture; obtaining a plurality of second feature points in the second face image by combining a search comparison range with tracking of the feature information, thereby obtaining positioning information for the second feature points; comparing the relative position information of the first face image and the second face image to judge the position, movement state, and scale of the face, thereby computing the positions of the plurality of second feature points; and compositing a preset eyeglasses model at the positions of the plurality of second feature points.
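The claimed method amounts to a frame-by-frame processing loop. The following is a minimal sketch of that loop, not the patented implementation: every helper name is an illustrative assumption, frames are represented as dicts mapping (x, y) to (r, g, b), and the tracking and compositing stubs stand in for the detailed steps described later.

```python
# Illustrative sketch of the claimed loop (assumed names, not from the patent):
# locate the face, take feature points, track them into each next frame,
# and place the glasses model at the tracked positions.

def acquire_feature_points(frame):
    # In the patent the user clicks the eye corners (and mouth corners)
    # inside a dashed guide frame; here we return fixed demo points.
    return [(30, 40), (70, 40)]

def track_points(prev_frame, next_frame, points):
    # Stand-in for the search/track step: keep the same coordinates.
    return list(points)

def composite_glasses(frame, points):
    # Stand-in for compositing: report where the model would be anchored
    # (the centroid of the tracked feature points).
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def try_on(frames):
    points = acquire_feature_points(frames[0])
    anchors = []
    for prev, nxt in zip(frames, frames[1:]):
        points = track_points(prev, nxt, points)
        anchors.append(composite_glasses(nxt, points))
    return anchors

print(try_on([{}, {}, {}]))  # one anchor per subsequent frame
```

The real system replaces each stub with the sampling, frame-difference, and candidate-ranking steps detailed in the embodiments below.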
According to one embodiment, the present invention provides an eyeglasses virtual try-on interactive service system operating on live images, comprising: an image capture unit, which positions a face through a frame within a picture and obtains a first face image; a processing unit, coupled to the image capture unit, which obtains a plurality of first feature points at the eyes of the first face image, applies a designed sampling pattern around each first feature point to obtain and store a plurality of pieces of favorable first feature information, judges by picture comparison whether the face has moved, obtains the second face image from the next picture, and obtains a plurality of second feature points in the second face image by combining a search comparison range with tracking of the feature information, thereby obtaining positioning information for the second feature points; an analysis unit, coupled to the processing unit, which compares the relative position information of the first face image and the second face image to judge the position, movement state, and scale of the face, thereby computing the positions of the plurality of second feature points; and a synthesis unit, coupled to the analysis unit, which composites a preset virtual eyeglasses model at the positions of the plurality of second feature points. The positions of the dynamic third, fourth, and subsequent feature points are then obtained in the same manner.
According to another embodiment, the present invention provides a contact lens virtual try-on interactive service method, comprising: positioning a face through a frame within a picture and obtaining a first face image; obtaining a plurality of first feature points at the eye pupils of the first face image; sampling a plurality of pieces of first feature information around each first feature point and storing them; and compositing a preset contact lens model at the positions of the plurality of second feature points. The positions of the dynamic third, fourth, and subsequent feature points are then obtained in the same manner.

The following drawings, reference numerals, and detailed description explain the invention further and should assist the examiner in the review.

[Embodiments]

The detailed structure of the invention and the relationships among its parts are explained with reference to the following drawings. FIG. 1 shows an eyeglasses virtual try-on interactive service method operating on live images according to one embodiment of the present invention. First, a face is positioned through a frame within a picture, and a first face image is obtained (step S101). As shown in FIG. 2A, in this embodiment a dashed guide frame is displayed on the picture, and the user is asked to place the front of the face inside the frame, matching its size and aligning the eyes with the horizontal line, so that the face is positioned and the first face image is captured. Next, a plurality of first feature points are obtained at the eyes of the first face image, for example at the eye corners on both sides (step S102). The first feature points may include, without limitation, the eye corners on both sides of the left and right eyes and the two mouth corners. As marked by crosses in FIG. 2B, in this embodiment the user may manually click the eye corner points of the left and right eyes and the mouth corner points in the first face image to obtain one or more first feature points at each location.
Alternatively, in another embodiment, face recognition may be used to let the program capture the feature points automatically. The method then judges whether the first face image conforms to face logic; if not, the first face image or the plurality of first feature points must be re-acquired; if so, the next step proceeds. A search operation is then applied to judge whether the plurality of first feature points lie within the frame; if not, the first face image or the plurality of first feature points are re-acquired; if so, the next step proceeds. Judging whether the feature point positions conform to face logic includes, for example, checking whether the eye corner points of the left (right) eye lie in the left (right) half of the dashed frame, among other such conditions.
Face-logic checks also include whether the height difference between the two mouth corners is too large. If any one of these conditions is not met, the first feature point or the first face image must be re-acquired.

Next, a sampling pattern is designed around each first feature point to obtain a plurality of pieces of favorable first feature information, and the plurality of pieces of first feature information are stored (step S103); the point spacings between the plurality of first feature points are stored at the same time. More specifically, the feature information is pixel color information: rays radiate from each feature point in m directions, and n pixels are taken along each direction as the color information, where m and n are positive integers. Alternatively, the rays radiate from each feature point over a semicircle in m directions, with n pixels taken in each direction, where m and n are positive integers and the semicircle at least covers an eye corner. As shown in FIG. 2C, in this embodiment the color information of pixels neighboring each feature point is captured with this sampling pattern while the spacings between feature points are recorded: for example, eight directions extend outward from a feature point in a semicircular radiating pattern, and seven points are taken in each direction, giving 56 pieces of color information as the feature information of that point, the semicircle at least fully covering an eye corner. Alternatively, as shown in FIG. 2D, in another embodiment the sampling pattern extends outward from the feature point in eight full-circle radiating directions, again taking seven points per direction for 56 pieces of color information.

Then, the second face image in the next picture is obtained, and picture comparison is used to judge whether the face has moved (step S104).
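The m-direction, n-pixel sampling pattern can be sketched as follows. This is a minimal illustration assuming the image is represented as a dict of pixel colors (the patent does not prescribe a data structure); with the defaults m = 8 and n = 7 it yields the 56 samples described above.

```python
import math

def radial_samples(image, point, m=8, n=7, semicircle=False):
    """Sample n pixels along each of m rays radiating from a feature point.

    `image` maps (x, y) -> (r, g, b); missing pixels default to black.
    With semicircle=True the rays span only half a circle, as in the
    variant whose semicircle covers an eye corner.
    """
    span = math.pi if semicircle else 2 * math.pi
    x0, y0 = point
    samples = []
    for d in range(m):
        angle = span * d / m
        for step in range(1, n + 1):
            x = int(round(x0 + step * math.cos(angle)))
            y = int(round(y0 + step * math.sin(angle)))
            samples.append(image.get((x, y), (0, 0, 0)))
    return samples

info = radial_samples({}, (10, 10))
print(len(info))  # 8 directions x 7 pixels = 56 samples
```

The stored vector of 56 color values, together with the recorded point spacings, is what later frames are matched against.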
Within a time interval, the current picture is compared with the next one, and the plurality of first feature points are dynamically tracked for movement traces. For example, in this embodiment the picture and the next picture are compared by pixel subtraction to find objects that moved during the interval: clear movement traces near the feature points indicate that the face moved during that time, and the subsequent steps and computations are performed; conversely, if the face did not move, the subsequent tracking may be skipped. Alternatively, the amount of movement may be estimated by counting, among the pixel-difference results, the white points that fall within the sampled feature information: a large number of white points indicates a large amount of movement in the face image, and the subsequent steps and computations are performed, whereas few white points near the feature points mean the face shows no clear movement trace and no further processing is needed.

Next, a noise filtering method, such as one of Gaussian blurring, the median method, and the mean method, is applied to remove the noise in the next picture, so that the subsequent comparison is not disturbed.

When movement traces are present, the search comparison range and the tracked feature information are used to obtain the plurality of second feature points in the second face image, and thereby the plurality of pieces of second feature information (step S105). The relative position information of the first face image and the second face image is compared to judge the position, movement state, and scale of the face and to compute the positions of the plurality of second feature points. At least one comparison range is preset; within it, candidate pixels are compared against the stored first feature information, the resulting error values are sorted, and the best-ranked i candidates are kept. In this embodiment, the comparison range is governed by two states. (1) Search state: this state is entered right after the feature points are first selected, or after a tracking failure.
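The subtraction-based movement test described above can be sketched as follows, in an illustrative minimal form: a pixel counts as a "white point" when the frame difference at that location exceeds a threshold, and the face is deemed to have moved when enough white points appear near the feature points. The threshold and count values are assumptions for the sketch, not taken from the patent.

```python
def has_moved(prev_frame, next_frame, feature_points, radius=8,
              threshold=30, min_white_points=5):
    """Subtract consecutive frames near the feature points and count the
    'white points' (pixels whose color changed by more than `threshold`).
    Frames map (x, y) -> (r, g, b); missing pixels default to black."""
    white = 0
    for fx, fy in feature_points:
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                p = (fx + dx, fy + dy)
                a = prev_frame.get(p, (0, 0, 0))
                b = next_frame.get(p, (0, 0, 0))
                if sum(abs(ca - cb) for ca, cb in zip(a, b)) > threshold:
                    white += 1
    return white >= min_white_points

still = {}
moved = {(10 + dx, 10 + dy): (255, 255, 255)
         for dx in range(3) for dy in range(3)}
print(has_moved(still, still, [(10, 10)]))   # False: no movement trace
print(has_moved(still, moved, [(10, 10)]))   # True: many white points
```

Only when this test fires does the method proceed to the candidate search and tracking computations.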
In this state, the comparison range is static and restricted to the neighborhood where the user first clicked the feature points; in effect the user must align the face with the dashed frame before candidates can enter the comparison range. (2) Tracking state: when the feature points were successfully matched in the previous picture, the tracking state applies; the comparison region is then the neighborhood of the feature points matched in the previous picture, that is, the region moves dynamically with the feature points currently being tracked. Within the comparison range, N pixels are taken according to the designed sampling pattern; for each candidate pixel, its 56 neighboring points are sampled in the same way, and their RGB and YCbCr color information is compared with the first feature information recorded at the start, yielding error values Error 1 through Error N. These N values are sorted, and the i pixels with the smallest errors, for example ten, are taken as candidate points; a clustering pass then removes outliers, and the coordinates of the remaining, more concentrated pixels are averaged. The result is the final tracking result. If the error values found among the i points are all too large, the user has moved beyond the tracking range, or an additional occluder has appeared in front of the feature point; tracking is then judged to have failed, and no further computation is performed.
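The candidate-selection step just described (sort the per-pixel matching errors, keep the best i, discard outliers, and average the survivors) can be sketched as below. This is an illustrative reconstruction: the patent fixes neither the error metric nor the clustering rule, so a plain sum of absolute color differences and a distance-from-median trim are assumed here.

```python
def match_error(sample_a, sample_b):
    """Sum of absolute color differences between two sample vectors."""
    return sum(abs(ca - cb)
               for pa, pb in zip(sample_a, sample_b)
               for ca, cb in zip(pa, pb))

def pick_tracked_point(candidates, stored_info, i=10, max_error=None):
    """candidates: list of ((x, y), samples). Returns the averaged position
    of the best i candidates after a simple outlier trim, or None when all
    retained errors exceed max_error (tracking failure)."""
    ranked = sorted(candidates,
                    key=lambda c: match_error(c[1], stored_info))[:i]
    if max_error is not None and all(
            match_error(c[1], stored_info) > max_error for c in ranked):
        return None  # errors all too large: user out of range or occluded
    xs = sorted(p[0] for p, _ in ranked)
    ys = sorted(p[1] for p, _ in ranked)
    mx, my = xs[len(xs) // 2], ys[len(ys) // 2]  # median position
    kept = [(x, y) for (x, y), _ in ranked
            if abs(x - mx) + abs(y - my) <= 5]   # trim far-away outliers
    return (sum(x for x, _ in kept) / len(kept),
            sum(y for _, y in kept) / len(kept))

stored = [(10, 10, 10)]
cands = [((5, 5), [(10, 10, 10)]), ((6, 5), [(11, 12, 10)]),
         ((5, 6), [(9, 10, 12)]), ((50, 50), [(200, 200, 200)])]
print(pick_tracked_point(cands, stored, i=3))  # near (5.33, 5.33)
```

In the full method this runs once per feature point per frame, with the comparison range chosen by the search/tracking state.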
In addition, a geometric calculation may be performed in this example. For instance, the inclination of the face is computed as a slope from the positions of the plurality of first feature points and the plurality of second feature points; the near/far scale of the face is computed as a change in length from the distance between the plurality of first feature points and the distance between the plurality of second feature points; and the rotation angle and pitch angle of the face are computed as changes in proportion between the plurality of first feature points and the plurality of second feature points. If the inclination, near/far scale, rotation angle, or pitch angle of the face exceeds a preset tolerance, the second face image is re-acquired. As shown in FIG. 3A, the eye corner points of the two eyes in the first face image lie on a horizontal axis, while in the second face image they lie at an angle, so the inclination of the face can be computed from the slope. As shown in FIG. 3B, the near/far scale of the face is computed from the change between the distance between the eye corners in the first face image and the distance D2 between the eye corners in the second face image. As shown in FIG. 3C, when the 1:1 proportion between the eye corners on the two sides of the first face image becomes a different proportion in the second face image, the rotation angle of the face is computed from that change in proportion; likewise, the two eyes and the mouth lie on horizontal axes in the first face image, and the change of their proportions in the second face image relative to those of the first face image gives the pitch angle of the face, computed from the change in proportion.
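Two of these comparisons, slope for tilt and length change for near/far scale, can be sketched as below. The patent states the principles but not the formulas, so the arctangent and ratio forms here are assumptions; rotation and pitch would follow analogously from left/right and eye-to-mouth proportion changes.

```python
import math

def face_geometry(first_eyes, second_eyes):
    """Each argument is ((lx, ly), (rx, ry)): the eye-corner points of one
    face image. Returns (tilt_degrees, scale) between the two images."""
    (lx1, ly1), (rx1, ry1) = first_eyes
    (lx2, ly2), (rx2, ry2) = second_eyes
    # Tilt: change in slope of the line joining the eye corners.
    tilt = math.degrees(math.atan2(ry2 - ly2, rx2 - lx2)
                        - math.atan2(ry1 - ly1, rx1 - lx1))
    # Near/far scale: change in eye-corner distance (D2 / D1).
    scale = (math.hypot(rx2 - lx2, ry2 - ly2)
             / math.hypot(rx1 - lx1, ry1 - ly1))
    return tilt, scale

tilt, scale = face_geometry(((0, 0), (100, 0)), ((0, 0), (200, 0)))
print(tilt, scale)  # 0.0 2.0 -> face not tilted, twice as close
```

A tolerance check on the returned values implements the re-acquisition rule: values beyond the preset limits mean the second face image should be captured again.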
A preset eyeglasses model is then extracted from a preset eyeglasses database and composited at the positions of the plurality of second feature points (step S106); using the size of the face, the plurality of first feature points, and the error values, the model is scaled and rotated to an appropriate size and composited, the database storing the data of the eyeglasses models. As shown in FIG. 4, the positions computed by the preceding steps follow the user's movements, and because face sizes differ, the model is scaled and rotated in accordance with the plurality of first feature points before being composited. After this step the method may end, or the search and tracking steps may continue. In this embodiment, the eyeglasses model is a three-dimensional (3D) model. It may be produced by photographing a physical pair of glasses from several directions with an imaging device (for example, a digital camera) to obtain plane images of the front, full left side, and full right side (as shown in FIGS. 5A and 5B), although the invention is not limited to this; alternatively, the physical glasses may be rotated through plus and minus 90 degrees to capture the three plane images of the front, full left side, and full right side. The three plane images so obtained are then combined into the three-dimensional eyeglasses model using image-composition software and hardware (as shown in FIG. 5C). Once the three-dimensional eyeglasses model is completed, various parameters (for example, rotation, color, scale, and transparency parameters) can be applied to adjust the position and color of the 3D glasses, and viewing options (for example, front view or left and right side views) can be applied to examine the model (as shown in FIG. 5C). This embodiment considers a 3D frame without lenses; however, those skilled in the art will appreciate that the same approach can be used to produce eyeglasses models with lenses.
FIG. 6 shows a contact lens virtual try-on interactive service method according to another embodiment of the present invention. This embodiment differs from the preceding one in that the eye pupils serve as the feature points, combined with the sampling pattern to obtain the feature information, and in that it is applied to virtually trying on contact lenses; the remaining methods and operations are similar to those of the preceding embodiment, so only the differences are detailed in this example and the similar steps are not repeated. The 3D simulation method of this embodiment includes: positioning a face through a frame within a picture and obtaining a first face image (step S601); obtaining a plurality of first feature points at the pupil features of the first face image (step S602), the plurality of first feature points including the pupils of the left and right eyes and the two mouth corners (for example, as marked by crosses in FIG. 7A, in this embodiment the pupil points of the left and right eyes and the points at the two mouth corners of the first face image are clicked manually to obtain one or more first feature points at each location); and sampling first feature information around each first feature point and storing the plurality of pieces of first feature information (step S603). In this embodiment, judging whether the feature point positions conform to face logic includes checking whether the pupil of the left (right) eye lies in the left (right) half of the dashed frame, whether the pupil is separated from the eye corners on both sides by a certain spacing, and whether the height difference between the pupils is too large.
If any one of these conditions is not met, the first feature point or the first face image must be re-acquired. Next, the second face image is obtained from the next picture, and whether the face has moved is judged (step S604); the search comparison range and the tracked feature information are then used to obtain the plurality of second feature points in the second face image, and thereby the plurality of pieces of second feature information (step S605), the point spacings between the plurality of first feature points being stored at the same time. As shown in FIG. 7B, this embodiment captures the color information of pixels neighboring a feature point with the sampling method while recording the spacings between feature points, for example by radiating outward from the pupil point. The relative position information of the first face image and the second face image is compared to judge the position and movement of the face and to compute the positions of the plurality of second feature points (step S606). For example, the pupil points in the first face image lie on a horizontal axis while the pupil point coordinates of the second face image are tilted upward, so the inclination of the face can be computed from the slope; the near/far scale of the face is computed from the change between the distance between the two pupils in the first face image and that in the second face image; the rotation angle of the face is computed from the change in the proportion of the two pupils between the first and second face images; and the pitch angle is computed from the change in the proportion between the pupils and the mouth in the second face image relative to that in the first face image.
If the inclination, near/far scale, rotation angle, or pitch angle of the face exceeds a preset tolerance, the second face image is re-acquired. A preset contact lens model is then composited at the positions of the plurality of second feature points (step S607), the lens model being scaled and rotated according to the size of the face and the plurality of first feature points so that it is composited appropriately onto the face. After this step the method may end, or the next search and tracking step may continue.

FIG. 8 shows an eyeglasses virtual try-on interactive service system according to one embodiment of the present invention. The system includes a capture unit 81, a processing unit 82, an analysis unit 83, and a synthesis unit 84. The capture unit 81 positions a face through a frame within a picture and obtains a first face image; it may be an imaging device such as a webcam. The processing unit 82, coupled to the capture unit 81, obtains a plurality of first feature points at the eye features of the first face image, such as the eye corners on both sides, samples a plurality of pieces of first feature information around each first feature point according to the sampling pattern and stores them, obtains the second face image in the next picture, judges whether the face has moved, and obtains the plurality of second feature points in the second face image and thereby the plurality of pieces of second feature information; the plurality of first feature points include, without limitation, the eye corners on both sides of the left and right eyes and the two mouth corners. The processing unit 82 further judges, from the plurality of first feature points, whether the first face image conforms to face logic and, if not, re-acquires the first face image.
If it does, the next step proceeds: the processing unit further applies a search operation to judge whether the plurality of first feature points lie within the frame and, if not, re-acquires the first face image before continuing. The feature information of the sampling pattern is pixel color information: the processing unit 82 takes rays radiating from each feature point in m directions, with n pixels taken in each direction as the color information, where m and n are positive integers; alternatively, the processing unit takes the rays over a semicircle in m directions, with n pixels per direction as the color information, the semicircle at least covering an eye corner, where m and n are positive integers. The analysis unit 83, coupled to the processing unit 82, compares the differences in relative position information between the first face image and the second face image to judge the position, movement state, and scale of the face, thereby computing the positions of the plurality of second feature points. Within a time interval, it compares the picture with the next one and tracks whether the plurality of first feature points show movement traces, and it removes the noise in the next picture with one of Gaussian blurring, the median method, and the mean method.
The analysis unit further presets at least one comparison range for comparing candidates against the stored feature information, computes the error values between the candidates and the plurality of pieces of second feature information, sorts those error values, and takes the best i candidates to determine the positions of the plurality of second feature points. The analysis unit also computes the inclination of the face as a slope from the plurality of first feature points and the plurality of second feature points, computes the near/far scale of the face as a change in length from the distance between the plurality of first feature points and the distance between the plurality of second feature points, and computes the rotation angle and pitch angle of the face as changes in proportion between the plurality of first feature points and the plurality of second feature points; if the inclination, near/far scale, rotation angle, or pitch angle of the face exceeds a preset tolerance, the second face image is re-acquired. The synthesis unit 84, coupled to the analysis unit 83, extracts a preset eyeglasses model from a preset eyeglasses database 85 and composites it at the positions of the plurality of second feature points, the synthesis unit scaling and rotating the model according to the size of the face and the plurality of first feature points before compositing it onto the face. With appropriate modifications to its operations and steps, the eyeglasses virtual try-on simulation system of this example is also applicable as a contact lens virtual try-on interactive service system.

The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the scope in which the invention may be practiced; all equivalent changes and modifications made within the scope of the claims of the present invention shall remain within the scope covered by this patent.
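The synthesis unit's final step, scaling and rotating the glasses model onto the tracked feature points, can be sketched as a 2D similarity placement. This is illustrative only, since the patent leaves the rendering details to the image-composition software; the model's eye-line span is an assumed convention of the sketch.

```python
import math

def place_glasses(model_points, left_eye, right_eye, model_eye_span=100.0):
    """Map model-space points onto the face so that the model's eye line
    (assumed to span `model_eye_span` units along x from the origin)
    lands on the tracked eye-corner points."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    scale = math.hypot(dx, dy) / model_eye_span  # resize to the face
    angle = math.atan2(dy, dx)                   # follow the face tilt
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    placed = []
    for mx, my in model_points:
        rx = scale * (mx * cos_a - my * sin_a) + left_eye[0]
        ry = scale * (mx * sin_a + my * cos_a) + left_eye[1]
        placed.append((rx, ry))
    return placed

# Model anchored at (0, 0)-(100, 0); tracked eyes at (20, 30)-(220, 30).
print(place_glasses([(0, 0), (100, 0)], (20, 30), (220, 30)))
```

Re-running this with each frame's newly tracked eye points keeps the composited glasses following the face as it moves, turns, and changes distance.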
The examiner is respectfully requested to examine the foregoing and grant the application.

[Brief Description of the Drawings]

FIG. 1 shows an eyeglasses virtual try-on interactive service method according to one embodiment of the present invention;
FIGS. 2A, 2B, 2C, and 2D show the operations of the method of FIG. 1;
FIGS. 3A, 3B, and 3C show the tilt, rotation, and movement angles of the face;
FIG. 4 is a schematic view of the glasses composited onto the face;
FIGS. 5A, 5B, and 5C show a three-dimensional eyeglasses production method according to one embodiment of the present invention;
FIG. 6 shows a contact lens virtual try-on interactive service method according to another embodiment of the present invention;
FIGS. 7A and 7B show the operations of the method of FIG. 6;
FIG. 8 shows an eyeglasses virtual try-on interactive service system according to one embodiment of the present invention.

[Description of Main Element Symbols]

Steps S101 to S107
Steps S601 to S607
81 image capture unit
82 processing unit
83 analysis unit
84 synthesis unit
85 eyeglasses database