TW201241781A - Interactive service methods and systems for virtual glasses wearing - Google Patents

Interactive service methods and systems for virtual glasses wearing

Info

Publication number
TW201241781A
TW201241781A
Authority
TW
Taiwan
Prior art keywords
face
virtual
feature
glasses
feature points
Prior art date
Application number
TW100112053A
Other languages
Chinese (zh)
Other versions
TWI433049B (en)
Inventor
Nien-Chu Wu
Rui-Min Chih
Chi-Neng Liu
Chiou-Shan Chou
Wei-Ming Chen
Original Assignee
Claridy Solutions Inc
Kobayashi Optical Co Ltd
Priority date
Filing date
Publication date
Application filed by Claridy Solutions Inc, Kobayashi Optical Co Ltd filed Critical Claridy Solutions Inc
Priority to TW100112053A priority Critical patent/TWI433049B/en
Publication of TW201241781A publication Critical patent/TW201241781A/en
Application granted granted Critical
Publication of TWI433049B publication Critical patent/TWI433049B/en


Landscapes

  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to an interactive service method and system for virtual glasses wearing. The try-on method comprises: locating a face by using a guide frame within a screen to obtain a first face image, and from it a plurality of first feature points; obtaining a plurality of pieces of first feature information near each first feature point according to a designed sampling pattern; using image analysis to trace movement and obtain a plurality of second feature points in a second face image, and from them a plurality of pieces of second feature information according to the same sampling pattern; comparing the first face image with the second face image to determine the location, movement, and scaling rate of the face, and thereby calculate the locations of the plurality of second feature points; and compositing a predetermined virtual glasses model at the locations of the plurality of second feature points.

Description

VI. Description of the Invention

[Technical Field]

The present invention relates to a three-dimensional (3D) virtual try-on interactive service system and method for glasses using live images of the user, and in particular to a 3D glasses try-on interactive service system and method for use on e-commerce interactive platforms.

[Prior Art]

With the vigorous development of e-commerce, more and more consumers rely on e-commerce interactive platforms to select their favorite goods. Composite pictures showing merchandise on models, together with electronic try-on systems and software, increasingly attract consumers' attention and stimulate the desire to buy. Try-on systems that use a live image of the user are among the most popular: with a single photo of their own face, users can pick suitable glasses from tens of thousands of products.

However, traditional glasses try-on systems are mostly two-dimensional (2D): the user can only view a frontal image of the try-on and cannot view the left or right profile. Moreover, traditional systems cannot composite the glasses onto the consumer's face appropriately as the face moves and rotates, so the composite image often looks abrupt or unreal.

Since traditional try-on systems and methods lack an effective way to solve these problems, a novel glasses virtual try-on interactive service system and method is needed that resolves them through accurate simulation and computation.

[Summary of the Invention]

To solve the above problems, the present invention provides a virtual glasses try-on interactive service system and method for live images. Accurate simulation and computation solve the problem that, when the consumer's face moves and rotates, the glasses cannot be composited appropriately onto the face, producing an abrupt or unreal composite image.
According to one embodiment, the invention provides a glasses virtual try-on interactive service method for live images, comprising: locating a face through a frame within a screen and obtaining a first face image; obtaining a plurality of first feature points at the eyes of the first face image; sampling a plurality of pieces of first feature information around each first feature point and storing them; determining from frame comparison whether the face moves dynamically, and obtaining a second face image from the next frame; combining a search comparison range with feature-information tracking to obtain a plurality of second feature points in the second face image and thereby their positioning information; comparing the difference in relative position information between the first and second face images to determine the position, movement state, and scale of the face, and thereby computing the positions of the second feature points; and compositing a preset glasses model at the positions of the second feature points.

According to one embodiment, the invention provides a glasses virtual try-on interactive service system for live images, comprising: an image capture unit that locates a face through a frame within a screen and obtains a first face image; a processing unit, coupled to the image capture unit, that obtains a plurality of first feature points at the eyes of the first face image, applies a designed sampling pattern around each first feature point to obtain and store a plurality of pieces of first feature information, determines from frame comparison whether the face moves dynamically, obtains the second face image from the next frame, and combines a search comparison range with feature-information tracking to obtain the second feature points and their positioning information; an analysis unit, coupled to the processing unit, that compares the difference in relative position information between the two face images to determine the position, movement state, and scale of the face and computes the positions of the second feature points; and a synthesis unit, coupled to the analysis unit, that composites a preset virtual glasses model at the positions of the second feature points. The positions of the dynamic third, fourth, and subsequent feature points are obtained analogously.

According to another embodiment, the invention provides a contact-lens virtual try-on interactive service method, comprising: locating a face through a frame within a screen and obtaining a first face image; obtaining a plurality of first feature points at the eye pupils of the first face image; sampling a plurality of pieces of first feature information around each first feature point and storing them; and compositing a preset contact-lens model at the positions of the second feature points, the positions of subsequent dynamic feature points again being obtained analogously.

The invention is explained further by the following drawings and detailed description.
[Embodiments]

The detailed structure of the invention and the relations among its parts are explained with reference to the drawings.

Figure 1 shows a glasses virtual try-on interactive service method for live images according to an embodiment of the invention. First, a face is located through a frame within a screen and a first face image is obtained (step s101): as shown in Figure 2A, a dashed frame is displayed on the screen and the user is asked to fit the front of the face into the frame and align the eyes with a horizontal line, which locates the face and yields the first face image. Next, a plurality of first feature points are obtained at eye features of the first face image, such as the corners on both sides of each eye (step s102); the first feature points may include, without limitation, the corners of both eyes and the two corners of the mouth. In this embodiment the points are selected manually, as marked by the crosses in Figure 2B, one or more points being taken at the eye corners and mouth corners of the first face image; in another embodiment a face-recognition routine captures the feature points automatically. The method then judges whether the first feature points conform to face logic; if not, the first face image or the first feature points are re-acquired, and if so, the next step proceeds, in which a search operation judges whether the first feature points lie within the frame, failing which the first face image or feature points are likewise re-acquired. Conformance to face logic means, for example, that the corner points of the left (right) eye lie in the left (right) half of the dashed frame and that the height difference between the two mouth corners is not too large; if any condition fails, the first feature points or the first face image must be re-acquired.
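To make the face-logic check concrete, a minimal sketch follows. It is not part of the patent text: the rectangular frame geometry, the point layout, and the 5%-of-frame-height mouth tolerance are assumptions chosen for illustration (the patent says only that the difference must not be too large).

```python
# A minimal sketch of the face-logic check, assuming a rectangular guide
# frame and a 5%-of-frame-height tolerance for the mouth corners; these
# values are illustrative, not taken from the patent.

def passes_face_logic(left_eye, right_eye, mouth_left, mouth_right, frame):
    """Each point is (x, y); frame is (x0, y0, width, height)."""
    x0, y0, width, height = frame
    mid_x = x0 + width / 2.0

    # Left-eye corners must fall in the left half of the guide frame,
    # right-eye corners in the right half.
    if not (x0 <= left_eye[0] < mid_x <= right_eye[0] <= x0 + width):
        return False

    # The mouth-corner heights must not differ too much.
    return abs(mouth_left[1] - mouth_right[1]) <= 0.05 * height
```

If the check fails, the method simply prompts the user to re-acquire the image, as described above.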

Next, a sampling pattern is designed around each first feature point to obtain a favorable plurality of pieces of first feature information, which are stored (step s103), the distances between the first feature points being stored at the same time. More specifically, the feature information is pixel color information: from each feature point, m directions radiate outward and n pixels are taken along each direction as color information, m and n being positive integers. Alternatively, the m directions radiate from the feature point over a semicircle, again with n pixels per direction, the semicircle covering at least an eye corner. As shown in Figure 2C, this embodiment samples the colors of the pixels neighboring each feature point according to the sampling pattern while recording the distances between feature points; for example, eight directions extend outward from the feature point in a semicircular radiating pattern, and seven points are taken along each, giving 56 color values as the feature information of that point, the semicircle completely covering at least one eye corner. Alternatively, as shown in Figure 2D, the sampling pattern radiates outward from the feature point in eight directions over a full circle, likewise taking seven points per direction for 56 color values.
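A short sketch of this sampling pattern follows, assuming NumPy and clamping at the image border; with m = 8 directions and n = 7 pixels it produces the 56 color samples of the example.

```python
import numpy as np

def sample_feature_info(image, point, m=8, n=7, semicircle=True):
    """Sample n pixels along each of m directions radiating from `point`.

    `image` is an HxWx3 array; with m=8 and n=7 the result holds the 56
    color samples of the embodiment. A semicircle spans pi radians, a
    full circle 2*pi.
    """
    h, w, _ = image.shape
    x0, y0 = point
    span = np.pi if semicircle else 2.0 * np.pi
    angles = np.linspace(0.0, span, m, endpoint=False)

    samples = []
    for a in angles:
        for r in range(1, n + 1):
            # Step r pixels out along direction a, clamped to the image.
            x = min(max(int(round(x0 + r * np.cos(a))), 0), w - 1)
            y = min(max(int(round(y0 + r * np.sin(a))), 0), h - 1)
            samples.append(image[y, x])
    return np.asarray(samples)  # shape (m*n, 3)
```

The stored descriptor for each feature point is then just this array together with the recorded inter-point distances.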
Next, the second face image is obtained from the next frame, and frame comparison is used to judge whether the face has moved dynamically (step s104): within a time interval, the frame is compared with the next frame and the first feature points are tracked for a movement trajectory. For example, in this embodiment the frame and the next frame are compared pixel by pixel by subtraction; obvious traces of movement near the feature points indicate that the face moved during the interval, in which case the subsequent steps and computations are performed, while if the face did not move, the subsequent tracking may be skipped. Alternatively, the amount of movement is measured by the number of white points near the feature points in the pixel-difference image: many white points indicate movement in the image, so the subsequent steps and computations proceed, whereas few white points indicate no obvious movement. A noise filter is then applied to remove the noise in the next frame so that the later comparison is not disturbed; the filter is one of Gaussian blur, the median method, and the mean method.
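The frame-differencing test can be sketched with standard OpenCV calls; the blur kernel, difference threshold, window radius, and changed-pixel count below are illustrative assumptions rather than values from the patent.

```python
import cv2
import numpy as np

def face_moved(prev_frame, next_frame, feature_points,
               radius=15, diff_thresh=25, min_changed=40):
    """Frame-differencing motion test around the tracked feature points.

    Both frames are smoothed (Gaussian blur is one of the filters named
    in the text), subtracted, and thresholded; the face is judged to
    have moved when enough changed ("white") pixels appear near any
    feature point. feature_points holds integer (x, y) pairs.
    """
    a = cv2.GaussianBlur(prev_frame, (5, 5), 0)
    b = cv2.GaussianBlur(next_frame, (5, 5), 0)
    diff = cv2.cvtColor(cv2.absdiff(a, b), cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    for x, y in feature_points:
        window = mask[max(0, y - radius):y + radius,
                      max(0, x - radius):x + radius]
        if np.count_nonzero(window) > min_changed:
            return True
    return False
```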
When such movement traces are present, the search comparison range and the tracked feature information are used to obtain the plurality of second feature points in the second face image and, from them, a plurality of pieces of second feature information (step s105). In this embodiment two comparison states govern where candidates are sought. (1) Search state: entered when the feature points have just been selected for the first time, or after tracking fails. In this state the comparison range is static and confined to the neighborhood of the points the user initially clicked; in effect the user must align the face with the dashed frame before candidates can enter the comparison range. (2) Tracking state: entered when the previous frame matched the feature points successfully. In this state the comparison region is the neighborhood of the feature points matched in the previous frame, so the region moves dynamically with the points being tracked.
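A compact way to express the two states is the small state machine below; the half-width of the candidate window is an assumed constant, and `comparison_range` and `next_state` are hypothetical helper names.

```python
# A minimal sketch of the search/tracking states described above.
# SEARCH scans a static window around the initially clicked point;
# TRACKING follows the position matched in the previous frame.

SEARCH, TRACKING = "search", "tracking"

def comparison_range(state, initial_point, last_match, half_width=12):
    """Enumerate candidate (x, y) pixels for the current state."""
    cx, cy = initial_point if state == SEARCH else last_match
    return [(x, y)
            for y in range(cy - half_width, cy + half_width + 1)
            for x in range(cx - half_width, cx + half_width + 1)]

def next_state(match_found):
    # A successful match enters (or keeps) tracking; failure falls
    # back to the search state, as the text describes.
    return TRACKING if match_found else SEARCH
```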
Within the comparison range, N pixels are taken according to the designed sampling pattern, and for each such pixel the 56 neighboring points are sampled in the same way; their RGB and YCbCr color information is compared with the first feature information recorded at the start, giving error values Error 1 to Error N. These N values are sorted, and the i pixels with the smallest errors, for example ten, are taken as candidate points; a further clustering pass removes outliers, and the remaining, more concentrated pixel coordinates are averaged. The result is the final tracking result. If the Error values found among the i points are all too large, the user has moved beyond the tracking range, or an additional occluder has appeared in front of the feature point; tracking is then judged to have failed and no further computation is performed.
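Combining the earlier sketches, the candidate matching might look as follows. The sum-of-absolute-differences error, the failure threshold, and the 5-pixel outlier radius are assumptions, and `sample_feature_info` is the hypothetical helper sketched above.

```python
import numpy as np

def track_feature(image, stored_info, candidates, top_i=10, max_error=6000.0):
    """Locate one feature point in the new frame.

    `stored_info` is the (m*n, 3) descriptor recorded for the point and
    `candidates` an iterable of (x, y) pixels from comparison_range().
    Errors are sorted, the top_i best candidates kept, outliers removed,
    and the remaining coordinates averaged; None signals tracking failure.
    """
    scored = []
    for x, y in candidates:
        info = sample_feature_info(image, (x, y))
        err = float(np.abs(info.astype(int) - stored_info.astype(int)).sum())
        scored.append((err, x, y))

    scored.sort(key=lambda c: c[0])
    best = scored[:top_i]
    if not best or best[0][0] > max_error:
        return None  # moved out of range, or the point is occluded

    xs = np.array([x for _, x, _ in best], dtype=float)
    ys = np.array([y for _, _, y in best], dtype=float)
    keep = (np.abs(xs - np.median(xs)) < 5) & (np.abs(ys - np.median(ys)) < 5)
    if not keep.any():  # degenerate cluster: fall back to the best candidate
        return float(xs[0]), float(ys[0])
    return float(xs[keep].mean()), float(ys[keep].mean())
```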
The relative position information of the first face image and the second face image is then compared to judge the position, movement state, and scale of the face, and the positions of the plurality of second feature points are computed (step s106). In addition, a geometric computation can be performed in this example: from the positions of the first and second feature points, the tilt of the face is computed from the slope; from the distance between the first feature points and the distance between the second feature points, the near-far scale of the face is computed from the change in length; and from the proportions between the first and second feature points, the rotation angle and the pitch angle of the face are computed from the proportional change. If the tilt, near-far scale, rotation angle, or pitch angle of the face exceeds a preset tolerance, the second face image is re-acquired. As shown in Figure 3A, the eye-corner points on both sides of the eyes lie on a horizontal axis in the first face image but at an angle in the second, so the tilt of the face can be computed from the slope. As shown in Figure 3B, the near-far scale of the face is computed from the change in length from the distance D1 between the eye corners in the first face image to the distance D2 between them in the second. As shown in Figure 3C, if the 1:1 proportion of the eye corners on the two sides in the first face image becomes an unequal proportion in the second, the rotation angle of the face is computed from the proportional change; and if the eyes and mouth lie on a horizontal axis in the first face image while their proportion changes in the second, the pitch angle of the face is likewise computed from the proportional change.
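The slope and length-ratio computations of Figures 3A and 3B reduce to a few lines; the sketch below is illustrative, and the yaw ratio derived from a midline point is an assumption loosely following the Figure 3C description.

```python
import math

def face_geometry(eyes_first, eyes_second, midline_second=None):
    """Estimate tilt and scale change between two frames.

    eyes_* are ((lx, ly), (rx, ry)) eye points in the first and second
    face images. Tilt follows the slope of the eye line (Figure 3A) and
    the near-far scale the change in eye distance D2/D1 (Figure 3B).
    """
    (l1, r1), (l2, r2) = eyes_first, eyes_second

    # Tilt: angle of the second eye line against the horizontal axis.
    tilt_deg = math.degrees(math.atan2(r2[1] - l2[1], r2[0] - l2[0]))

    # Near-far scale: ratio of the eye distances.
    d1 = math.dist(l1, r1)
    d2 = math.dist(l2, r2)
    scale = d2 / d1 if d1 else 1.0

    # Optional yaw estimate from the left/right proportion around a
    # midline point (Figure 3C); 1.0 means the 1:1 frontal proportion.
    yaw_ratio = None
    if midline_second is not None:
        left = math.dist(l2, midline_second)
        right = math.dist(midline_second, r2)
        yaw_ratio = left / right if right else None
    return tilt_deg, scale, yaw_ratio
```

Exceeding a preset tolerance on any of these quantities would trigger the re-acquisition described above.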
A preset glasses model is then extracted from a preset glasses database and composited at the positions of the plurality of second feature points (step s107); the database stores the data of the glasses models. As shown in Figure 4, the positions computed by the preceding steps follow the user's movements, and since face sizes differ, the glasses model is scaled and rotated to the recognized size in accordance with the plurality of first feature points and composited onto the face. After this step the method may end, or the search and tracking steps may continue with the next frame.

In this embodiment the glasses model is a three-dimensional (3D) glasses model. It may be produced through an imaging device (for example, a digital camera) by photographing a physical pair of glasses from several directions, such as the front, the true left side, and the true right side, to obtain planar images of the glasses (as shown in Figures 5A and 5B); for example, the physical glasses may be rotated by plus and minus 90 degrees to capture the three views, although the method is not limited to this. The planar images of the front, true left side, and true right side are then combined, using image-composition software and hardware, into the 3D glasses model (as shown in Figure 5C). Once the 3D glasses model is complete, various parameters (for example, rotation, color, scale, and transparency parameters) can be applied to adjust its position and color, and viewing options (for example, the front view or the left and right side views) can be selected to inspect the model. This embodiment considers glasses models without lenses; those skilled in the art will appreciate, however, that the same approach can produce glasses models with lenses.
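As a stand-in for the view-dependent display of such a model, the sketch below picks one of the three captured planar images from the estimated rotation angle; the function name and the 30-degree switching threshold are assumptions, not part of the patent.

```python
def select_glasses_view(views, yaw_deg, threshold=30.0):
    """Pick a planar glasses image by estimated face rotation.

    `views` maps "front", "left", and "right" to the three photographs
    taken of the physical glasses; yaw_deg is the rotation estimated
    from the feature-point proportions (positive = turned right).
    """
    if yaw_deg <= -threshold:
        return views["left"]
    if yaw_deg >= threshold:
        return views["right"]
    return views["front"]
```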
Figure 6 shows a contact-lens virtual try-on interactive service method according to another embodiment of the invention. It differs from the embodiment above in that the feature points are taken at the eye pupils, with the sampling pattern applied there to obtain the feature information, and in that the model tried on virtually is a contact lens; the remaining steps and computations are similar to those above, so only the differences are detailed here. The method comprises: locating a face through a frame within a screen and obtaining a first face image (step s601); obtaining a plurality of first feature points at the pupil features of the eyes (step s602), the first feature points including the pupils of the left and right eyes and the two corners of the mouth, selected manually in this embodiment as marked by the crosses in Figure 7A; judging whether the points conform to face logic, for example whether the left (right) pupil lies in the left (right) half of the dashed frame, whether the pupil keeps a certain distance from the eye corners on both sides, and whether the height difference between the pupils is too large, any failure requiring re-acquisition of the first feature points or the first face image; sampling a plurality of pieces of first feature information around each first feature point and storing them together with the distances between the first feature points (step s603), the neighboring pixel colors being sampled and the inter-point distances recorded as shown in Figure 7B; obtaining the next frame's second face image and judging whether the face moves dynamically (step s604); combining the search comparison range with feature-information tracking to obtain the second feature points and second feature information in the second face image (step s605); and comparing the relative position information of the first and second face images to judge the position, movement state, and scale of the face and compute the positions of the second feature points (step s606). For example, if the pupil points of the first face image lie on a horizontal axis while those of the second face image are pitched upward, the tilt of the face is computed from the slope; the near-far scale is computed from the change in the distance between the pupils; and the rotation and pitch angles are computed from the proportional changes between the pupils, and between the pupils and the mouth, in the two images. If the tilt, near-far scale, rotation angle, or pitch angle exceeds a preset tolerance, the second face image is re-acquired. Finally, a preset contact-lens model is composited at the positions of the second feature points (step s607), scaled and rotated according to the size of the face and the first feature points so as to fit the face appropriately. After this step the method may end, or the next search and tracking step may continue.

Figure 8 shows a glasses virtual try-on interactive service system according to an embodiment of the invention. The system includes a capture unit 81, a processing unit 82, an analysis unit 83, and a synthesis unit 84. The capture unit 81 locates a face through a frame within a screen and obtains a first face image; it may be a photographic device, for example a webcam. The processing unit 82, coupled to the capture unit 81, obtains a plurality of first feature points at eye features of the first face image, such as the corners on both sides of each eye; samples a plurality of pieces of first feature information around each first feature point according to the sampling pattern and stores them; obtains the second face image from the next frame and judges whether the face moves dynamically; and obtains the second feature points and thereby the second feature information in the second face image. The first feature points include, without limitation, the corners of both eyes and the two corners of the mouth. The processing unit 82 further judges from the first feature points whether the first face image conforms to face logic, re-acquiring the first face image if not, and, if so, applies a search operation to judge whether the first feature points lie within the frame, likewise re-acquiring the first face image if not. The feature information of the sampling pattern is pixel color information: the processing unit 82 radiates m directions from each feature point and takes n pixels along each as color information, m and n being positive integers; alternatively, the m directions radiate over a semicircle covering at least an eye corner.

The analysis unit 83, coupled to the processing unit 82, compares the difference in relative position information between the first and second face images to judge the position, movement state, and scale of the face and thereby computes the positions of the second feature points. Within a time interval it compares the frame with the next frame, tracks whether the feature points have a movement trajectory, and applies a noise filter, one of Gaussian blur, the median method, and the mean method, to remove the noise in the next frame. The analysis unit 83 further presets at least one comparison range for comparing the first feature information with the second feature information, takes the plurality of error values between them, sorts the error values, and takes the first i minimum errors to obtain the positions of the second feature points. It also computes, from the positions of the first and second feature points, the tilt of the face from the slope; from the distances between the first and the second feature points, the near-far scale from the change in length; and from the proportions of the first and second feature points, the rotation and pitch angles from the proportional change, re-acquiring the second face image if the tilt, near-far scale, rotation angle, or pitch angle exceeds a preset tolerance. The synthesis unit 84, coupled to the analysis unit 83, extracts a preset glasses model from a preset glasses database 85 and composites it at the positions of the second feature points, scaling and rotating the model according to the size of the face and the first feature points before compositing it onto the face. With suitable modification of its operations and steps, the glasses try-on virtual simulation system of this example is also applicable to a contact-lens virtual try-on interactive service system.
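Finally, a 2D stand-in for what the synthesis unit does: the sketch below scales, rotates, and alpha-blends a glasses image between the eye points. The width factor and the assumption that the warped glasses stay inside the face image are simplifications; the patent composites a 3D model instead.

```python
import cv2
import numpy as np

def composite_glasses(face_img, glasses_rgba, eye_left, eye_right,
                      width_factor=2.2):
    """Scale, rotate, and alpha-blend a glasses image onto the face.

    glasses_rgba is an RGBA image of the frames; the glasses are scaled
    from the eye distance, rotated to the eye-line angle, and centered
    between the eyes. Assumes the pasted region stays inside face_img.
    """
    (lx, ly), (rx, ry) = eye_left, eye_right
    eye_dist = float(np.hypot(rx - lx, ry - ly))
    angle = float(np.degrees(np.arctan2(ry - ly, rx - lx)))

    gh, gw = glasses_rgba.shape[:2]
    scale = (eye_dist * width_factor) / gw
    M = cv2.getRotationMatrix2D((gw / 2.0, gh / 2.0), -angle, scale)
    warped = cv2.warpAffine(glasses_rgba, M, (gw, gh),
                            flags=cv2.INTER_LINEAR,
                            borderMode=cv2.BORDER_CONSTANT,
                            borderValue=(0, 0, 0, 0))

    # Paste centered between the eyes, blending with the alpha channel.
    cx, cy = int((lx + rx) / 2), int((ly + ry) / 2)
    x0, y0 = cx - gw // 2, cy - gh // 2
    roi = face_img[y0:y0 + gh, x0:x0 + gw]
    alpha = warped[:, :, 3:4].astype(float) / 255.0
    roi[:] = (alpha * warped[:, :, :3] + (1.0 - alpha) * roi).astype(np.uint8)
    return face_img
```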
The above are merely preferred embodiments of the invention and do not limit the scope of its practice; all equivalent changes and modifications made according to the scope of the patent application remain within the scope covered by this patent.

[Brief Description of the Drawings]

Figure 1 shows a glasses virtual try-on interactive service method according to an embodiment of the invention;
Figures 2A, 2B, 2C, and 2D show the operations of the method of Figure 1;
Figures 3A, 3B, and 3C show the tilt, rotation, and movement angles of a face;
Figure 4 is a schematic view of glasses composited onto a face;
Figures 5A, 5B, and 5C show a method of producing a three-dimensional glasses model according to an embodiment of the invention;
Figure 6 shows a contact-lens virtual try-on interactive service method according to another embodiment of the invention;
Figures 7A and 7B show the operations of the method of Figure 6;
Figure 8 shows a glasses virtual try-on interactive service system according to an embodiment of the invention.

[Description of Reference Numerals]

s101–s107   method steps
s601–s607   method steps
81          image capture unit
82          processing unit
83          analysis unit
84          synthesis unit
85          glasses database

Claims (41)

VII. Scope of the Patent Application:

1. A glasses virtual try-on interactive service method, comprising: locating a face through a frame within a screen and obtaining a first face image; obtaining, at eye features and according to a sampling pattern, a plurality of first feature points and feature information, and storing the plurality of pieces of first feature information together with the distances between the plurality of first feature points; using dynamic-image judgment and search-and-tracking of the feature information to obtain the second face image in the next frame, and obtaining a plurality of second feature points in the second face image and thereby a plurality of pieces of second feature information; comparing the difference in relative position information between the first face image and the second face image to judge the position, movement state, and scale of the face, and thereby computing the positions of the plurality of second feature points; and compositing a preset glasses model at the positions of the plurality of second feature points.

2. The method of claim 1, wherein the plurality of first feature points includes the corners on both sides of the left and right eyes.

3. The method of claim 2, wherein the plurality of first feature points further includes the two corners of the mouth.

4. The method of claim 1, further comprising: judging, from the plurality of first feature points, whether the first face image conforms to face logic, and if not, re-acquiring the first face image or the plurality of first feature points.

5. The method of claim 1, further comprising: applying a search operation to judge whether the plurality of first feature points lies within the frame, and if not, re-acquiring the first face image or the plurality of first feature points.

6. The method of claim 1, wherein the feature information is pixel color information.

7. The method of claim 6, wherein the pixel color information is obtained by radiating m directions from each feature point and taking n pixels along each direction as color information, m and n being positive integers.

8. The method of claim 6, wherein the pixel color information is obtained by radiating m directions in a semicircle from each feature point and taking n pixels along each direction as color information, m and n being positive integers and the semicircle covering at least an eye corner.

9. The method of claim 1, further comprising: within a time interval, comparing the frame with the next frame and dynamically tracking whether the plurality of first feature points has a movement trajectory.

10. The method of claim 1, further comprising: applying a noise filter to remove the noise in the next frame.

11. The method of claim 10, wherein the noise filter is one of the Gaussian blur method, the median method, and the mean method.

12. The method of claim 1, further comprising: presetting at least one comparison range for comparing the plurality of pieces of first feature information with the plurality of pieces of second feature information, taking the plurality of error values between the first feature information and the second feature information, sorting the error values, and taking the first i minimum error values to obtain the positions of the plurality of second feature points.

13. The method of claim 1, further comprising: computing the tilt of the face from the slope, according to the positions of the plurality of first feature points and the plurality of second feature points.

14. The method of claim 1, further comprising: computing the near-far scale of the face from the change in length, according to the distance between the plurality of first feature points and the distance between the plurality of second feature points.

15. The method of claim 1, further comprising: computing the rotation angle and pitch angle of the face from the proportional change, according to the proportions of the plurality of first feature points and the plurality of second feature points.

16. The method of claim 13, 14, or 15, further comprising: re-acquiring the second face image if the tilt, near-far scale, rotation angle, or pitch angle of the face exceeds a preset tolerance.

17. The method of claim 1, further comprising: scaling and rotating the glasses model according to the size of the face and the plurality of first feature points, and compositing it onto the face.

18. The method of claim 1, wherein the glasses model is a three-dimensional glasses model.

19. The method of claim 18, further comprising: photographing, through an imaging device, a physical pair of glasses to obtain planar images of three directions; and combining the planar images of the three directions to obtain the three-dimensional glasses model.

20. A glasses virtual try-on interactive service system, comprising: a capture unit that locates a face through a frame within a screen and obtains a first face image; a processing unit, coupled to the capture unit, that obtains, at eye features and according to a sampling pattern, a plurality of first feature points and feature information, stores them, obtains the second face image in the next frame by frame judgment and search-and-tracking of the feature information, and obtains a plurality of second feature points in the second face image and thereby a plurality of pieces of second feature information; an analysis unit, coupled to the processing unit, for comparing the difference in relative position information between the first face image and the second face image to judge the position, movement state, and scale of the face, and thereby computing the positions of the plurality of second feature points; a synthesis unit, coupled to the analysis unit, that composites a preset glasses model at the positions of the plurality of second feature points; and a glasses database that stores the data of the glasses models.

21. The system of claim 20, wherein the capture unit may be a photographic device.

22. The system of claim 20, wherein the plurality of first feature points includes the corners on both sides of the left and right eyes.

23. The system of claim 22, wherein the plurality of first feature points further includes the two corners of the mouth.

24. The system of claim 20, wherein the processing unit further judges, from the plurality of first feature points, whether the first face image conforms to face logic, and if not, re-acquires the first face image or the plurality of first feature points.

25. The system of claim 20, wherein the processing unit further applies a search operation to judge whether the plurality of first feature points lies within the frame, and if not, re-acquires the first face image or the plurality of first feature points.

26. The system of claim 20, wherein the feature information is pixel color information, and the processing unit radiates m directions from each feature point and takes n pixels along each direction as color information, m and n being positive integers.

27. The system of claim 20, wherein the feature information is pixel color information, and the processing unit radiates m directions in a semicircle from each feature point and takes n pixels along each direction as color information, m and n being positive integers and the semicircle covering at least an eye corner.

28. The system of claim 20, wherein the analysis unit, within a time interval, compares the frame with the next frame, tracks whether the feature points have a movement trajectory, and applies a noise filter to remove the noise in the next frame.

29. The system of claim 28, wherein the noise filter is one of the Gaussian blur method, the median method, and the mean method.

30. The system of claim 20, wherein the analysis unit presets a comparison range for comparing the plurality of pieces of first feature information with the plurality of pieces of second feature information, takes the plurality of error values between the first feature information and the second feature information, sorts the error values, and takes the first i minimum error values to obtain the positions of the plurality of second feature points.

31. The system of claim 20, wherein the analysis unit computes, from the positions of the plurality of first feature points and the plurality of second feature points, the tilt of the face from the slope; computes, from the distance between the plurality of first feature points and the distance between the plurality of second feature points, the near-far scale of the face from the change in length; computes, from the proportions of the plurality of first feature points and the plurality of second feature points, the rotation angle and pitch angle of the face from the proportional change; and re-acquires the second face image if the tilt, near-far scale, rotation angle, or pitch angle of the face exceeds a preset tolerance.

32. The system of claim 20, wherein the synthesis unit scales and rotates the glasses model according to the size of the face and the plurality of first feature points, and composites it onto the face.

33. A glasses virtual try-on interactive service method, comprising: locating a face through a frame within a screen and obtaining a first face image; obtaining, at eye-pupil features and according to a sampling pattern, a plurality of first feature points and feature information, and storing the plurality of pieces of first feature information together with the distances between the plurality of first feature points; using dynamic-image judgment and search-and-tracking of the feature information to obtain the second face image in the next frame, and obtaining a plurality of second feature points in the second face image and thereby a plurality of pieces of second feature information; comparing the difference in relative position information between the first face image and the second face image to judge the position, movement state, and scale of the face, and thereby computing the positions of the plurality of second feature points; and compositing a preset contact-lens model at the positions of the plurality of second feature points.

34. The method of claim 33, wherein the plurality of first feature points further includes the two corners of the mouth.

35. The method of claim 33, further comprising: judging, from the plurality of first feature points, whether the first face image conforms to face logic, and if not, re-acquiring the first face image or the plurality of first feature points.

36. The method of claim 33, further comprising: applying a search operation to judge whether the plurality of first feature points lies within the frame, and if not, re-acquiring the first face image or the plurality of first feature points.

37. The method of claim 33, wherein the feature information is pixel color information obtained by radiating m directions from each feature point and taking n pixels along each direction as color information, m and n being positive integers.

38. The method of claim 33, further comprising: within a time interval, comparing the frame with the next frame and dynamically tracking whether the plurality of first feature points has a movement trajectory.

39. The method of claim 33, further comprising: presetting a comparison range for comparing the plurality of pieces of first feature information with the plurality of pieces of second feature information, taking the plurality of error values between them, sorting the error values, and taking the first i minimum error values to obtain the positions of the plurality of second feature points.

40. The method of claim 33, further comprising: computing, from the positions of the plurality of first feature points and the plurality of second feature points, the tilt of the face from the slope; computing, from the distances between the feature points, the near-far scale of the face from the change in length; computing, from the proportions of the feature points, the rotation angle and pitch angle of the face from the proportional change; and re-acquiring the second face image if the tilt, near-far scale, rotation angle, or pitch angle exceeds a preset tolerance.

41. The method of claim 33, further comprising: scaling and rotating the model according to the size of the face and the plurality of first feature points, and compositing it onto the face.
TW100112053A 2011-04-07 2011-04-07 Interactive service methods and systems for virtual glasses wearing TWI433049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW100112053A TWI433049B (en) 2011-04-07 2011-04-07 Interactive service methods and systems for virtual glasses wearing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW100112053A TWI433049B (en) 2011-04-07 2011-04-07 Interactive service methods and systems for virtual glasses wearing

Publications (2)

Publication Number Publication Date
TW201241781A true TW201241781A (en) 2012-10-16
TWI433049B TWI433049B (en) 2014-04-01

Family

ID=47600174

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100112053A TWI433049B (en) 2011-04-07 2011-04-07 Interactive service methods and systems for virtual glasses wearing

Country Status (1)

Country Link
TW (1) TWI433049B (en)


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI501751B (en) * 2013-01-31 2015-10-01 Univ Southern Taiwan Sci & Tec Vision testing system
TWI492174B (en) * 2013-02-23 2015-07-11 南臺科技大學 Cloud body-sensory virtual-reality eyeglasses prescription system
TWI506564B (en) * 2013-05-29 2015-11-01
US9867533B2 (en) 2015-04-02 2018-01-16 Coopervision International Holding Company, Lp Systems and methods for determining an angle of repose of an asymmetric lens
TWI632529B (en) * 2015-04-02 2018-08-11 古柏威順國際控股有限合夥公司 Systems and methods for determining an angle of repose of an asymmetric lens
TWI755671B (en) * 2019-01-04 2022-02-21 美商沃比帕克公司 Virtual try-on systems and methods for spectacles
US11783557B2 (en) 2019-01-04 2023-10-10 Warby Parker Inc. Virtual try-on systems and methods for spectacles
US11676347B2 (en) 2020-04-15 2023-06-13 Warby Parker Inc. Virtual try-on systems for spectacles using reference frames
CN112418138A (en) * 2020-12-04 2021-02-26 兰州大学 Glasses try-on system and program
CN112418138B (en) * 2020-12-04 2022-08-19 兰州大学 Glasses try-on system

Also Published As

Publication number Publication date
TWI433049B (en) 2014-04-01

Similar Documents

Publication Publication Date Title
US20200257266A1 (en) Generating of 3d-printed custom wearables
TW201241781A (en) Interactive service methods and systems for virtual glasses wearing
CN103140879B (en) Information presentation device, digital camera, head mounted display, projecting apparatus, information demonstrating method and information are presented program
US9817248B2 (en) Method of virtually trying on eyeglasses
JP4473754B2 (en) Virtual fitting device
JP2019076733A (en) Imaging of body
CN105574921B (en) Automated texture mapping and animation from images
US9342877B2 (en) Scaling a three dimensional model using a reflection of a mobile device
US20120299945A1 (en) Method, system and computer program product for automatic and semi-automatic modificatoin of digital images of faces
US20070258656A1 (en) Method, system and computer program product for automatic and semi-automatic modification of digital images of faces
CN104898832B (en) Intelligent terminal-based 3D real-time glasses try-on method
KR20180069786A (en) Method and system for generating an image file of a 3D garment model for a 3D body model
US9799136B2 (en) System, method and apparatus for rapid film pre-visualization
US20210264684A1 (en) Fitting of glasses frames including live fitting
CN110275968A (en) Image processing method and device
CN107103513A (en) A kind of virtual try-in method of glasses
EP4296947A1 (en) Calibration information determination method and apparatus, and electronic device
KR101977519B1 (en) Generating and displaying an actual sized interactive object
CN106570747A (en) Glasses online adaption method and system combining hand gesture recognition
CN116524088B (en) Jewelry virtual try-on method, jewelry virtual try-on device, computer equipment and storage medium
CN112508784A (en) Panoramic image method of planar object contour model based on image stitching
Kanis et al. Combination of Positions and Angles for Hand Pose Estimation
CN112580463A (en) Three-dimensional human skeleton data identification method and device
Tharaka Real time virtual fitting room with fast rendering
WO2002063568A1 (en) Method and system for generating virtual and co-ordinated movements by sequencing viewpoints