TWM363746U - Image-integrating based human-machine interface device - Google Patents

Image-integrating based human-machine interface device

Info

Publication number
TWM363746U
TWM363746U
Authority
TW
Taiwan
Prior art keywords
image
human
interface device
machine interface
unit
Prior art date
Application number
TW98205998U
Other languages
Chinese (zh)
Inventor
Yeong-Sung Lin
Original Assignee
Tlj Intertech Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tlj Intertech Inc filed Critical Tlj Intertech Inc
Priority to TW98205998U priority Critical patent/TWM363746U/en
Publication of TWM363746U publication Critical patent/TWM363746U/en

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides an image-integrating based human-machine interface device that simultaneously displays externally captured images and internally stored images on the screen while the user interacts with the device. The interface device is characterized by an image-capturing device that captures external images, which are then blended in proportion with the images stored in the interactive interface device. The user thus sees both images on the screen at once, perceives his or her position more accurately, and experiences a greater sense of realism.

Description

V. Description of the Utility Model:

[Technical Field of the Utility Model]

The present creation relates to a human-machine interface device, and more particularly to a human-machine interface device that fuses captured external images with preset internal images.

[Prior Art]

With the rapid advance of technology, data input for electronic products is no longer limited to traditional means such as keyboards and mice; voice input and image-recognition input are more convenient alternatives. Unlike keyboard, mouse, or handwriting input, image-recognition input needs no dedicated peripheral: the user's image is captured and analyzed, then converted into input information that controls the electronic product.

An interactive device that uses captured images for input control generally works as follows: the captured image is identified and analyzed to obtain the distance a particular image region has moved, and the display is moved correspondingly, thereby achieving the control function. Trigger control can further be provided by designating certain areas as interactive ranges; when the external image moves into a preset trigger region, or a specific trigger action occurs within that region, a trigger event is executed to produce the interactive effect. Image recognition usually takes the amount of object movement or the degree of object change as its judgment basis, from which the magnitude and speed of the user's movement are obtained so that an indicator or image on the display moves correspondingly.

Although image-recognition input control is increasingly common, today's human-machine interaction devices remain inconvenient. During use, the user cannot easily tell whether his or her current position lies within the image-capture range, so the motion may not be captured at all. Some devices address this with a picture-in-picture window showing the captured image in a small corner of the display to confirm correct capture, but usability problems remain: the user has no way of knowing how far to move before the on-screen indicator reaches the correct position, and the display usually marks the current position only with a monotonous cursor or a simulated character, so the picture lacks realism. When applied to game machines in particular, the player lacks a feeling of immersion. These shortcomings of conventional interaction devices cause the user considerable inconvenience.

A human-machine interface device that is more accurate in control, offers better interactive effects, and is more convenient to use is therefore a problem awaiting solution.

[Summary of the Utility Model]

In view of the above shortcomings of the prior art, the present creation provides a human-machine interface device based on image fusion, which fuses captured external images with preset internal images so that both are displayed on the screen simultaneously, making interaction more accurate and more realistic.

The human-machine interface device of the present creation is composed of an image capture module and an interactive interface module. The image capture module captures external images. The interactive interface module, electrically connected to the image capture module, fuses the preset internal images with the captured external images, and comprises: a database unit for storing the preset internal images; a fusion unit for fusing the internal images with the captured external images according to a set fusion ratio to produce a fused image; and a display unit for displaying the fused image.
In one aspect, besides capturing external images for image analysis and image fusion, the interactive interface module also judges and triggers events. It further comprises an image recognition unit and a function trigger unit: the image recognition unit identifies the captured external image to form recognition information, and the function trigger unit triggers the interactive interface module according to that recognition information to execute the corresponding function action, which the display unit then displays.

The operation flow of the human-machine interface device is as follows. First, the image capture module captures an external image and transmits it to the interactive interface module. The captured external image is then fused with the preset internal image according to a fusion ratio, or directly overlaid, so that the external image and the internal image are seen simultaneously on the display. Because the fusion is proportional, the viewer perceives the external image as a looming, semi-transparent presence on the screen. In addition, the device can define in advance which blocks or images of the internal image serve as trigger regions, and several such regions may be set. When the user interacts through the device and the external image enters one of these trigger regions, the image recognition unit recognizes the entry and produces recognition information; the function trigger unit in the interactive interface module then generates the corresponding function action according to that information, and the display unit shows the corresponding image, achieving the interactive effect.

Compared with the prior art, the human-machine interface device of the present creation presents the external image and the internal image on the display simultaneously, so the user sees his or her position on the screen clearly and can move precisely to the intended region. Moreover, showing the user's own image on the screen gives a sense of realism, instead of a monotonous moving cursor or simulated figure; when the device is applied to a game machine, the effect surpasses the traditional approach.
[Embodiments]

The following describes embodiments of the present creation through specific examples; those skilled in the art can readily appreciate its other advantages and effects from the content disclosed in this specification.

Fig. 1 shows the system structure of the human-machine interface device of the present creation. As illustrated, the human-machine interface device 1 consists mainly of an image capture module 10 and an interactive interface module 11. The image capture module 10 may be a camera, a network video device, or any device that captures images; its function is to capture external images — the form, shape, or movement trajectory of the user's face, hands, feet, or other objects — as the input information. Unlike other input schemes, the present creation needs no mouse, controller, or similar input device: the captured image itself drives the interactive interface module.
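As an illustration of how such a capture module might hand frames to the interactive interface module, the following minimal Python sketch uses OpenCV for the camera and network-video-device options named above; OpenCV itself, the function names, and the error handling are assumptions made for illustration and are not part of the patent.

```python
import cv2

def open_capture(source=0):
    """Open a camera (integer index) or a network video stream (URL
    string), mirroring the camera / network-video-device options."""
    cap = cv2.VideoCapture(source)
    if not cap.isOpened():
        raise RuntimeError(f"cannot open capture source {source!r}")
    return cap

def grab_external_image(cap):
    """Return one external image (a BGR frame) for the interface module."""
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("failed to grab a frame")
    return frame
```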
The interactive interface module 11 mainly fuses the preset internal images with the external images captured by the image capture module 10. The external image is transmitted to the interactive interface module 11, which comprises a database unit 121, a fusion unit 122, and a display unit 123. The database unit 121 stores the internal images, which are images preset for display. The fusion unit 122 fuses the internal image with the external image according to the fusion ratio to produce the fused image; the ratio can be set by the user, and different ratios yield different fused images, which the display unit 123 finally displays.

In a preferred embodiment, the internal images are divided into background images and foreground images: non-moving images are treated as background images, while foreground images are movable image layers, and there is at least one layer of each. The external image and the internal images are fused at the set fusion ratio — generally the external image is fused with the background image — so fusing the captured external image causes no viewing problem. The image produced this way is called the fused image; concretely, the pixels at the same positions in the foreground image, the external image, and the background image are blended according to the set ratio.
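The pixel-level fusion just described is, in effect, a weighted blend of corresponding pixels. A minimal sketch follows, assuming NumPy arrays of identical shape for the two frames; the function name and the convention that the ratio weights the internal image are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def fuse_images(internal, external, ratio=0.5):
    """Blend two equally sized RGB frames pixel by pixel.

    `ratio` stands in for the fusion ratio: 1.0 keeps only the internal
    image, 0.0 keeps only the external image, and values in between make
    the captured external image look semi-transparent on the display.
    """
    blend = (ratio * internal.astype(np.float32)
             + (1.0 - ratio) * external.astype(np.float32))
    return blend.clip(0, 255).astype(np.uint8)
```

Displaying fuse_images(background, camera_frame, ratio=0.6), for example, is what would give the user's arm the faint, semi-transparent look the description attributes to proportional fusion.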
In another preferred embodiment, the fusion unit may divide the picture into a plurality of regions and correspondingly set a plurality of different fusion ratios, then fuse the internal image with the external image according to each region's ratio to produce the fused image, adding variety to the result.

In a further preferred embodiment, the interactive interface module 11 also includes an overlay unit 126 for superimposing the external image, the internal image, or the fused image. Most foreground images are important or serve to control the interaction, so to keep them from being covered by the external image or even the background image, the overlay unit 126 lays them on top.

Fig. 2 illustrates image fusion. The background image 201 contains a sun at the upper left and mountains below; since these do not move, they are treated as the background image 201. The foreground image 203 has a touch button at its upper right; being an important, interactive image, it is treated as foreground. In the middle is the captured external image 202, which may be the user's arm or a specific object. When the three images are fused, the background image 201 and the external image 202 are blended at the set fusion ratio, so when viewed on the display, the captured image (arm or object) appears semi-transparent, while the touch button in the foreground image 203, being used for interaction, is not affected by the background image 201 or the external image 202.
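Region-wise ratios and the overlay unit can be pictured together in a short sketch. It assumes the rectangles in region_ratios tile the whole frame and that a boolean mask marks where the foreground layer has content; all names and values are illustrative, not from the patent.

```python
import numpy as np

def fuse_by_region(internal, external, region_ratios):
    """Apply a different fusion ratio inside each (top, left, bottom,
    right) rectangle; region_ratios is assumed to cover the frame."""
    fused = np.empty_like(internal)
    for (top, left, bottom, right), ratio in region_ratios.items():
        box = (slice(top, bottom), slice(left, right))
        blend = (ratio * internal[box].astype(np.float32)
                 + (1.0 - ratio) * external[box].astype(np.float32))
        fused[box] = blend.clip(0, 255).astype(np.uint8)
    return fused

def overlay_foreground(fused, foreground, fg_mask):
    """Overlay unit: paste the foreground layer (e.g. the touch button)
    over the fused image so it is never covered."""
    return np.where(fg_mask[..., None], foreground, fused)
```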
The result of this fusion and overlay is the fused/superimposed image 204, which is displayed on the screen.

As shown in Fig. 1, the human-machine interface device 1 also includes an image recognition unit 124 and a function trigger unit 125. The image recognition unit 124 identifies the external image to form recognition information, and the function trigger unit 125 triggers the interactive interface module 11 according to that information to execute the corresponding function action, providing trigger control. Besides recognizing the form, shape, or change of the image, the image recognition unit 124 also judges whether the interactive function has been triggered: trigger regions are defined in the internal image in advance, and the external image is judged against those regions. The image recognition unit 124 identifies the external image within a trigger region and produces recognition information, according to which the function trigger unit 125 triggers the interactive interface module to execute the corresponding function action.
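The first judgment style described next — has the external image come to overlap a specific position of a trigger region — reduces to a mask-coverage test. The sketch below assumes a boolean mask of where the user appears in the frame is already available (for instance, from the registration-frame comparison discussed later); the region table and the coverage threshold are invented for illustration.

```python
# Hypothetical trigger regions: name -> (top, left, bottom, right).
TRIGGER_REGIONS = {"button": (40, 480, 160, 600)}

def triggered_by_overlap(external_mask, region, min_coverage=0.2):
    """Trigger when enough of the region is covered by the external
    image; min_coverage is an assumed tuning value."""
    top, left, bottom, right = region
    window = external_mask[top:bottom, left:right]
    return window.mean() >= min_coverage
```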
The human-machine interface device 1 takes the external image as its input information, and the image recognition unit 124 identifies the captured external image. Interaction may be recognized in two ways. In the first, when the external image captured by the image capture module 10 overlaps a specific position of a trigger region, the image recognition unit 124 judges that the external image has triggered the interactive function. In the second, the external image captured by the module 10 must enter the trigger region and perform a specific gesture or motion — waving or flipping the hand, for example — before the external image is judged to trigger the interaction. In either case, recognition information is produced and sent to the function trigger unit 125, which starts the corresponding interactive function of the interactive interface module 11 according to that information; the resulting image changes, sound playback, or other follow-on image interactions are all shown through the display unit 123.

In another preferred embodiment, the image recognition unit 124 defines at least one trigger region in the internal image and registers the picture of that region to produce a registration frame; by comparing the registration frame with the picture of the trigger region captured by the image capture module 10, it judges whether an external image is present there, and the function trigger unit triggers the interactive interface module 11 to execute the corresponding function action according to the result. In a further preferred embodiment, the external image is registered first to produce a registration frame against which newly captured external images are compared, picking out the dynamic object images in the external image and raising the recognition rate of the image recognition unit. For example, the image recognition unit 124 registers the external background scene; comparing the registered image with the captured image then reveals the form, shape, or movement trajectory of any new object more precisely. Specifically, since the interactive interface module tracks the movement state of the external image, the next frame of the trigger region is produced in advance as the registration frame; the current frame (c_frame) of the trigger region captured by the image capture module 10 is then compared against the registration frame, continuously determining whether the external image is still moving.
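The registration-frame comparison can be sketched as a simple frame difference over the trigger region; the threshold is an assumed tuning value, and the function names are illustrative.

```python
import numpy as np

def register_region(frame, region):
    """Registration frame: a stored copy of the trigger region's
    picture, taken before the interaction starts."""
    top, left, bottom, right = region
    return frame[top:bottom, left:right].astype(np.float32).copy()

def region_changed(c_frame, registered, region, threshold=25.0):
    """Compare the current frame (c_frame) of the trigger region with
    the registration frame; a large mean absolute pixel difference
    suggests an external image has entered, or is still moving in,
    the region."""
    top, left, bottom, right = region
    current = c_frame[top:bottom, left:right].astype(np.float32)
    return float(np.abs(current - registered).mean()) > threshold
```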
Fig. 3 shows the operating steps of a specific embodiment of the human-machine interface device of the present creation. In step S301, the image capture module captures an external image and transmits it to the interactive interface module. In step S302, the image recognition unit inside the interactive interface module analyzes the external image; the recognition information may be the external image's size, color, form, shape, texture, orientation, moving distance, moving speed, or extent of change. In step S303, the fusion unit fuses the external image and the internal image according to the predetermined fusion ratio. In step S304, the overlay unit superimposes the external image, the internal image, and the fused image when there are images — such as the foreground image — that must not be covered. In step S305, the image recognition unit judges the relation between the external image and the trigger regions: whether the external image has entered a trigger region, or whether a specific action is performed there. In step S306, the function trigger unit executes the corresponding function action according to the recognition result, and in step S307 the display unit displays all of the foregoing images. Through these steps, the human-machine interface device operates.

Beyond cursor-style input control, the device can be applied to game machines, taking the captured image as the input data. By steering the interaction through the captured, moving image of the user rather than a moving cursor or simulated character, the user not only moves more accurately but also feels present in the scene, since his or her own image is shown on the screen. The following are embodiments applying the human-machine interface device of the present creation to game machines.

Figs. 4a-4c illustrate an embodiment applying the device to a whack-a-mole game. As shown in Fig. 4a, mole holes 404-406 appear at the bottom of the picture, with corresponding trigger regions 401-403 above them; the picture also contains the captured external image, here the user's arm 407. Since the mole holes 404-406 are important in-game images, they are treated as foreground, and the user's arm 407 does not affect their presentation. (The background portion would normally carry a background pattern, which is not drawn in this example.) As shown in Fig. 4b, when a mole pops out of hole 404 during play, the user — guided by his or her own image on the screen — moves the arm toward trigger region 401. The image recognition unit inside the interactive interface module then performs recognition in one of the two ways described above: either a captured image (the user's arm 407) within a specific part of trigger region 401 triggers directly, or the captured image must enter region 401 and make a certain gesture or motion; either way, the trigger judgment is achieved. As shown in Fig. 4c, when the image recognition unit determines that the user's arm 407 has produced a trigger action — in this embodiment the interaction is shown as a stick striking the mole — the function trigger unit executes the corresponding actions, such as hit effects and the elapsed or displayed time, and presents those images on the display unit, achieving the game's interactive effect.

Figs. 5a-5c illustrate an embodiment applying the device to an electronic aquarium 501, in which the captured image interacts with the school of fish 502 inside. As shown in Fig. 5a, the fish 502 swim in the electronic aquarium 501; the background image (water plants, stones, and the like) is simplified in this embodiment. Interaction here means that when a user approaches the electronic aquarium 501, the fish 502 are attracted toward the viewer, as explained below. As shown in Fig. 5b, when the user approaches the image capture module, the captured external image — in this embodiment a face image 503 — is fused onto the display; the fish 502, being foreground images, are not disturbed by the background image or the external image. The image recognition unit then performs recognition: when the captured image exhibits facial features, the user is deemed to be looking at the electronic aquarium 501. As shown in Fig. 5c, when the image recognition unit determines that a face image is shown on the screen, it sends recognition information to the function trigger unit, which finds the corresponding data and executes the corresponding interactive action: all the fish 502 turn toward the user and gather near the position of the face, producing the interaction between the user and the fish 502. Meanwhile, the user's face shows faintly in the electronic aquarium 501, an effect that matches the real experience of seeing one's face reflected on the glass of an aquarium.

In a preferred embodiment, the image recognition unit can further recognize the user's facial expression for subsequent interactive control. In the electronic aquarium example, if the recognized user wears a pleased expression, the fish 502 may respond by jumping or moving rhythmically; if the expression is sad, the fish may droop or move slowly; and if the recognized user is a stranger (someone appearing for the first time), the fish 502 may respond with hostile expressions.
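Tying the pieces together, one pass through the S301-S307 flow described above might look like the following sketch, where every argument is a stand-in object for the corresponding unit and all method names are assumptions:

```python
def run_one_cycle(capture, recognizer, fuser, overlayer, trigger, display):
    """One cycle of the device's operation (cf. steps S301-S307)."""
    external = capture.grab()                    # S301: capture the external image
    info = recognizer.analyze(external)          # S302: size, shape, motion, etc.
    fused = fuser.fuse(external)                 # S303: blend at the preset ratio
    frame = overlayer.compose(fused)             # S304: keep the foreground on top
    hits = recognizer.check_triggers(external)   # S305: external image vs. trigger regions
    for region in hits:                          # S306: run the mapped function action
        trigger.fire(region, info)
    display.show(frame)                          # S307: display everything
```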

Compared with conventional human-machine interaction devices, which suffer from inaccuracy, inconvenience, and monotony, the human-machine interface device of the present creation fuses the external image and the internal image simultaneously, producing a better interactive effect and thereby solving the problems of the prior art.

The above embodiments merely illustrate the principles and effects of the present creation and are not intended to limit it. Anyone skilled in the art may modify the above embodiments without departing from the spirit and scope of the utility model; the scope of protection of the present creation shall be as set forth in the claims below.

[Brief Description of the Drawings]

Fig. 1 is the architecture of a specific embodiment of the human-machine interface device of the present creation;
Fig. 2 is a schematic diagram of image fusion according to the present creation;
Fig. 3 shows the operating steps of a specific embodiment of the human-machine interface device of the present creation;
Figs. 4a-4c illustrate an embodiment applying the human-machine interface device of the present creation to a whack-a-mole game; and
Figs. 5a-5c illustrate an embodiment applying the human-machine interface device of the present creation to an electronic aquarium.

[Description of Main Element Symbols]

1 human-machine interface device
10 image capture module
11 interactive interface module
121 database unit
122 fusion unit
123 display unit
124 image recognition unit
125 function trigger unit
126 overlay unit
201 background image
202 external image
203 foreground image
204 fused/superimposed image
S301-S307 steps
401-403 trigger regions
404-406 mole holes
407 user's arm
501 electronic aquarium
502 school of fish
503 face image

Claims (19)

1. An image-fusion-based human-machine interface device, comprising:
an image capture module for capturing an external image; and
an interactive interface module, electrically connected to the image capture module, for fusing a preset internal image with the external image, the interactive interface module comprising:
a database unit for storing the internal image;
a fusion unit for fusing the internal image with the external image according to a fusion ratio to produce a fused image; and
a display unit for displaying the fused image.
2. The human-machine interface device of claim 1, wherein the interactive interface module further comprises:
an image recognition unit for identifying the external image to form recognition information; and
a function trigger unit for triggering the interactive interface module, according to the recognition information, to execute a corresponding function action, the display unit displaying the result of the function action.
3. The human-machine interface device of claim 2, wherein the image recognition unit registers the external image captured by the image capture module in advance to produce an external-image registration frame, and compares the registration frame with external images subsequently captured by the image capture module to determine the dynamic object images in the external image, thereby raising the recognition rate of the image recognition unit.
4. The human-machine interface device of claim 2, wherein the image recognition unit defines at least one trigger region in the internal image and performs trigger judgment on the external image within the trigger region, so that the function trigger unit triggers the interactive interface module to execute the corresponding function action according to the judgment result.
5. The human-machine interface device of claim 2, wherein the image recognition unit, according to the movement of the external image, causes the function trigger unit to move the internal image corresponding to the position of the external image accordingly.
6. The human-machine interface device of claim 4, wherein the image recognition unit defines at least one trigger region in the moving internal image, registers the picture of the trigger region to produce a registration frame, and compares the registration frame with the picture of the trigger region captured by the image capture module to determine whether an external image is present in the trigger region, so that the function trigger unit triggers the interactive interface module to execute the corresponding function action according to the judgment result.
7. The human-machine interface device of claim 2, wherein the recognition information is the size, color, form, shape, texture, orientation, moving distance, moving speed, or extent of change of the external image.
8. The human-machine interface device of claim 2, wherein the database unit, the fusion unit, the display unit, the image recognition unit, or the function trigger unit is implemented in the form of computer software.
9. The human-machine interface device of claim 1, wherein the fusion unit sets different fusion ratios for different regions of the picture, and fuses the internal image with the external image according to the fusion ratio corresponding to each region to produce the fused image.
10. The human-machine interface device of claim 1, wherein the interactive interface module further comprises an overlay unit for superimposing the external image captured by the image capture module, the internal image, or the fused image.
11. The human-machine interface device of claim 10, wherein the overlay unit is implemented in the form of computer software.
12. The human-machine interface device of claim 1, wherein the internal image is a foreground image or a background image.
13. The human-machine interface device of claim 12, wherein the foreground image is a movable or fixed image layer.
14. The human-machine interface device of claim 12, wherein the number of foreground images or background images is at least one layer.
15. The human-machine interface device of claim 12, wherein the fused image is obtained by fusing the pixels at the same positions of the foreground image, the external image, and the background image according to the set ratio.
16. The human-machine interface device of claim 1, wherein the external image is the form, shape, or movement trajectory of the user's face, hand, foot, or another object.
17. The human-machine interface device of claim 1, wherein the image capture module is a camera, a network video device, or a device that captures images.
18. The human-machine interface device of claim 1, wherein the display unit is a screen, a television screen, or a device for displaying images.
19. The human-machine interface device of claim 1, wherein the human-machine interface device is applicable to a game machine, a control panel, or an interactive display.
TW98205998U 2009-04-13 2009-04-13 Image-integrating based human-machine interface device TWM363746U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW98205998U TWM363746U (en) 2009-04-13 2009-04-13 Image-integrating based human-machine interface device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW98205998U TWM363746U (en) 2009-04-13 2009-04-13 Image-integrating based human-machine interface device

Publications (1)

Publication Number Publication Date
TWM363746U true TWM363746U (en) 2009-08-21

Family

ID=44385699

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98205998U TWM363746U (en) 2009-04-13 2009-04-13 Image-integrating based human-machine interface device

Country Status (1)

Country Link
TW (1) TWM363746U (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9335827B2 (en) 2012-07-17 2016-05-10 Wistron Corp. Gesture input systems and methods using 2D sensors
TWI740659B (en) * 2020-09-23 2021-09-21 國立臺灣科技大學 Auxiliary rehabilitation detection system and method

Similar Documents

Publication Publication Date Title
CN106227439B (en) Device and method for capturing digitally enhanced image He interacting
US10257423B2 (en) Method and system for determining proper positioning of an object
CN104471511B (en) Identify device, user interface and the method for pointing gesture
JP5256269B2 (en) Data generation apparatus, data generation apparatus control method, and program
US11921414B2 (en) Reflection-based target selection on large displays with zero latency feedback
WO2017054465A1 (en) Information processing method, terminal and computer storage medium
US20120056989A1 (en) Image recognition apparatus, operation determining method and program
CN110262733A (en) Calculate the character recognition in equipment
JP2017522682A (en) Handheld browsing device and method based on augmented reality technology
CN107433036A (en) The choosing method and device of object in a kind of game
TW200949617A (en) A video based apparatus and method for controlling the cursor
TW201239743A (en) Continued virtual links between gestures and user interface elements
US10166477B2 (en) Image processing device, image processing method, and image processing program
TW200945174A (en) Vision based pointing device emulation
TW200951777A (en) Image recognizing device, operation judging method, and program
DK201670654A1 (en) Devices, Methods, and Graphical User Interfaces for Messaging
US11889222B2 (en) Multilayer three-dimensional presentation
RU2667720C1 (en) Method of imitation modeling and controlling virtual sphere in mobile device
KR20240049844A (en) Control AR games on fashion items
US20210081092A1 (en) Information processing system, information processing method, and program
TWM363746U (en) Image-integrating based human-machine interface device
US12032733B2 (en) User controlled three-dimensional scene
WO2023066005A1 (en) Method and apparatus for constructing virtual scenario, and electronic device, medium and product
US20240276058A1 (en) Video-based interaction method and apparatus, computer device, and storage medium
US20230334790A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation

Legal Events

Date Code Title Description
MK4K Expiration of patent term of a granted utility model