TW201939260A - Method, apparatus, and terminal for simulating mouse operation by using gesture - Google Patents

Method, apparatus, and terminal for simulating mouse operation by using gesture Download PDF

Info

Publication number
TW201939260A
Authority
TW
Taiwan
Prior art keywords
gesture
user
operation event
mouse
event
Prior art date
Application number
TW107147676A
Other languages
Chinese (zh)
Other versions
TWI695311B (en)
Inventor
賀三元
Original Assignee
香港商阿里巴巴集團服務有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 香港商阿里巴巴集團服務有限公司
Publication of TW201939260A
Application granted granted Critical
Publication of TWI695311B

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided are a method, apparatus, and terminal for simulating a mouse operation using a gesture. The method comprises: obtaining gesture information produced by a gesture collection apparatus collecting a user gesture; recognizing the gesture information to obtain a gesture operation event of the user; searching a preset mapping set according to the gesture operation event of the user, the preset mapping set comprising at least one correspondence between a gesture operation event and a mouse operation event, the mouse operation events comprising at least a mouse click event and a mouse movement event; and, if the gesture operation event of the user is found in the preset mapping set, triggering the mouse operation event corresponding to the gesture operation event of the user.

Description

Method, device, and terminal for simulating mouse operation using gestures

The embodiments of this specification relate to the technical field of gesture recognition, and in particular to a method, device, and terminal for simulating mouse operation using gestures.

With the development of information technology, smart terminals have become an indispensable part of people's lives, and users can perform a variety of operations through them. At present, a smart terminal is usually operated with a mouse. In actual use, however, situations in which the mouse cannot be used are unavoidable, for example a mouse failure or a depleted mouse battery. The prior art provides no fallback for such situations, so the user is left unable to operate the smart terminal, and the user experience is poor.

In view of the above technical problems, the embodiments of this specification provide a method, a device, and a terminal for simulating mouse operation using gestures. The technical solutions are as follows:

According to a first aspect of the embodiments of this specification, a method for simulating a mouse operation using gestures is provided. The method includes:

acquiring gesture information obtained by a gesture collection device collecting a user's gesture;

recognizing the gesture information to obtain a gesture operation event of the user;

searching a preset mapping set according to the gesture operation event of the user, the preset mapping set including at least one correspondence between a gesture operation event and a mouse operation event, wherein the mouse operation events include at least a mouse click event and a mouse movement event;

if the gesture operation event of the user is found in the preset mapping set, triggering the mouse operation event corresponding to the gesture operation event of the user.

According to a second aspect of the embodiments of this specification, a device for simulating mouse operation using gestures is provided. The device includes:

an acquisition module, configured to acquire gesture information obtained by a gesture collection device collecting a user's gesture;

a recognition module, configured to recognize the gesture information to obtain a gesture operation event of the user;

a search module, configured to search a preset mapping set according to the gesture operation event of the user, the preset mapping set including at least one correspondence between a gesture operation event and a mouse operation event, wherein the mouse operation events include at least a mouse click event and a mouse movement event;

a trigger module, configured to trigger the mouse operation event corresponding to the gesture operation event of the user if that gesture operation event is found in the preset mapping set.

According to a third aspect of the embodiments of this specification, a terminal is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements any of the methods for simulating mouse operation using gestures provided by the embodiments of this specification.

In the technical solutions provided by the embodiments of this specification, gesture information obtained by a gesture collection device collecting a user's gesture is acquired; the gesture information is recognized to obtain a gesture operation event of the user; a preset mapping set including at least one correspondence between gesture operation events and mouse operation events is searched according to that event; and if the gesture operation event of the user is found in the preset mapping set, the corresponding mouse operation event is triggered. Mouse operation is thus simulated with gestures, providing the user with a novel way to operate a smart terminal, which can meet user needs to a certain extent and improve the user experience.

It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the embodiments of this specification.

In addition, no single embodiment in this specification needs to achieve all of the effects described above.

In order to enable those skilled in the art to better understand the technical solutions in the embodiments of this specification, the technical solutions are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this specification shall fall within the protection scope.
Please refer to FIG. 1, a schematic diagram of an application scenario for simulating mouse operation using gestures according to an exemplary embodiment of this specification. FIG. 1 includes a smart terminal 110 and an image capture device 120. In this scenario, the image capture device 120 collects gesture information for a user gesture (not shown in FIG. 1) and transmits the collected gesture information to the smart terminal 110. The smart terminal 110 can then execute the method for simulating mouse operation using gestures provided by the embodiments of this specification, so as to determine the user gesture, determine the mouse operation event corresponding to that gesture, and trigger the mouse operation event, thereby operating the smart terminal 110.
For example, suppose a user watches a video through the smart terminal 110 and, during playback, wants to pause the video. If the user pauses playback by operating a mouse (not shown in FIG. 1), the specific actions may include: the user moves the mouse so that the mouse pointer appears on the display interface of the smart terminal 110; the user then moves the mouse until the pointer reaches the "pause" control; finally, the user presses and releases the left mouse button, and the video pauses when the button is released.
Corresponding to the above process of pausing video playback with a mouse, in the embodiments of this specification the user first makes, facing the image capture device 120, a gesture instructing the smart terminal 110 to show the mouse pointer on its display interface, and the smart terminal 110 displays the pointer accordingly. The user then makes a gesture instructing the smart terminal 110 to move the pointer, and the terminal moves the pointer on the display interface until it reaches the "pause" control. Finally, the user makes a gesture indicating that the left mouse button is pressed and released, and the smart terminal 110 triggers a pointer click on the "pause" control, pausing the video.
It should be noted that collecting the gesture information of the user's gesture through the image capture device 120 is merely an example. In practical applications, the gesture information can also be collected by other devices, such as an infrared sensor; the embodiments of this specification place no restriction on this.

It should also be noted that the arrangement of the image capture device 120 and the smart terminal 110 shown in FIG. 1 is merely an example. In practical applications, the smart terminal 110 may have a built-in camera or infrared sensor; the embodiments of this specification place no restriction on this either.
In the following, with reference to the application scenario shown in FIG. 1, the method for simulating mouse operation using gestures provided by the embodiments of this specification is described through the embodiments below.

Please refer to FIG. 2, a flowchart of an embodiment of a method for simulating mouse operation using gestures according to an exemplary embodiment of this specification. Based on the application scenario shown in FIG. 1, the method can be applied to the smart terminal 110 illustrated in FIG. 1 and includes the following steps:
Step 202: Acquire gesture information obtained by a gesture collection device collecting a user's gesture.

In the implementation of this specification, based on the application scenario illustrated in FIG. 1, the image capture device 120 is the gesture collection device; the gesture information obtained by collecting the user's gesture is therefore the user gesture image captured by the image capture device 120.

In addition, as described above, the gesture collection device can also be an infrared sensor; correspondingly, the gesture information obtained by collecting the user's gesture is the infrared sensing signal collected by the infrared sensor.
Step 204: Recognize the gesture information to obtain a gesture operation event of the user.

First, in the embodiments of this specification, in order to simulate mouse operation using gestures, some gestures can be defined based on how a mouse is actually operated; for convenience of description, the defined gestures are referred to as preset gestures.

In one embodiment, three types of preset gestures can be defined, used respectively to indicate that the mouse pointer should be displayed on the display interface of the smart terminal 110, that the left mouse button is pressed, and that the left mouse button is not pressed. For example, please refer to FIG. 3, a schematic diagram of preset gestures according to an exemplary embodiment of this specification. As shown in FIG. 3, the preset gestures may include at least: a fist gesture (FIG. 3(a)), an open-palm gesture (FIG. 3(b)), and a single-finger-extended gesture (FIG. 3(c)). The open-palm gesture indicates that the mouse pointer should be displayed on the display interface of the smart terminal 110, the fist gesture indicates that the left mouse button is pressed, and the single-finger-extended gesture indicates that the left mouse button is not pressed.

At the same time, in order to simulate mouse operation using gestures, mouse operation events can be divided by the types of mouse operations in practical applications; for example, at least two types can be distinguished: mouse click events and mouse movement events. Further, based on the operating characteristics of each type of mouse operation event, a correspondence between mouse operation events and gesture operation events is established. For a mouse movement event, the operating characteristic is "the mouse moves"; on this basis, a first gesture operation event can be defined to indicate that the user's gesture has moved, and this first gesture operation event corresponds to the mouse movement event. For a mouse click event, the operating characteristic is "the left mouse button is pressed", which involves a change of the user's gesture; on this basis, a second gesture operation event can be defined to indicate that the user's gesture has changed, and this second gesture operation event corresponds to the mouse click event.
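To make the definitions above concrete, the following is a minimal Python sketch of the three preset gestures and the two classes of gesture operation events; the enum names are illustrative, not identifiers from the specification.

```python
from enum import Enum, auto

class PresetGesture(Enum):
    FIST = auto()           # left mouse button held down
    OPEN_PALM = auto()      # show the mouse pointer on the display interface
    SINGLE_FINGER = auto()  # left mouse button not pressed

class GestureOperationEvent(Enum):
    MOVE = auto()    # first gesture operation event: same gesture, position changed
    CHANGE = auto()  # second gesture operation event: the gesture itself changed
```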
Based on the preset gestures and the definitions of the first and second gesture operation events above, the gesture operation events illustrated in Table 1 below can be obtained:
Table 1

As can be seen from Table 1, using these gesture operation events to map the mouse movement event and the mouse click event makes it possible to map gesture operation events onto existing mouse events. Table 2 below shows an example of the mapping relationship between gesture operation events and existing mouse events:
Table 2

As can be seen from Table 2, in the embodiments of this specification, by making a preset gesture the user produces the corresponding gesture operation event and thereby reuses an existing mouse event, which keeps compatibility with the mouse events encapsulated inside existing controls.
In addition to the gesture operation events illustrated in Table 1, the gesture operation events may also include: a palm-to-single-finger event, indicating that the state of the mouse pointer is adjusted from the hovering state to the working state; and a single-finger-to-palm event, indicating that the state of the mouse pointer is adjusted from the working state to the hovering state.

It should be noted that when the mouse pointer is in the hovering state, it cannot be moved on the display interface. To move the pointer, the user can first switch it from the hovering state to the working state through a palm-to-single-finger event.
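Since the contents of Tables 1 and 2 are not reproduced in this text, the mapping below is a plausible reconstruction from the surrounding prose rather than the patent's exact tables; every key and value name is illustrative, with the right-hand names chosen to match the conventional mouse events a GUI toolkit exposes.

```python
# Assumed reconstruction of the gesture-event -> mouse-event mapping described
# around Tables 1 and 2; all names here are illustrative placeholders.
GESTURE_EVENT_TO_MOUSE_EVENT = {
    "moved_while_single_finger": "mousemove",          # pointer in working state, hand moved
    "single_finger_to_fist":     "mousedown",          # left button pressed
    "fist_to_single_finger":     "mouseup",            # left button released (completes a click)
    "palm_to_single_finger":     "pointer_to_working", # hover -> working state
    "single_finger_to_palm":     "pointer_to_hover",   # working -> hover state
    "open_palm_shown":           "show_pointer",       # display the mouse pointer
}
```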
As described above, both the first gesture operation event and the second gesture operation event concern the difference between the user's two consecutive gestures (specifically: the gestures are the same but their relative position has changed, or the gestures are different). Therefore, in the embodiments of this specification, the currently acquired gesture information and the previously acquired gesture information can be recognized separately, yielding the gesture the user is making now and the gesture the user made last time. For convenience of description, the current gesture is referred to as the first gesture and the previous gesture as the second gesture.

Subsequently, it can first be determined whether the first gesture and the second gesture belong to the preset gestures. If so, it is further determined whether the first gesture and the second gesture are the same. If they are the same, the physical displacement of the first gesture relative to the second gesture is determined; if this displacement is greater than a preset threshold, a first gesture operation event is obtained, indicating that the user's gesture has moved from the position of the second gesture to the position of the first gesture. If the first gesture and the second gesture are different, a second gesture operation event is obtained, indicating that the user's gesture has changed from the second gesture to the first gesture.

It should be noted that, in the above process, a first gesture operation event is produced only when the physical displacement of the first gesture relative to the second gesture is greater than the preset threshold, which avoids erroneous operations caused by slight movements of the user.
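A minimal sketch of the decision sequence just described, assuming 2-D gesture positions and an arbitrary displacement threshold; the function and parameter names are illustrative, not from the specification.

```python
import math

PRESETS = {"fist", "open_palm", "single_finger"}

def derive_gesture_event(first, first_pos, second, second_pos, threshold=10.0):
    """first/second are the current/previous gesture labels; *_pos are (x, y)
    positions. Returns a (kind, payload) tuple, or None if no event results."""
    if first not in PRESETS or second not in PRESETS:
        return ("pointer_to_hover", None)      # unrecognized gesture: pointer hovers
    if first == second:
        dx, dy = first_pos[0] - second_pos[0], first_pos[1] - second_pos[1]
        if math.hypot(dx, dy) > threshold:     # ignore slight movements
            return ("move", (dx, dy))          # first gesture operation event
        return None                            # displacement too small: no event
    return ("change", (second, first))         # second gesture operation event
```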
In addition, in the embodiment of the present specification, if the recognized gesture does not belong to the above-mentioned preset gesture, the state of the mouse pointer may be set to a hovering state.
The following takes gesture information in the form of a user gesture image as an example to describe the recognition process:

First, the user's gesture region is extracted from the user gesture image. In practical applications the user's gesture is usually held in front of the body, so the fact that the gesture region and the background region have different depth values can be used to extract the gesture region from the image. Specifically, a gray histogram of the image is computed from the depth values of its pixels; the histogram gives the number of pixels at each gray level in the image. Because the gesture region is small relative to the background and has smaller gray values, the histogram can be scanned from large gray values to small ones for a gray value at which the pixel count changes markedly, and that gray value is taken as the threshold for region segmentation. For example, if the threshold is 235, the user gesture image is binarized against it, and in the resulting binary image the region represented by white pixels is the gesture region.
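A sketch of the histogram-based segmentation described above, using NumPy; the patent only says "a gray value at which the pixel count changes markedly", so the jump test below (and its ratio) is an assumption.

```python
import numpy as np

def segment_gesture_region(gray, jump_ratio=3.0):
    """Binarize a depth-derived grayscale image (uint8); returns a mask in
    which white (255) pixels mark the darker, smaller gesture region."""
    hist = np.bincount(gray.ravel(), minlength=256)
    threshold = 235                    # fallback: the example value in the text
    for g in range(254, 0, -1):        # scan gray values from large to small
        if hist[g] > jump_ratio * (hist[g + 1] + 1):  # assumed "marked change" test
            threshold = g
            break
    return np.where(gray < threshold, 255, 0).astype(np.uint8)
```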
Further, feature extraction is performed on the gesture region using a preset feature extraction algorithm; for example, the preset algorithm may be the SIFT feature extraction algorithm, a wavelet- and relative-moment-based shape feature extraction algorithm, a model-based method, and so on. The extracted features may include the centroid of the gesture region, the feature vector of the gesture region, the number of fingers, and the like.
Finally, gesture recognition is performed on the extracted features to determine the gesture made by the user.
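As an illustration of the features named above, the sketch below computes the centroid and area of the gesture region from the binary mask; richer descriptors (SIFT, finger count) would be added in the same place, for example with OpenCV, and are only noted in comments.

```python
import numpy as np

def extract_gesture_features(mask):
    """mask: binary image from segment_gesture_region(), white = gesture."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # no gesture region found in this frame
    features = {
        "centroid": (float(xs.mean()), float(ys.mean())),  # centroid of the region
        "area": int(xs.size),                              # region size in pixels
    }
    # A fuller implementation would also extract a feature vector (e.g. SIFT
    # via OpenCV) and estimate the finger count from contour convexity defects.
    return features
```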
In the scenario where the first gesture and the second gesture are the same, the specific process of determining the physical displacement of the first gesture relative to the second gesture can be found in the prior art and is not detailed in the embodiments of this specification.

Subsequently, the determined physical displacement can be converted into inches and divided by the actual distance (also in inches) covered by each pixel on the screen of the smart terminal 110; the result is the number of pixels by which the mouse pointer should move.
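The conversion reduces to a single division; a sketch with illustrative parameter names (the usage values are made up):

```python
def displacement_to_pixels(displacement_inches, inches_per_pixel):
    """Divide the physical displacement (in inches) by the actual distance
    (also in inches) that one screen pixel covers."""
    return round(displacement_inches / inches_per_pixel)

# e.g. a 0.5-inch hand movement on a display where each pixel covers
# 1/96 of an inch moves the pointer 48 pixels
assert displacement_to_pixels(0.5, 1 / 96) == 48
```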
Step 206: Search a preset mapping set according to the gesture operation event of the user. The preset mapping set includes at least one correspondence between gesture operation events and mouse operation events, wherein the mouse operation events include at least a mouse click event and a mouse movement event.
Step 208: If a gesture operation event of the user is found in the preset mapping set, a mouse operation event corresponding to the gesture operation event of the user is triggered.
The above steps 206 to 208 are described in detail as follows:
In the embodiments of this specification, a mapping set can be configured in advance, including at least one correspondence between gesture operation events and mouse operation events. For example, following the description above, the mapping set may be as shown in Table 3 below:
Table 3

Based on the mapping set illustrated in Table 3 above, in the embodiments of this specification, after the gesture operation event of the user is obtained, the mapping set is searched according to that event; if the gesture operation event is found, the corresponding mouse operation event is triggered.
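Steps 206 and 208 amount to a dictionary lookup followed by a dispatch; a minimal sketch, with handler functions standing in for whatever event-injection mechanism the platform provides (all names are illustrative):

```python
def handle_gesture_event(gesture_event, mapping_set, handlers):
    """Step 206: look the gesture operation event up in the preset mapping
    set; step 208: trigger the mapped mouse operation event if it is found."""
    mouse_event = mapping_set.get(gesture_event)
    if mouse_event is None:
        return False                 # event not in the mapping set: do nothing
    handlers[mouse_event]()          # trigger the corresponding mouse event
    return True

# usage with illustrative names
mapping = {"move": "mouse_move", "change": "mouse_click"}
handlers = {"mouse_move": lambda: print("inject mouse move"),
            "mouse_click": lambda: print("inject mouse click")}
handle_gesture_event("change", mapping, handlers)  # prints "inject mouse click"
```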
In the technical solution provided by the present invention, gesture information obtained by a gesture collection device collecting a user's gesture is acquired; the gesture information is recognized to obtain a gesture operation event of the user; a preset mapping set including at least one correspondence between gesture operation events and mouse operation events is searched according to that event; and if the user's gesture operation event is found in the preset mapping set, the corresponding mouse operation event is triggered. Mouse operation is thus simulated using gestures, providing the user with a novel way to operate a smart terminal, which can meet user needs to a certain extent and improve the user experience.
Corresponding to the foregoing method embodiments, the embodiments of this specification further provide a device for simulating mouse operation using gestures. Please refer to FIG. 4, a block diagram of an embodiment of such a device according to an exemplary embodiment of this specification. The device may include: an acquisition module 41, a recognition module 42, a search module 43, and a trigger module 44.
The acquisition module 41 may be used to acquire gesture information obtained by a gesture collection device collecting a user's gesture;

the recognition module 42 may be used to recognize the gesture information to obtain a gesture operation event of the user;

the search module 43 may be used to search a preset mapping set according to the gesture operation event of the user, the preset mapping set including at least one correspondence between a gesture operation event and a mouse operation event, wherein the mouse operation events include at least a mouse click event and a mouse movement event;

the trigger module 44 may be used to trigger the mouse operation event corresponding to the gesture operation event of the user if that gesture operation event is found in the preset mapping set.
In one embodiment, the gesture acquisition device is an image acquisition device, and the gesture information is a user gesture image collected by the image acquisition device.
In an embodiment, the identification module 42 may include (not shown in FIG. 4):
An area capturing sub-module for capturing a gesture area of a user from the user gesture image;
A feature extraction submodule, configured to use a preset feature extraction algorithm to perform feature extraction on the gesture region;
The feature recognition sub-module is used for gesture recognition through the captured features to obtain the gesture operation event of the user.
In one embodiment, the gesture operation event of the user includes at least: a first gesture operation event indicating that the user's gesture has moved, and a second gesture operation event indicating that the user's gesture has changed;

wherein the first gesture operation event corresponds to the mouse movement event, and the second gesture operation event corresponds to the mouse click event.
In an embodiment, the identification module 42 may include (not shown in FIG. 4):
A gesture recognition sub-module, configured to recognize the currently acquired gesture information and the previously acquired gesture information respectively, to obtain the first gesture currently made by the user and the second gesture previously made by the user;
A first determining sub-module, configured to determine whether the first gesture and the second gesture belong to a preset gesture;
A second determining submodule, configured to determine whether the first gesture and the second gesture are the same if the first gesture and the second gesture are preset gestures;
A displacement determining submodule, configured to determine a physical displacement of the first gesture relative to the second gesture if the first gesture is the same as the second gesture;
A first determining sub-module, configured to obtain, if the physical displacement is greater than a preset threshold, a first gesture operation event indicating that the user's gesture has moved from the position of the second gesture to the position of the first gesture;
A second determining sub-module, configured to obtain, if the first gesture is different from the second gesture, a second gesture operation event indicating that the user's gesture has changed from the second gesture to the first gesture.
In an embodiment, the preset gesture includes at least:
Fist gesture, palm open gesture, single finger straight gesture.
It can be understood that the acquisition module 41, the recognition module 42, the search module 43, and the trigger module 44 are four functionally independent modules; they can be configured in the device together, as shown in FIG. 4, or each configured separately. The structure shown in FIG. 4 should therefore not be understood as limiting the solutions of the embodiments of this specification.
In addition, the implementation of the functions and roles of each module in the above device is detailed in the implementation of the corresponding steps of the above method, and is not repeated here.
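A compact sketch of how the four modules of FIG. 4 could be composed, purely illustrative (the patent does not prescribe an implementation):

```python
class GestureMouseDevice:
    """Mirrors FIG. 4: acquisition (41), recognition (42), search (43),
    trigger (44); each dependency is any callable filling that role."""

    def __init__(self, acquire, recognize, mapping_set, trigger):
        self.acquire = acquire          # acquisition module 41
        self.recognize = recognize      # recognition module 42
        self.mapping_set = mapping_set  # searched by module 43
        self.trigger = trigger          # trigger module 44

    def on_frame(self):
        info = self.acquire()                      # gesture information
        event = self.recognize(info)               # gesture operation event
        mouse_event = self.mapping_set.get(event)  # search module 43
        if mouse_event is not None:
            self.trigger(mouse_event)              # trigger module 44
```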
The embodiments of this specification further provide a terminal, which includes at least a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the foregoing method for simulating mouse operation using gestures. The method at least includes: acquiring gesture information obtained by a gesture collection device collecting a user's gesture; recognizing the gesture information to obtain a gesture operation event of the user; searching a preset mapping set according to the gesture operation event of the user, the preset mapping set including at least one correspondence between gesture operation events and mouse operation events, wherein the mouse operation events include at least a mouse click event and a mouse movement event; and, if the gesture operation event of the user is found in the preset mapping set, triggering the mouse operation event corresponding to the gesture operation event of the user.
In one embodiment, the gesture acquisition device is an image acquisition device, and the gesture information is a user gesture image collected by the image acquisition device.
In one embodiment, the step of recognizing the gesture information to obtain a gesture operation event of the user includes:
Extracting a gesture area of the user from the user gesture image;
Performing feature extraction on the gesture region using a preset feature extraction algorithm;
Gesture recognition is performed through the captured features to obtain the gesture operation event of the user.
In one embodiment, the gesture operation event of the user includes at least: a first gesture operation event indicating that the user's gesture has moved, and a second gesture operation event indicating that the user's gesture has changed;

wherein the first gesture operation event corresponds to the mouse movement event, and the second gesture operation event corresponds to the mouse click event.
In one embodiment, the step of recognizing the gesture information to obtain a gesture operation event of the user includes:
Recognizing the currently acquired gesture information and the previously acquired gesture information, respectively, to obtain a first gesture currently made by the user and a second gesture previously made by the user;
Determining whether the first gesture and the second gesture belong to a preset gesture, and if so, determining whether the first gesture and the second gesture are the same;
If they are the same, determining the physical displacement of the first gesture relative to the second gesture; and if the physical displacement is greater than a preset threshold, obtaining a first gesture operation event indicating that the user's gesture has moved from the position of the second gesture to the position of the first gesture;
If they are different, a second gesture operation event indicating that the gesture of the user is changed from the second gesture to the first gesture is obtained.
In one embodiment, the preset gestures include at least: a fist gesture, a palm opening gesture, and a single-finger straightening gesture.
FIG. 5 shows a more specific schematic diagram of the hardware structure of a terminal provided by an embodiment of the present specification. The terminal may include a processor 510, a memory 520, an input / output interface 530, a communication interface 540, and a bus 550. The processor 510, the memory 520, the input / output interface 530, and the communication interface 540 implement a communication connection within the device through a bus 550.
The processor 510 may be implemented as a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and executes the relevant programs to implement the technical solutions provided by the embodiments of this specification.
The memory 520 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 520 may store an operating system and other application programs. When the technical solutions provided in the embodiments of the present specification are implemented by software or firmware, related program codes are stored in the memory 520 and are called and executed by the processor 510.
The input/output interface 530 is used to connect input/output modules for information input and output. The input/output modules can be configured in the device as components (not shown in FIG. 5) or externally connected to the device to provide the corresponding functions. Input devices may include a keyboard, mouse, touch screen, microphone, and various sensors; output devices may include a display, speaker, vibrator, and indicator lights.
The communication interface 540 is used to connect a communication module (not shown in FIG. 5) to implement communication interaction between the device and other devices. The communication module can realize communication through a wired method (such as USB, network cable, etc.), and can also realize communication through a wireless method (such as mobile network, WIFI, Bluetooth, etc.).
The bus 550 includes a path for transmitting information between various components of the device (for example, the processor 510, the memory 520, the input / output interface 530, and the communication interface 540).
It should be noted that although the above device shows only the processor 510, the memory 520, the input/output interface 530, the communication interface 540, and the bus 550, in specific implementations the device may also include other components necessary for normal operation. In addition, those skilled in the art will understand that the above device may also include only the components necessary to implement the solutions of the embodiments of this specification, rather than all the components shown in the figure.
The embodiments of this specification also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the aforementioned method of simulating a mouse operation using gestures. The method at least includes: acquiring gesture information obtained by a gesture collection device collecting a user's gesture; recognizing the gesture information to obtain a gesture operation event of the user; searching a preset mapping set according to the gesture operation event of the user, the preset mapping set including at least one correspondence between gesture operation events and mouse operation events, wherein the mouse operation events include at least a mouse click event and a mouse movement event; and, if the gesture operation event of the user is found in the preset mapping set, triggering the mouse operation event corresponding to the gesture operation event of the user.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
From the description of the above embodiments, those skilled in the art can clearly understand that the embodiments of this specification can be implemented by means of software plus the necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the embodiments of this specification, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments, or in parts of the embodiments, of this specification.
The systems, devices, modules, or units described in the above embodiments may be implemented by a computer chip or entity, or by a product with a certain function. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation device, e-mail device, game console, tablet computer, wearable device, or a combination of any of these devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to each other, and each embodiment focuses on its differences from the others. In particular, the device embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments. The device embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and when implementing the solutions of the embodiments of this specification, the functions of the modules may be realized in one or more pieces of software and/or hardware. Some or all of the modules may also be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The above are only specific implementations of the embodiments of this specification. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the embodiments of this specification, and such improvements and refinements shall also fall within the protection scope of the embodiments of this specification.

110‧‧‧smart terminal

120‧‧‧image capture device

202‧‧‧step

204‧‧‧step

206‧‧‧step

208‧‧‧step

41‧‧‧acquisition module

42‧‧‧recognition module

43‧‧‧search module

44‧‧‧trigger module

510‧‧‧processor

520‧‧‧memory

530‧‧‧input/output interface

540‧‧‧communication interface

550‧‧‧bus

In order to more clearly explain the embodiments of this specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely some of the embodiments recorded in this specification, and those of ordinary skill in the art can obtain other drawings from them.

FIG. 1 is a schematic diagram of an application scenario for simulating mouse operation using gestures according to an exemplary embodiment of this specification;

FIG. 2 is a flowchart of an embodiment of a method for simulating mouse operation using gestures according to an exemplary embodiment of this specification;

FIG. 3 is a schematic diagram of preset gestures according to an exemplary embodiment of this specification;

FIG. 4 is a block diagram of an embodiment of a device for simulating mouse operation using gestures according to an exemplary embodiment of this specification;

FIG. 5 is a more specific schematic diagram of a terminal hardware structure provided by an embodiment of this specification.

Claims (13)

一種利用手勢模擬滑鼠操作的方法,該方法包括: 獲取手勢採集設備採集使用者手勢所得到的手勢資訊; 對該手勢資訊進行識別,得到使用者的手勢操作事件; 根據該使用者的手勢操作事件查找預設映射集,該預設映射集包括至少一組手勢操作事件與滑鼠操作事件的對應關係,其中,該滑鼠操作事件至少包括滑鼠點選事件、滑鼠移動事件; 若在該預設映射集中查找到該使用者的手勢操作事件,則觸發與該使用者的手勢操作事件對應的滑鼠操作事件。A method for simulating mouse operation by using gestures. The method includes: Acquiring gesture information obtained by a gesture collection device collecting a user's gesture; Identify the gesture information to obtain a gesture operation event of the user; Finding a preset mapping set according to the gesture operation event of the user, the preset mapping set includes a correspondence relationship between at least one set of gesture operation events and a mouse operation event, wherein the mouse operation event includes at least a mouse click event, Mouse movement event If a gesture operation event of the user is found in the preset mapping set, a mouse operation event corresponding to the gesture operation event of the user is triggered. 根據申請專利範圍第1項之方法,該手勢採集設備為影像採集設備,該手勢資訊為該影像採集設備採集到的使用者手勢影像。According to the method in the first patent application scope, the gesture acquisition device is an image acquisition device, and the gesture information is a user gesture image collected by the image acquisition device. 根據申請專利範圍第2項之方法,該對該手勢資訊進行識別,得到使用者的手勢操作事件,包括: 在該使用者手勢影像中擷取出使用者的手勢區域; 利用預設的特徵擷取算法對該手勢區域進行特徵擷取; 通過擷取到的特徵進行手勢識別,得到使用者的手勢操作事件。According to the method in the second scope of the patent application, the gesture information is identified to obtain a gesture operation event of the user, including: Extracting a user's gesture area from the user's gesture image; Feature extraction of the gesture area using a preset feature extraction algorithm; Gesture recognition is performed through the captured features to obtain the gesture operation event of the user. 根據申請專利範圍第1項之方法,該使用者的手勢操作事件至少包括:用於表示該使用者的手勢發生移動的第一手勢操作事件、用於表示該使用者的手勢發生變換的第二手勢操作事件; 其中,該第一手勢操作事件對應該滑鼠移動事件,該第二手勢操作事件對應該滑鼠點擊事件。According to the method of claim 1 in the scope of patent application, the gesture operation event of the user includes at least a first gesture operation event indicating that the user ’s gesture has moved, and a second gesture operation event indicating that the user ’s gesture has changed. Gesture operation event; The first gesture operation event corresponds to a mouse movement event, and the second gesture operation event corresponds to a mouse click event. 根據申請專利範圍第4項之方法,該對該手勢資訊進行識別,得到使用者的手勢操作事件,包括: 分別對當前獲取到的手勢資訊與前一次獲取到的手勢資訊進行識別,得到該使用者當前做出的第一手勢與該使用者前一次做出的第二手勢; 判斷該第一手勢與該第二手勢是否屬於預設手勢,若是,則判斷該第一手勢與該第二手勢是否相同; 若相同,則確定該第一手勢相對於該第二手勢的物理位移;若該物理位移大於預設閾值,則得到用於表示該使用者的手勢由該第二手勢所在位置移動到該第一手勢所在位置的第一手勢操作事件; 若不同,則得到用於表示該使用者的手勢由該第二手勢變換為該第一手勢的第二手勢操作事件。According to the method in the scope of patent application No. 
4, the gesture information is identified to obtain a gesture operation event of the user, including: Recognize the currently acquired gesture information and the previously acquired gesture information, respectively, to obtain a first gesture currently made by the user and a second gesture previously made by the user; Determining whether the first gesture and the second gesture belong to a preset gesture, and if so, determining whether the first gesture and the second gesture are the same; If they are the same, determine the physical displacement of the first gesture with respect to the second gesture; if the physical displacement is greater than a preset threshold, then a gesture indicating that the user is moved from the position of the second gesture to the A first gesture operation event where the first gesture is located; If different, a second gesture operation event indicating that the user's gesture is transformed from the second gesture to the first gesture is obtained. 根據申請專利範圍第5項之方法,該預設手勢至少包括: 握拳手勢、手掌打開手勢、單指伸直手勢。According to the method of claim 5 in the patent application scope, the preset gesture includes at least: Fist gesture, palm open gesture, single finger straight gesture. 一種利用手勢模擬滑鼠操作的裝置,該裝置包括: 獲取模組,用於獲取手勢採集設備採集使用者手勢所得到的手勢資訊; 識別模組,用於對該手勢資訊進行識別,得到使用者的手勢操作事件; 查找模組,用於根據該使用者的手勢操作事件查找預設映射集,該預設映射集包括至少一組手勢操作事件與滑鼠操作事件的對應關係,其中,該滑鼠操作事件至少包括滑鼠點選事件、滑鼠移動事件; 觸發模組,用於若在該預設映射集中查找到該使用者的手勢操作事件,則觸發與該使用者的手勢操作事件對應的滑鼠操作事件。A device for simulating mouse operation using gestures, the device includes: An acquisition module, configured to acquire gesture information obtained by a gesture acquisition device collecting user gestures; A recognition module for recognizing the gesture information to obtain a gesture operation event of the user; The searching module is configured to find a preset mapping set according to the gesture operation event of the user, and the preset mapping set includes a correspondence relationship between at least one set of gesture operation events and a mouse operation event, wherein the mouse operation event includes at least Mouse click event, mouse movement event; The triggering module is configured to trigger a mouse operation event corresponding to the gesture operation event of the user if the gesture operation event of the user is found in the preset mapping set. 根據申請專利範圍第7項之裝置,該手勢採集設備為影像採集設備,該手勢資訊為該影像採集設備採集到的使用者手勢影像。According to the device in the seventh scope of the patent application, the gesture acquisition device is an image acquisition device, and the gesture information is a user gesture image collected by the image acquisition device. 根據申請專利範圍第8項之裝置,該識別模組包括: 區域擷取子模組,用於在該使用者手勢影像中擷取出使用者的手勢區域; 特徵擷取子模組,用於利用預設的特徵擷取算法對該手勢區域進行特徵擷取; 特徵識別子模組,用於通過擷取到的特徵進行手勢識別,得到使用者的手勢操作事件。According to the device in the scope of patent application No. 8, the identification module includes: An area extraction sub-module for extracting a user's gesture area from the user's gesture image; A feature extraction sub-module, for extracting features of the gesture area by using a preset feature extraction algorithm; The feature recognition sub-module is used for gesture recognition through the captured features to obtain the gesture operation event of the user. 
10. The apparatus according to claim 7, wherein the gesture operation events of the user comprise at least: a first gesture operation event indicating that the gesture of the user has moved, and a second gesture operation event indicating that the gesture of the user has changed; wherein the first gesture operation event corresponds to the mouse movement event, and the second gesture operation event corresponds to the mouse click event.

11. The apparatus according to claim 10, wherein the recognition module comprises: a gesture recognition submodule, configured to recognize the currently acquired gesture information and the previously acquired gesture information, respectively, to obtain a first gesture currently made by the user and a second gesture previously made by the user; a first determination submodule, configured to determine whether the first gesture and the second gesture belong to preset gestures; a second determination submodule, configured to determine, if the first gesture and the second gesture belong to the preset gestures, whether the first gesture and the second gesture are the same; a displacement determination submodule, configured to determine, if the first gesture is the same as the second gesture, a physical displacement of the first gesture relative to the second gesture; a first event determination submodule, configured to obtain, if the physical displacement is greater than a preset threshold, a first gesture operation event indicating that the gesture of the user has moved from the position of the second gesture to the position of the first gesture; and a second event determination submodule, configured to obtain, if the first gesture is different from the second gesture, a second gesture operation event indicating that the gesture of the user has changed from the second gesture to the first gesture.

12. The apparatus according to claim 11, wherein the preset gestures comprise at least: a fist gesture, an open-palm gesture, and a single-finger-extended gesture.

13. A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method according to any one of claims 1 to 6.
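The sketches below are editorial illustrations, not part of the patent text. First, the three-stage recognition pipeline of claims 3 and 9 (region extraction, preset feature extraction, gesture recognition), assuming Python with OpenCV 4; the skin-colour bounds, the solidity/aspect-ratio features, and the returned gesture labels are assumptions of the sketch, since the claims leave the concrete algorithms open:

```python
import cv2


def extract_gesture_region(frame_bgr):
    """Stage 1 (claim 3): take the largest skin-coloured contour as the
    user's gesture region. Assumes the OpenCV 4 findContours signature."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Rough YCrCb skin range; would need tuning per camera and lighting.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None


def extract_features(contour):
    """Stage 2: a stand-in "preset feature extraction algorithm" using
    solidity, bounding-box aspect ratio, and centroid."""
    area = cv2.contourArea(contour)
    hull_area = cv2.contourArea(cv2.convexHull(contour))
    x, y, w, h = cv2.boundingRect(contour)
    return {
        "solidity": area / hull_area if hull_area else 0.0,
        "aspect": h / float(w) if w else 0.0,
        "centroid": (x + w / 2.0, y + h / 2.0),
    }


def recognize(frame_bgr):
    """Stage 3: classify the features into the preset gestures of claim 6.
    Returns (gesture_name, centroid), or None if no hand-like blob."""
    contour = extract_gesture_region(frame_bgr)
    if contour is None:
        return None
    feats = extract_features(contour)
    if feats["solidity"] > 0.9:
        gesture = "fist"        # compact blob, few concavities
    elif feats["aspect"] > 2.0:
        gesture = "one_finger"  # tall, thin blob
    else:
        gesture = "palm_open"   # spread fingers lower the solidity
    return gesture, feats["centroid"]
```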
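Next, the event logic of claims 1, 4, and 5: two consecutively recognized gestures are compared to derive a "moved" or "changed" gesture operation event, the event is looked up in the preset mapping set, and a hit triggers the corresponding mouse operation event. The gesture names, the 15-pixel threshold, and the print stub standing in for a real OS input call are all assumptions:

```python
import math

PRESET_GESTURES = {"fist", "palm_open", "one_finger"}
MOVE_THRESHOLD = 15.0  # the claimed "preset threshold", in pixels

# Preset mapping set: gesture operation event -> mouse operation event.
PRESET_MAPPING = {
    ("moved", "one_finger"): "mouse_move",
    ("changed", "palm_open", "fist"): "mouse_click",
}


def derive_gesture_event(prev, curr):
    """prev/curr are (gesture_name, (x, y)) pairs for the previous and
    current frame; returns (gesture_operation_event, payload) or None."""
    (g1, p1), (g2, p2) = curr, prev  # "first" = current, "second" = previous
    if g1 not in PRESET_GESTURES or g2 not in PRESET_GESTURES:
        return None
    if g1 == g2:
        dx, dy = p1[0] - p2[0], p1[1] - p2[1]
        if math.hypot(dx, dy) > MOVE_THRESHOLD:
            return ("moved", g1), (dx, dy)  # first gesture operation event
        return None                         # same gesture, negligible motion
    return ("changed", g2, g1), None        # second gesture operation event


def trigger_mouse_event(mouse_event, payload):
    """Stand-in trigger; a real implementation would call an OS input API."""
    print("trigger:", mouse_event, payload if payload else "")


def handle(prev, curr):
    derived = derive_gesture_event(prev, curr)
    if derived is None:
        return
    gesture_event, payload = derived
    mouse_event = PRESET_MAPPING.get(gesture_event)  # search the mapping set
    if mouse_event is not None:                      # found -> trigger (claim 1)
        trigger_mouse_event(mouse_event, payload)


if __name__ == "__main__":
    handle(("one_finger", (100, 100)), ("one_finger", (140, 130)))  # mouse_move
    handle(("palm_open", (100, 100)), ("fist", (102, 101)))         # mouse_click
```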
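Finally, claims 7 to 11 recast the same logic as an apparatus of modules and submodules. A structural sketch with invented class names, using constructor injection so that, for example, the recognize and derive_gesture_event helpers from the sketches above can be plugged in:

```python
class RecognitionModule:
    """Recognition module (claims 7 and 11): remembers the previously
    acquired gesture and derives a gesture operation event from it and
    the currently acquired one."""

    def __init__(self, recognize_fn, derive_fn):
        self._recognize = recognize_fn  # gesture info -> (gesture, position)
        self._derive = derive_fn        # (prev, curr) -> event or None
        self._previous = None

    def on_gesture_info(self, gesture_info):
        current = self._recognize(gesture_info)
        event = None
        if current is not None and self._previous is not None:
            event = self._derive(self._previous, current)
        self._previous = current
        return event


class GestureMouseApparatus:
    """Wires the recognition, search, and trigger modules of claim 7;
    acquisition of the gesture information is assumed to happen upstream."""

    def __init__(self, recognition, mapping, trigger_fn):
        self._recognition = recognition
        self._mapping = mapping     # the preset mapping set
        self._trigger = trigger_fn  # trigger module

    def on_frame(self, gesture_info):
        derived = self._recognition.on_gesture_info(gesture_info)
        if derived is None:
            return
        gesture_event, payload = derived
        mouse_event = self._mapping.get(gesture_event)  # search module
        if mouse_event is not None:
            self._trigger(mouse_event, payload)         # trigger module
```

A hypothetical wiring would then be GestureMouseApparatus(RecognitionModule(recognize, derive_gesture_event), PRESET_MAPPING, trigger_mouse_event), fed one camera frame at a time.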
TW107147676A 2018-03-12 2018-12-28 Method, device and terminal for simulating mouse operation using gestures TWI695311B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201810200113.4 2018-03-12
CN201810200113.4A CN108446073A (en) Method, apparatus, and terminal for simulating mouse operation using gestures

Publications (2)

Publication Number Publication Date
TW201939260A true TW201939260A (en) 2019-10-01
TWI695311B TWI695311B (en) 2020-06-01

Family

ID=63194033

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107147676A TWI695311B (en) 2018-03-12 2018-12-28 Method, device and terminal for simulating mouse operation using gestures

Country Status (3)

Country Link
CN (1) CN108446073A (en)
TW (1) TWI695311B (en)
WO (1) WO2019174398A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112671972A (en) * 2020-12-21 2021-04-16 四川长虹电器股份有限公司 Method for controlling movement of large-screen television mouse by mobile phone

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108446073A (en) * 2018-03-12 2018-08-24 阿里巴巴集团控股有限公司 Method, apparatus, and terminal for simulating mouse operation using gestures
CN111221406B (en) * 2018-11-23 2023-10-13 杭州萤石软件有限公司 Information interaction method and device
CN109696958A (en) * 2018-11-28 2019-04-30 南京华捷艾米软件科技有限公司 Gesture control method and system based on depth-sensor gesture recognition
CN110221717B (en) * 2019-05-24 2024-07-09 李锦华 Virtual mouse driving device, gesture recognition method and device for virtual mouse
CN112068699A (en) * 2020-08-31 2020-12-11 北京市商汤科技开发有限公司 Interaction method, interaction device, electronic equipment and storage medium
CN114115536A (en) * 2021-11-22 2022-03-01 北京字节跳动网络技术有限公司 Interaction method, interaction device, electronic equipment and storage medium
CN114138119A (en) * 2021-12-08 2022-03-04 武汉卡比特信息有限公司 Gesture recognition system and method for mobile phone interconnection split screen projection

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339453B (en) * 2008-08-15 2012-05-23 广东威创视讯科技股份有限公司 Simulated mouse input method based on interactive input apparatus
GB2483168B (en) * 2009-10-13 2013-06-12 Pointgrab Ltd Computer vision gesture based control of a device
CN102854983B (en) * 2012-09-10 2015-12-02 中国电子科技集团公司第二十八研究所 A kind of man-machine interaction method based on gesture identification
CN103926999B (en) * 2013-01-16 2017-03-01 株式会社理光 Palm folding gesture identification method and device, man-machine interaction method and equipment
CN105980965A (en) * 2013-10-10 2016-09-28 视力移动科技公司 Systems, devices, and methods for touch-free typing
CN103530613B (en) * 2013-10-15 2017-02-01 易视腾科技股份有限公司 Target person hand gesture interaction method based on monocular video sequence
CN107885316A (en) * 2016-09-29 2018-04-06 阿里巴巴集团控股有限公司 Gesture-based interaction method and device
CN108446073A (en) * 2018-03-12 2018-08-24 阿里巴巴集团控股有限公司 Method, apparatus, and terminal for simulating mouse operation using gestures

Also Published As

Publication number Publication date
TWI695311B (en) 2020-06-01
CN108446073A (en) 2018-08-24
WO2019174398A1 (en) 2019-09-19

Similar Documents

Publication Publication Date Title
TWI695311B (en) Method, device and terminal for simulating mouse operation using gestures
CN102906671B (en) Gesture input device and gesture input method
WO2014045953A1 (en) Information processing device and method, and program
US20170083741A1 (en) Method and device for generating instruction
US12008167B2 (en) Action recognition method and device for target object, and electronic apparatus
MY195861A (en) Information Processing Method, Electronic Device, and Computer Storage Medium
CN112506340B (en) Equipment control method, device, electronic equipment and storage medium
CN108958627B (en) Touch operation method and device, storage medium and electronic equipment
JP2017531227A (en) Interface providing method and apparatus for recognizing operation in consideration of user's viewpoint
US9836130B2 (en) Operation input device, operation input method, and program
JPWO2011142317A1 (en) Gesture recognition apparatus, method, program, and computer-readable medium storing the program
US9430039B2 (en) Apparatus for controlling virtual mouse based on hand motion and method thereof
EP4030749B1 (en) Image photographing method and apparatus
US10345895B2 (en) Hand and finger line grid for hand based interactions
CN112486394A (en) Information processing method and device, electronic equipment and readable storage medium
JPWO2011142313A1 (en) Object recognition apparatus, method, program, and computer-readable medium storing the software
US20160140762A1 (en) Image processing device and image processing method
CN114360047A (en) Hand-lifting gesture recognition method and device, electronic equipment and storage medium
CN110213407B (en) Electronic device, operation method thereof and computer storage medium
JP2017004438A (en) Input device, finger-tip position detection method, and computer program for finger-tip position detection
Meng et al. Building smart cameras on mobile tablets for hand gesture recognition
US10620760B2 (en) Touch motion tracking and reporting technique for slow touch movements
TWI522892B (en) Electronic apparatus with virtual input feature
JP7213396B1 Electronic device and program
JP2018181169A Information processing apparatus, control method therefor, computer program, and storage medium