WO2015090075A1 - Touch screen terminal and gesture recognition method and system therefor - Google Patents

Touch screen terminal and gesture recognition method and system therefor Download PDF

Info

Publication number
WO2015090075A1
WO2015090075A1 PCT/CN2014/084146 CN2014084146W
Authority
WO
WIPO (PCT)
Prior art keywords
sequence
touch screen
direction vector
module
screen terminal
Prior art date
Application number
PCT/CN2014/084146
Other languages
English (en)
French (fr)
Inventor
周旭武
Original Assignee
深圳市汇顶科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市汇顶科技股份有限公司
Publication of WO2015090075A1 publication Critical patent/WO2015090075A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/041Indexing scheme relating to G06F3/041 - G06F3/045
    • G06F2203/04104Multi-touch detection in digitiser, i.e. details about the simultaneous detection of a plurality of touching locations, e.g. multiple fingers or pen and finger
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04808Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means

Definitions

  • The invention belongs to the field of human-computer interaction for touch screen terminals, and in particular relates to a touch screen terminal and a gesture recognition method and system therefor. Background Art
  • a gesture refers to a collection of information of a specific meaning formed by a combination of different gestures of a finger or a palm of a human.
  • gestures can be used to operate, so that the user experience of the application is greatly improved.
  • In the prior art, a touch screen terminal extracts gesture images by an optical method, performs image processing on the captured sequence of images, extracts relevant feature data, and then recognizes the gesture from the feature data.
  • This recognition approach involves a complex algorithm, requires a camera, has a high implementation cost, and achieves a low recognition rate when the touch screen terminal is in a poorly lit environment or in motion. Summary of the Invention
  • The first technical problem to be solved by the present invention is to provide a gesture recognition method for a touch screen terminal, aiming to solve the problems of existing optical gesture recognition on touch screen terminals: high implementation cost and a low recognition rate in poorly lit environments or while the terminal is moving.
  • The present invention is implemented as a gesture recognition method for a touch screen terminal, the method comprising the following steps: acquiring a sequence of coordinate points of a gesture through the touch screen and filtering the coordinate point sequence; extracting feature points from the filtered coordinate point sequence; performing direction vector coding on the feature points to obtain a direction vector sequence; and matching the direction vector sequence against a reference template to identify the symbol corresponding to the gesture.
  • The reference template represents the correspondence between symbols and direction vector sequences.
  • a second technical problem to be solved by the present invention is to provide a gesture recognition system for a touch screen terminal, the system comprising:
  • a preprocessing module configured to acquire a sequence of coordinate points of the gesture through the touch screen, and perform filtering processing on the coordinate point sequence
  • a feature extraction module configured to extract feature points from the sequence of coordinate points after the filtering process by the pre-processing module
  • a direction vector coding module configured to perform direction vector coding on the feature points extracted by the feature extraction module to obtain a direction vector sequence
  • a matching module configured to match the direction vector sequence obtained by the direction vector coding module with a reference template to identify a symbol corresponding to the gesture, where the reference template represents a correspondence between a symbol and a direction vector sequence .
  • a third technical problem to be solved by the present invention is to provide a touch screen terminal including a touch screen, the touch screen terminal further comprising a gesture recognition system of the touch screen terminal, and the system is a gesture recognition system of the touch screen terminal as described above.
  • The gesture recognition method and system for a touch screen terminal proposed by the present invention collect a series of feature points of a gesture through the touch screen, perform direction vector coding on the feature points, and match the coding result against a reference template to obtain the symbol corresponding to the gesture, effectively recognizing the symbol a user's gesture traces on the touch screen.
  • Compared with the prior art, the recognition method is simple to implement and convenient to operate; it requires no camera, which reduces implementation cost, and it is unaffected by ambient light or by movement of the touch screen terminal, so the recognition rate is high.
  • FIG. 1 is a flowchart of the gesture recognition method of a touch screen terminal provided by the present invention;
  • FIG. 2 is a detailed flowchart of the feature point extraction steps of the present invention;
  • Figure 3 is a schematic diagram showing the angle change curve of the sample points of the present invention
  • Figure 4 is a schematic view of the inflection point extracted from Figure 3 using the present invention
  • Figure 5 is a schematic view of extracting feature points from Figure 3 using the present invention.
  • FIG. 6 is a detailed flowchart of the steps of performing direction vector coding on feature points to obtain a direction vector sequence according to the present invention
  • Figure 7 is a schematic diagram of direction vector coding of the present invention.
  • Figure 8 is a schematic diagram of direction vector coding of a set of random feature points using Figure 7;
  • Figure 9 is a detailed flow chart of the steps of the present invention for identifying symbols corresponding to gestures
  • FIG. 10 is a structural diagram of a gesture recognition system of a touch screen terminal provided by the present invention.
  • Figure 11 is a structural diagram of the feature extraction module of Figure 10.
  • Figure 12 is a structural diagram of the direction vector coding module of Figure 10;
  • FIG. 13 is a structural diagram of the matching module of FIG. 10. Detailed Description
  • The present invention provides a gesture recognition method for a touch screen terminal: a series of feature points of a gesture are collected through the touch screen, direction vector coding is performed on the feature points, and the coding result is matched against a reference template to obtain the symbol corresponding to the gesture.
  • FIG. 1 is a flowchart of a gesture recognition method for a touch screen terminal provided by the present invention, including: Step S1: Acquire a sequence of coordinate points of a gesture through a touch screen, and perform filtering processing on the sequence of coordinate points.
  • Preferably, the coordinate point sequence is filtered with a weighted filtering algorithm. For example, suppose the acquired coordinate point sequence is S = u1 u2 … uL, where ui = (xi, yi), 1 ≤ i ≤ L, u1 is the start coordinate, uL is the end coordinate, and L is the number of coordinate points in the sequence.
  • The weighted filtering of the coordinate point sequence can then be expressed as: pj = (1/W) Σ(i = j−k … j+k) ωi·ui, where W = Σ(i = j−k … j+k) ωi, 2k is the sliding-filter window length, ωi is the weight at each coordinate point, and pj is the filtered coordinate point.
  • For ease of implementation, k = 2 can be taken with the weights set to the Pascal's (Yang Hui's) triangle coefficients ω = 1, 4, 6, 4, 1, which gives: pj = (1/16)(uj−2 + 4uj−1 + 6uj + 4uj+1 + uj+2). The filtered coordinate point sequence is then S' = p1 p2 … pL.
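The weighted smoothing step above can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the function name `smooth` and the edge handling (leaving boundary points unfiltered) are assumptions.

```python
def smooth(points, weights=(1, 4, 6, 4, 1)):
    """Weighted sliding-window filter over a coordinate sequence.

    `points` is a list of (x, y) tuples. Interior points are replaced by
    the weighted average of their window; edge points without a full
    window are kept unchanged (an assumption, not stated in the patent).
    """
    k = len(weights) // 2           # window half-length (k = 2 here)
    total = sum(weights)            # normalising factor W (16 for these weights)
    out = list(points)
    for j in range(k, len(points) - k):
        sx = sum(w * points[j + i - k][0] for i, w in enumerate(weights))
        sy = sum(w * points[j + i - k][1] for i, w in enumerate(weights))
        out[j] = (sx / total, sy / total)
    return out
```

With the (1, 4, 6, 4, 1) weights, a straight-line stroke passes through unchanged while jitter is averaged out.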
  • Step S2 Extracting feature points from the sequence of coordinate points after filtering processing.
  • In the present invention, feature points are coordinate points that reflect the skeleton structure of the symbol represented by the gesture.
  • In general, different coordinate points within a stroke carry different curvature information, and feature points usually have the largest angular change. On this basis, as shown in FIG. 2, step S2 may include:
  • Step S21 Calculate an angle between a line segment between the adjacent two coordinate points and a horizontal line in the sequence of coordinate points after the filtering process, to obtain an angle sequence.
  • Step S22 Perform low-pass filtering processing on the angle sequence to make the angle sequence smoother.
  • Step S23: Perform differential processing on the low-pass-filtered angle sequence to obtain a sequence of difference values.
  • Step S24: Identify the difference values in the sequence that are greater than a threshold, and determine the inflection points in the coordinate point sequence from the result.
  • Step S25: Extract as feature points the start point and end point of the coordinate point sequence, the inflection points, and the intermediate points between adjacent inflection points.
  • In step S24, each difference value in the sequence is compared against the threshold to find the coordinate points with the largest angle change, which are determined to be inflection points, shown as the circled coordinate points in FIG. 4.
  • In step S25, the intermediate points between adjacent inflection points are retained, and the start point, end point, inflection points, and intermediate points together serve as the feature points of the gesture, shown as the circled coordinate points in FIG. 5.
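Steps S21–S25 can be sketched as follows. This Python sketch is illustrative and not from the patent: the low-pass filtering of the angle sequence (step S22) is omitted for brevity, and the threshold value and function names are assumptions.

```python
import math

def feature_points(points, threshold=0.6):
    """Return indices of feature points: start, end, inflection points,
    and midpoints between adjacent inflection points (steps S21-S25)."""
    # S21: angle of each segment against the horizontal
    angles = [math.atan2(q[1] - p[1], q[0] - p[0])
              for p, q in zip(points, points[1:])]
    # S23: difference of adjacent angles (S22 low-pass filter omitted)
    diffs = [b - a for a, b in zip(angles, angles[1:])]
    # S24: a large angle change between segment i and i+1 marks point i+1
    inflections = [i + 1 for i, d in enumerate(diffs) if abs(d) > threshold]
    # S25: start, end, inflections, plus midpoints between adjacent inflections
    feats = {0, len(points) - 1, *inflections}
    for a, b in zip(inflections, inflections[1:]):
        feats.add((a + b) // 2)
    return sorted(feats)
```

On an L-shaped stroke, for example, the corner is picked up as the single inflection point between the start and end points.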
  • Step S3 Perform direction vector coding on the feature points to obtain a direction vector sequence.
  • step S3 may include:
  • Step S31: Calculate the abscissa difference and the ordinate difference between adjacent feature points, and express each difference as a direction value.
  • For example, if the abscissa difference is a positive number greater than 0, its direction value may be specified as 1; if it is a negative number less than 0, −1; and if it is 0, the direction value is 0. Likewise, a positive ordinate difference may be given the direction value 1, a negative one −1, and a zero difference the direction value 0.
  • Step S32 Find a direction vector value corresponding to a direction value of each pair of adjacent feature points in the direction vector definition table, and each direction vector value constitutes a direction vector sequence, and the direction vector sequence is a coding result obtained by encoding the direction vector.
  • The direction vector sequence D can be expressed as D = d1 d2 … dL−1, where dl denotes the direction vector between coordinate points pl and pl+1, dl ∈ {0, 1, 2, 3, 4, 5, 6, 7}, and 1 ≤ l ≤ L−1. In the present invention, the value of dl is obtained from the direction vector definition table according to the relative magnitudes of the coordinates of adjacent points.
  • The direction vector definition table represents the correspondence between direction vector values and the direction values of the coordinate points.
  • Typical direction vector values cover the eight directions labeled 0–7 in FIG. 7, and the direction vector definition table may be as shown in Table 1 below:
  • Table 1:
    value  0   1   2   3   4   5   6   7
    ΔX     1   1   0  −1  −1  −1   0   1
    ΔY     0   1   1   1   0  −1  −1  −1
  • In Table 1, ΔX is the abscissa difference expressed as a direction value, and ΔY is the ordinate difference expressed as a direction value.
  • After step S31 expresses the abscissa and ordinate differences as direction values, the direction vector value for each pair of adjacent feature points is obtained by looking it up in Table 1, and the individual direction vector values together form the direction vector sequence. For example, if the feature points extracted in step S2 are distributed as shown in FIG. 8, then after step S31 and a lookup in Table 1, the encoded direction vector sequence is 43221007.
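Table 1 is, in effect, an eight-direction chain code. Below is a minimal Python sketch of steps S31–S32, assuming the axis orientation of FIG. 7 (y increasing upward); the names `DIRECTION_TABLE`, `sign`, and `encode` are illustrative, not from the patent.

```python
# Table 1 from the description: (ΔX sign, ΔY sign) → direction vector value 0-7.
DIRECTION_TABLE = {
    (1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
    (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7,
}

def sign(v):
    """Direction value of a coordinate difference (step S31): 1, -1, or 0."""
    return (v > 0) - (v < 0)

def encode(points):
    """Chain-code a feature-point sequence into a direction vector string (S31-S32).
    Coincident adjacent points are skipped, since Table 1 has no (0, 0) entry."""
    return ''.join(
        str(DIRECTION_TABLE[(sign(q[0] - p[0]), sign(q[1] - p[1]))])
        for p, q in zip(points, points[1:])
        if (p[0], p[1]) != (q[0], q[1]))
```

A square traced right, up, left would encode as "024", matching the layout of FIG. 7.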
  • Step S4 Match the direction vector sequence with the reference template to identify the symbol corresponding to the gesture.
  • step S4 may include:
  • Step S41 Calculate the maximum similarity between the direction vector sequence and the direction vector sequence of each symbol in the reference template by using a dynamic programming algorithm.
  • the reference template characterizes the correspondence between the symbol and the direction vector sequence.
  • Step S42: If the maximum similarity S(Di, DR) is greater than the threshold, i.e. S(Di, DR) > threshold, the character with the maximum similarity is selected as the symbol corresponding to the gesture.
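The matching step can be sketched as follows. The per-symbol similarity follows Table 2 of the description (10 for identical directions, 6 for neighbouring directions modulo 8, 0 otherwise), but the patent does not spell out its exact dynamic programming recurrence, so the alignment below is one plausible formulation rather than the patented algorithm.

```python
def similarity(a, b):
    """Per-symbol similarity from the description's Table 2: 10 on the
    diagonal, 6 for neighbouring directions (mod 8), 0 otherwise."""
    d = abs(a - b) % 8
    return 10 if d == 0 else 6 if d in (1, 7) else 0

def max_similarity(seq, ref):
    """Dynamic-programming alignment score between two direction vector
    sequences (one plausible recurrence; gaps score 0)."""
    n, m = len(seq), len(ref)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(dp[i - 1][j], dp[i][j - 1],
                           dp[i - 1][j - 1] + similarity(seq[i - 1], ref[j - 1]))
    return dp[n][m]
```

As in step S42, a gesture would be accepted only when the best template score exceeds a chosen threshold.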
  • the present invention may further include: Step S5: waking up the application corresponding to the recognized symbol, or performing an operation corresponding to the recognized symbol.
  • Fig. 10 shows the structure of a gesture recognition system of a touch screen terminal provided by the present invention, and only parts related to the present invention are shown for convenience of explanation.
  • the gesture recognition system of the touch screen terminal includes: a preprocessing module 1 configured to acquire a coordinate point sequence of the gesture through the touch screen, and perform filtering processing on the coordinate point sequence; the feature extraction module 2 is configured to filter from the preprocessing module 1 Feature points are extracted from the processed coordinate point sequence; the direction vector coding module 3 is configured to perform direction vector coding on the feature points extracted by the feature extraction module 2, Obtaining a direction vector sequence; the matching module 4 is configured to match the direction vector sequence obtained by the direction vector coding module 3 with the reference template to identify the symbol corresponding to the gesture.
  • the gesture recognition system of the touch screen terminal may further include: an execution module 5 configured to wake up an application corresponding to the symbol recognized by the matching module 4, or perform an operation corresponding to the symbol recognized by the matching module 4.
  • Fig. 11 shows the structure of the feature extraction module 2 of Fig. 10.
  • Specifically, the feature extraction module 2 may include: a first calculation sub-module 21, configured to calculate, in the coordinate point sequence filtered by the preprocessing module 1, the angle between the line segment joining each pair of adjacent coordinate points and the horizontal line, to obtain an angle sequence; a filtering sub-module 22, configured to low-pass filter the angle sequence obtained by the first calculation sub-module 21 to make it smoother; a difference sub-module 23, configured to difference the angle sequence low-pass filtered by the filtering sub-module 22 to obtain a sequence of difference values;
  • an identification sub-module 24, configured to identify the difference values greater than a threshold in the sequence obtained by the difference sub-module 23 and to determine the inflection points in the coordinate point sequence from the result;
  • and an extraction sub-module 25, configured to extract as feature points the start point and end point of the coordinate point sequence, the inflection points determined by the identification sub-module 24, and the intermediate points between adjacent inflection points.
  • Fig. 12 shows the structure of the direction vector encoding module 3 of Fig. 10.
  • Specifically, the direction vector coding module 3 may include: a second calculation sub-module 31, configured to calculate the abscissa difference and the ordinate difference between adjacent feature points and to express each difference as a direction value;
  • and a lookup sub-module 32, configured to look up, in the direction vector definition table, the direction vector value corresponding to the direction values of each pair of adjacent feature points; the direction vector values form the direction vector sequence, which is the coding result of the direction vector coding.
  • Fig. 13 shows the structure of the matching module 4 of Fig. 10.
  • Specifically, the matching module 4 may include: a third calculation sub-module 41, configured to use a dynamic programming algorithm to calculate the maximum similarity between the direction vector sequence and the direction vector sequence of each symbol in the reference template, the reference template representing the correspondence between symbols and direction vector sequences; and a selection sub-module 42, configured to select, when the maximum similarity calculated by the third calculation sub-module 41 is greater than the threshold, the character corresponding to the maximum similarity as the symbol corresponding to the gesture.
  • the present invention also provides a touch screen terminal, including a gesture recognition system of the touch screen terminal as described above, which is not described herein.
  • The gesture recognition method and system for a touch screen terminal proposed by the present invention collect a series of feature points of a gesture through the touch screen, perform direction vector coding on the feature points, and match the coding result against a reference template to obtain the symbol corresponding to the gesture, effectively recognizing the symbol a user's gesture traces on the touch screen.
  • Compared with the prior art, the recognition method is simple to implement and convenient to operate; it requires no camera, which reduces implementation cost, and it is unaffected by ambient light or by movement of the touch screen terminal, so the recognition rate is high.
  • After a symbol is recognized, the recognition result can be used to wake the system while the touch screen is dark and to invoke the corresponding application, for example a frequently used call function. This avoids frequent use of the power button on the touch screen terminal, reduces the steps needed to launch an application, makes operation convenient, and extends the service life of the terminal.
  • Meanwhile, because the amount of data that can be used under dark-screen conditions is limited to keep power consumption low, the present invention extracts relatively few feature points and performs symbol recognition from the symbol skeleton they form, making it particularly suitable for efficient gesture recognition at low power.
  • Of course, in practice the recognized symbols can also be used to perform other operations or to derive further applications, which are not enumerated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The present invention belongs to the field of human-computer interaction for touch screen terminals, and provides a touch screen terminal and a gesture recognition method and system therefor. The method and system collect a series of feature points of a gesture through the touch screen, perform direction vector coding on the feature points, and match the coding result against a reference template to obtain the symbol corresponding to the gesture, effectively recognizing the symbol a user's gesture traces on the touch screen. Compared with the prior art, this recognition method is simple to implement and convenient to operate; it requires no camera, which reduces implementation cost, and it is unaffected by ambient light or by movement of the touch screen terminal, so the recognition rate is high.

Description

DESCRIPTION — Touch screen terminal and gesture recognition method and system therefor. Technical Field
The present invention belongs to the field of human-computer interaction for touch screen terminals, and in particular relates to a touch screen terminal and a gesture recognition method and system therefor. Background Art
In the field of human-computer interaction, a gesture is a collection of information with a specific meaning formed by combinations of different postures of a person's fingers or palm. On current touch screen terminals, gestures can be used for operation, greatly improving the user experience of applications.
In the prior art, a touch screen terminal extracts gesture images by an optical method: the captured sequence of images is processed, relevant feature data are extracted, and the gesture is then recognized from the feature data. However, this recognition approach involves a complex algorithm, requires a camera, has a high implementation cost, and achieves a low recognition rate when the touch screen terminal is in a poorly lit environment or in motion. Summary of the Invention
The first technical problem to be solved by the present invention is to provide a gesture recognition method for a touch screen terminal, aiming to solve the problems of existing optical gesture recognition on touch screen terminals: high implementation cost and a low recognition rate in poorly lit environments or while moving.
The present invention is implemented as a gesture recognition method for a touch screen terminal, the method comprising the following steps:
acquiring a sequence of coordinate points of a gesture through the touch screen, and filtering the coordinate point sequence; extracting feature points from the filtered coordinate point sequence;
performing direction vector coding on the feature points to obtain a direction vector sequence;
matching the direction vector sequence against a reference template to identify the symbol corresponding to the gesture, the reference template representing the correspondence between symbols and direction vector sequences.
The second technical problem to be solved by the present invention is to provide a gesture recognition system for a touch screen terminal, the system comprising:
a preprocessing module, configured to acquire a sequence of coordinate points of a gesture through the touch screen and to filter the coordinate point sequence;
a feature extraction module, configured to extract feature points from the coordinate point sequence filtered by the preprocessing module;
a direction vector coding module, configured to perform direction vector coding on the feature points extracted by the feature extraction module to obtain a direction vector sequence;
a matching module, configured to match the direction vector sequence obtained by the direction vector coding module against a reference template to identify the symbol corresponding to the gesture, the reference template representing the correspondence between symbols and direction vector sequences.
The third technical problem to be solved by the present invention is to provide a touch screen terminal comprising a touch screen, the touch screen terminal further comprising a gesture recognition system of a touch screen terminal as described above.
The gesture recognition method and system proposed by the present invention collect a series of feature points of a gesture through the touch screen, perform direction vector coding on the feature points, and match the coding result against a reference template to obtain the symbol corresponding to the gesture, effectively recognizing the symbol a user's gesture traces on the touch screen. Compared with the prior art, the recognition method is simple to implement and convenient to operate; it requires no camera, which reduces implementation cost, and it is unaffected by ambient light or by movement of the touch screen terminal, so the recognition rate is high. Brief Description of the Drawings
FIG. 1 is a flowchart of the gesture recognition method of a touch screen terminal provided by the present invention;
FIG. 2 is a detailed flowchart of the feature point extraction steps of the present invention;
FIG. 3 is a schematic diagram of the angle change curve of sample points according to the present invention; FIG. 4 is a schematic diagram of the inflection points extracted from FIG. 3 by the present invention;
FIG. 5 is a schematic diagram of the feature points extracted from FIG. 3 by the present invention;
FIG. 6 is a detailed flowchart of the step of performing direction vector coding on the feature points to obtain a direction vector sequence according to the present invention;
FIG. 7 is a schematic diagram of direction vector coding according to the present invention;
FIG. 8 is a schematic diagram of direction vector coding of a set of random feature points using FIG. 7;
FIG. 9 is a detailed flowchart of the step of identifying the symbol corresponding to the gesture according to the present invention;
FIG. 10 is a structural diagram of the gesture recognition system of a touch screen terminal provided by the present invention;
FIG. 11 is a structural diagram of the feature extraction module of FIG. 10;
FIG. 12 is a structural diagram of the direction vector coding module of FIG. 10;
FIG. 13 is a structural diagram of the matching module of FIG. 10. Detailed Description
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.
To solve the problems of the existing optical gesture recognition approach, the present invention proposes a gesture recognition method for a touch screen terminal: a series of feature points of a gesture are collected through the touch screen, direction vector coding is performed on the feature points, and the coding result is matched against a reference template to obtain the symbol corresponding to the gesture.
FIG. 1 shows the flow of the gesture recognition method of a touch screen terminal provided by the present invention, comprising: Step S1: Acquire a sequence of coordinate points of a gesture through the touch screen, and filter the coordinate point sequence.
Because handwritten data on a touch screen tends to jitter, in the present invention the coordinate point sequence must be smoothed after acquisition. Preferably, a weighted filtering algorithm is used. For example, suppose the acquired coordinate point sequence is S = u1 u2 … uL, where ui = (xi, yi), 1 ≤ i ≤ L, u1 is the start coordinate, uL is the end coordinate, and L is the number of coordinate points in the sequence. The weighted filtering of the coordinate point sequence can then be expressed as:
pj = (1/W) Σ(i = j−k … j+k) ωi·ui, where W = Σ(i = j−k … j+k) ωi, 2k is the sliding-filter window length, ωi is the weight at each coordinate point, and pj is the filtered coordinate point. For ease of implementation, k = 2 can be taken with the weights set to the Pascal's (Yang Hui's) triangle coefficients ω = 1, 4, 6, 4, 1, which gives:
pj = (1/16)(uj−2 + 4uj−1 + 6uj + 4uj+1 + uj+2)
The filtered coordinate point sequence is then S' = p1 p2 … pL. Step S2: Extract feature points from the filtered coordinate point sequence.
In the present invention, feature points are coordinate points that reflect the skeleton structure of the symbol represented by the gesture. In general, different coordinate points within a stroke carry different curvature information, and feature points usually have the largest angular change. On this basis, as shown in FIG. 2, step S2 may include:
Step S21: Calculate, in the filtered coordinate point sequence, the angle between the line segment joining each pair of adjacent coordinate points and the horizontal line, to obtain an angle sequence.
Step S22: Low-pass filter the angle sequence to make it smoother.
Step S23: Difference the low-pass-filtered angle sequence to obtain a sequence of difference values. Step S24: Identify the difference values in the sequence that are greater than a threshold, and determine the inflection points in the coordinate point sequence from the result.
Step S25: Extract as feature points the start point and end point of the coordinate point sequence, the inflection points, and the intermediate points between adjacent inflection points.
For example, let the angle between the horizontal line and the line segment formed by any two adjacent coordinate points pi, pi+1 in the sequence be Ωi; step S21 then yields the angle sequence Ω = Ω1 Ω2 … ΩL−1. The angle sequence Ω is then low-pass filtered in step S22. Next, in step S23, the filtered angle sequence is differenced to obtain the angle change between adjacent angles, as shown in FIG. 3; writing the difference sequence as M, this step can be expressed as M = diff(Ω), where diff denotes differencing of the angle sequence. Then, in step S24, each difference value ΔΩ in the sequence is compared with the threshold to find the coordinate points with the largest angle change, which are determined to be inflection points, shown as the circled coordinate points in FIG. 4. Finally, in step S25, the intermediate points between adjacent inflection points are retained, and the start point, end point, inflection points, and intermediate points together serve as the feature points of the gesture, shown as the circled coordinate points in FIG. 5.
Step S3: Perform direction vector coding on the feature points to obtain a direction vector sequence.
Further, as shown in FIG. 6, step S3 may include:
Step S31: Calculate the abscissa difference and the ordinate difference between adjacent feature points, and express each difference as a direction value.
For example, if the abscissa difference is a positive number greater than 0, its direction value may be specified as 1; if it is a negative number less than 0, −1; and if it is 0, the direction value is 0. Likewise, a positive ordinate difference may be given the direction value 1, a negative one −1, and a zero difference the direction value 0.
Step S32: Look up, in the direction vector definition table, the direction vector value corresponding to the direction values of each pair of adjacent feature points; the direction vector values form the direction vector sequence, which is the coding result of the direction vector coding.
The direction vector sequence D can be expressed as D = d1 d2 … dL−1, where dl denotes the direction vector between coordinate points pl and pl+1, dl ∈ {0, 1, 2, 3, 4, 5, 6, 7}, and 1 ≤ l ≤ L−1. In the present invention, the value of dl is obtained from the direction vector definition table according to the relative magnitudes of the coordinates of adjacent points.
The direction vector definition table represents the correspondence between direction vector values and the direction values of the coordinate points. For example, typical direction vector values cover the eight directions labeled 0–7 in FIG. 7, and the definition table may be as shown in Table 1 below:
Table 1:
value  0   1   2   3   4   5   6   7
ΔX     1   1   0  −1  −1  −1   0   1
ΔY     0   1   1   1   0  −1  −1  −1
In Table 1, ΔX is the abscissa difference expressed as a direction value and ΔY is the ordinate difference expressed as a direction value. After step S31 expresses the two differences as direction values, the direction vector value for each pair of adjacent feature points is obtained by looking it up in Table 1, and the individual values together form the direction vector sequence. For example, if the feature points extracted in step S2 are distributed as shown in FIG. 8, then after step S31 and a lookup in Table 1, the encoded direction vector sequence is 43221007.
Step S4: Match the direction vector sequence against the reference template to identify the symbol corresponding to the gesture.
Further, as shown in FIG. 9, step S4 may include:
Step S41: Use a dynamic programming algorithm to calculate the maximum similarity between the direction vector sequence and the direction vector sequence of each symbol in the reference template, the reference template representing the correspondence between symbols and direction vector sequences.
In the present invention, the similarity is defined as in Table 2 below (row and column indices are direction vector values):
Table 2:
    0   1   2   3   4   5   6   7
0  10   6   0   0   0   0   0   6
1   6  10   6   0   0   0   0   0
2   0   6  10   6   0   0   0   0
3   0   0   6  10   6   0   0   0
4   0   0   0   6  10   6   0   0
5   0   0   0   0   6  10   6   0
6   0   0   0   0   0   6  10   6
7   6   0   0   0   0   0   6  10
If the direction vector sequence of the symbol Ci corresponding to the gesture is Di, and the direction vector sequence of the character CR in the reference template being compared is DR, the dynamic programming algorithm computes the maximum similarity S(Di, DR) between the two sequences. Step S42: If the maximum similarity S(Di, DR) is greater than the threshold, i.e. S(Di, DR) > threshold, the character with the maximum similarity is selected as the symbol corresponding to the gesture.
After the symbol corresponding to the gesture is obtained through step S4, the present invention may further include: Step S5: Wake the application corresponding to the recognized symbol, or perform the operation corresponding to the recognized symbol.
FIG. 10 shows the structure of the gesture recognition system of a touch screen terminal provided by the present invention; for ease of description, only the parts related to the present invention are shown.
The gesture recognition system provided by the present invention comprises: a preprocessing module 1, configured to acquire a sequence of coordinate points of a gesture through the touch screen and to filter the coordinate point sequence; a feature extraction module 2, configured to extract feature points from the coordinate point sequence filtered by the preprocessing module 1; a direction vector coding module 3, configured to perform direction vector coding on the feature points extracted by the feature extraction module 2 to obtain a direction vector sequence; and a matching module 4, configured to match the direction vector sequence obtained by the direction vector coding module 3 against a reference template to identify the symbol corresponding to the gesture.
The gesture recognition system provided by the present invention may further comprise: an execution module 5, configured to wake the application corresponding to the symbol recognized by the matching module 4, or to perform the operation corresponding to the symbol recognized by the matching module 4.
FIG. 11 shows the structure of the feature extraction module 2 of FIG. 10.
Specifically, the feature extraction module 2 may comprise: a first calculation sub-module 21, configured to calculate, in the coordinate point sequence filtered by the preprocessing module 1, the angle between the line segment joining each pair of adjacent coordinate points and the horizontal line, to obtain an angle sequence; a filtering sub-module 22, configured to low-pass filter the angle sequence obtained by the first calculation sub-module 21 to make it smoother; a difference sub-module 23, configured to difference the angle sequence low-pass filtered by the filtering sub-module 22 to obtain a sequence of difference values; an identification sub-module 24, configured to identify the difference values greater than a threshold in the sequence obtained by the difference sub-module 23 and to determine the inflection points in the coordinate point sequence from the result; and an extraction sub-module 25, configured to extract as feature points the start point and end point of the coordinate point sequence, the inflection points determined by the identification sub-module 24, and the intermediate points between adjacent inflection points.
FIG. 12 shows the structure of the direction vector coding module 3 of FIG. 10.
Specifically, the direction vector coding module 3 may comprise: a second calculation sub-module 31, configured to calculate the abscissa difference and the ordinate difference between adjacent feature points and to express each difference as a direction value; and a lookup sub-module 32, configured to look up, in the direction vector definition table, the direction vector value corresponding to the direction values of each pair of adjacent feature points; the direction vector values form the direction vector sequence, which is the coding result of the direction vector coding.
FIG. 13 shows the structure of the matching module 4 of FIG. 10.
Specifically, the matching module 4 may comprise: a third calculation sub-module 41, configured to use a dynamic programming algorithm to calculate the maximum similarity between the direction vector sequence and the direction vector sequence of each symbol in the reference template, the reference template representing the correspondence between symbols and direction vector sequences; and a selection sub-module 42, configured to select, when the maximum similarity calculated by the third calculation sub-module 41 is greater than the threshold, the character corresponding to the maximum similarity as the symbol corresponding to the gesture.
In addition, the present invention also provides a touch screen terminal comprising a gesture recognition system of a touch screen terminal as described above, which is not described further here.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be completed by a program controlling the relevant hardware, and the program can be stored in a computer-readable storage medium such as a ROM/RAM, magnetic disk, or optical disc.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection. Industrial Applicability
The gesture recognition method and system proposed by the present invention collect a series of feature points of a gesture through the touch screen, perform direction vector coding on the feature points, and match the coding result against a reference template to obtain the symbol corresponding to the gesture, effectively recognizing the symbol a user's gesture traces on the touch screen. Compared with the prior art, the recognition method is simple to implement and convenient to operate; it requires no camera, which reduces implementation cost, and it is unaffected by ambient light or by movement of the touch screen terminal, so the recognition rate is high. In addition, after a symbol is recognized, the result can be used to wake the system while the touch screen is dark and to invoke the corresponding application, for example a frequently used call function, avoiding frequent use of the power button, reducing the steps needed to launch an application, making operation convenient, and extending the service life of the terminal. Meanwhile, because the amount of data that can be used under dark-screen conditions is limited to keep power consumption low, the present invention extracts relatively few feature points and performs symbol recognition from the symbol skeleton they form, making it particularly suitable for efficient gesture recognition at low power. Of course, in practice the recognized symbols can also be used to perform other operations or to derive further applications, which are not enumerated one by one here.

Claims

CLAIMS
1. A gesture recognition method for a touch screen terminal, the method comprising the following steps:
acquiring a sequence of coordinate points of a gesture through the touch screen, and filtering the coordinate point sequence; extracting feature points from the filtered coordinate point sequence;
performing direction vector coding on the feature points to obtain a direction vector sequence;
matching the direction vector sequence against a reference template to identify the symbol corresponding to the gesture, the reference template representing the correspondence between symbols and direction vector sequences.
2. The gesture recognition method for a touch screen terminal according to claim 1, wherein, after the step of matching the direction vector sequence against a reference template to identify the symbol corresponding to the gesture, the method further comprises the following step:
waking the application corresponding to the recognized symbol, or performing the operation corresponding to the recognized symbol.
3. The gesture recognition method for a touch screen terminal according to claim 1 or 2, wherein the step of extracting feature points from the filtered coordinate point sequence comprises the following steps:
calculating, in the filtered coordinate point sequence, the angle between the line segment joining each pair of adjacent coordinate points and the horizontal line, to obtain an angle sequence;
low-pass filtering the angle sequence; identifying the difference values in the difference value sequence that are greater than a threshold, and determining the inflection points in the coordinate point sequence from the result;
extracting as feature points the start point and end point of the coordinate point sequence, the inflection points, and the intermediate points between adjacent inflection points.
4. The gesture recognition method for a touch screen terminal according to claim 1 or 2, wherein the step of performing direction vector coding on the feature points to obtain a direction vector sequence comprises the following steps:
calculating the abscissa difference and the ordinate difference between adjacent feature points, and expressing the abscissa difference and the ordinate difference as direction values; looking up, in a direction vector definition table, the direction vector value corresponding to the direction values of each pair of adjacent feature points, the direction vector values forming a direction vector sequence.
5. The gesture recognition method for a touch screen terminal according to claim 1 or 2, wherein the step of matching the direction vector sequence against a reference template to identify the symbol corresponding to the gesture comprises the following steps:
using a dynamic programming algorithm to calculate the maximum similarity between the direction vector sequence and the direction vector sequence of each symbol in the reference template;
if the maximum similarity is greater than a threshold, selecting the character corresponding to the maximum similarity as the symbol corresponding to the gesture.
6. A gesture recognition system for a touch screen terminal, the system comprising:
a preprocessing module, configured to acquire a sequence of coordinate points of a gesture through the touch screen and to filter the coordinate point sequence;
a feature extraction module, configured to extract feature points from the coordinate point sequence filtered by the preprocessing module;
a direction vector coding module, configured to perform direction vector coding on the feature points extracted by the feature extraction module to obtain a direction vector sequence;
a matching module, configured to match the direction vector sequence obtained by the direction vector coding module against a reference template to identify the symbol corresponding to the gesture, the reference template representing the correspondence between symbols and direction vector sequences.
7. The gesture recognition system for a touch screen terminal according to claim 6, wherein the system further comprises:
an execution module, configured to wake the application corresponding to the symbol recognized by the matching module, or to perform the operation corresponding to the symbol recognized by the matching module.
8. The gesture recognition system for a touch screen terminal according to claim 6, wherein the feature extraction module comprises:
a first calculation sub-module, configured to calculate, in the coordinate point sequence filtered by the preprocessing module, the angle between the line segment joining each pair of adjacent coordinate points and the horizontal line, to obtain an angle sequence; a filtering sub-module, configured to low-pass filter the angle sequence obtained by the first calculation sub-module;
a difference sub-module, configured to difference the angle sequence low-pass filtered by the filtering sub-module to obtain a sequence of difference values;
an identification sub-module, configured to identify the difference values greater than a threshold in the sequence obtained by the difference sub-module and to determine the inflection points in the coordinate point sequence from the result;
an extraction sub-module, configured to extract as feature points the start point and end point of the coordinate point sequence, the inflection points determined by the identification sub-module, and the intermediate points between adjacent inflection points.
9. The gesture recognition system for a touch screen terminal according to claim 6, wherein the matching module comprises:
a third calculation sub-module, configured to use a dynamic programming algorithm to calculate the maximum similarity between the direction vector sequence and the direction vector sequence of each symbol in the reference template;
a selection sub-module, configured to select, when the maximum similarity calculated by the third calculation sub-module is greater than a threshold, the character corresponding to the maximum similarity as the symbol corresponding to the gesture.
10. A touch screen terminal comprising a touch screen, wherein the touch screen terminal further comprises a gesture recognition system of a touch screen terminal, the system being the gesture recognition system of a touch screen terminal according to any one of claims 6 to 9.
PCT/CN2014/084146 2013-12-19 2014-08-12 一种触摸屏终端及其手势识别方法、*** WO2015090075A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310700794.8A CN103677642A (zh) 2013-12-19 2013-12-19 一种触摸屏终端及其手势识别方法、***
CN201310700794.8 2013-12-19

Publications (1)

Publication Number Publication Date
WO2015090075A1 true WO2015090075A1 (zh) 2015-06-25

Family

ID=50315357

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/084146 WO2015090075A1 (zh) 2013-12-19 2014-08-12 一种触摸屏终端及其手势识别方法、***

Country Status (2)

Country Link
CN (1) CN103677642A (zh)
WO (1) WO2015090075A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103677642A (zh) * 2013-12-19 2014-03-26 深圳市汇顶科技股份有限公司 一种触摸屏终端及其手势识别方法、***
CN103995665A (zh) * 2014-04-14 2014-08-20 深圳市汇顶科技股份有限公司 移动终端及其在待机状态进入应用程序的实现方法、***
CN104460999B (zh) * 2014-11-28 2017-07-28 广东欧珀移动通信有限公司 一种带拐点的手势识别方法及装置
CN105138115B (zh) * 2015-07-28 2018-12-07 Tcl移动通信科技(宁波)有限公司 一种基于触摸屏的移动终端唤醒方法及***
CN105550559A (zh) * 2015-12-03 2016-05-04 深圳市汇顶科技股份有限公司 手势解锁方法、装置及移动终端
CN108920077B (zh) * 2018-06-27 2021-07-23 青岛清原精准农业科技有限公司 基于动态手势库识别的化学结构式绘制方法
CN110109551B (zh) * 2019-05-17 2021-02-23 中国科学院电子学研究所 手势识别方法、装置、设备及存储介质
CN112947836A (zh) * 2019-12-11 2021-06-11 北京集创北方科技股份有限公司 基于拐点特征的手势识别方法、***、存储介质、触屏设备
CN113093972A (zh) * 2019-12-23 2021-07-09 北京集创北方科技股份有限公司 手势识别方法、***、存储介质、触屏设备
CN111459395A (zh) * 2020-03-30 2020-07-28 北京集创北方科技股份有限公司 手势识别方法、***、存储介质、人机交互设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010132075A1 (en) * 2009-05-11 2010-11-18 Tegic Communications, Inc. Touch screen and graphical user interface
CN103455265A (zh) * 2012-06-01 2013-12-18 腾讯科技(深圳)有限公司 受控设备的控制方法和***、移动终端和受控设备
US20140053113A1 (en) * 2012-08-15 2014-02-20 Prss Holding BV Processing user input pertaining to content movement
CN103677642A (zh) * 2013-12-19 2014-03-26 深圳市汇顶科技股份有限公司 一种触摸屏终端及其手势识别方法、***

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5347477A (en) * 1992-01-28 1994-09-13 Jack Lee Pen-based form computer
JPH0612493A (ja) * 1992-06-25 1994-01-21 Hitachi Ltd ジェスチャ認識方法およびユーザインタフェース方法
CN101339487A (zh) * 2008-08-29 2009-01-07 飞图科技(北京)有限公司 一种基于手持设备的依靠快捷图形识别调用功能的方法
CN102298486A (zh) * 2010-06-22 2011-12-28 广东国笔科技股份有限公司 一种基于触摸屏的快速调用***及方法
CN103034429A (zh) * 2011-10-10 2013-04-10 北京千橡网景科技发展有限公司 用于触摸屏的身份验证方法和装置
CN103442114B (zh) * 2013-08-16 2015-10-21 中南大学 一种基于动态手势的身份认证方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010132075A1 (en) * 2009-05-11 2010-11-18 Tegic Communications, Inc. Touch screen and graphical user interface
CN103455265A (zh) * 2012-06-01 2013-12-18 腾讯科技(深圳)有限公司 受控设备的控制方法和***、移动终端和受控设备
US20140053113A1 (en) * 2012-08-15 2014-02-20 Prss Holding BV Processing user input pertaining to content movement
CN103677642A (zh) * 2013-12-19 2014-03-26 深圳市汇顶科技股份有限公司 一种触摸屏终端及其手势识别方法、***

Also Published As

Publication number Publication date
CN103677642A (zh) 2014-03-26

Similar Documents

Publication Publication Date Title
WO2015090075A1 (zh) 一种触摸屏终端及其手势识别方法、***
EP3278201B1 (en) Authenticating a user and launching an application on a single intentional user gesture
US8180160B2 (en) Method for character recognition
WO2015158116A1 (zh) 移动终端及其在待机状态进入应用程序的实现方法、***
WO2017092296A1 (zh) 手势解锁方法、装置及移动终端
CN103294996A (zh) 一种3d手势识别方法
WO2013075466A1 (zh) 一种基于图像传感模块的字符输入方法、装置及终端
CN106778456B (zh) 一种手写输入的优化方法及装置
CN111414837A (zh) 手势识别方法、装置、计算机设备及存储介质
CN108781252B (zh) 一种图像拍摄方法及装置
WO2016107317A1 (zh) 输入法光标操作方法和装置
WO2017005207A1 (zh) 一种输入方法、输入装置、服务器和输入***
Premaratne et al. Centroid tracking based dynamic hand gesture recognition using discrete Hidden Markov Models
KR101559502B1 (ko) 실시간 손 포즈 인식을 통한 비접촉식 입력 인터페이스 방법 및 기록 매체
CN112818909A (zh) 图像更新方法、装置、电子设备及计算机可读介质
CN114022887B (zh) 文本识别模型训练及文本识别方法、装置、电子设备
Khurana et al. Static hand gestures recognition system using shape based features
CN107450717B (zh) 一种信息处理方法及穿戴式设备
CN102929394B (zh) 一种基于手势识别的盲文输入法
CN103870071A (zh) 一种触摸源识别方法及***
CN106127222A (zh) 一种基于视觉的字符串相似度计算方法及相似性判断方法
Vivek Veeriah et al. Robust hand gesture recognition algorithm for simple mouse control
KR20220038477A (ko) 텍스트 라인 추출
CN111492407B (zh) 用于绘图美化的***和方法
CN106778568B (zh) 基于web页面的验证码的处理方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14871847

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14871847

Country of ref document: EP

Kind code of ref document: A1