WO2017114002A1 - One-dimensional handwritten text input device and one-dimensional handwritten text input method - Google Patents

One-dimensional handwritten text input device and one-dimensional handwritten text input method

Info

Publication number
WO2017114002A1
WO2017114002A1 (application PCT/CN2016/105694; CN 2016105694 W)
Authority
WO
WIPO (PCT)
Prior art keywords
character
dimensional
stroke
gesture
user
Prior art date
Application number
PCT/CN2016/105694
Other languages
English (en)
French (fr)
Inventor
喻纯
孙科
钟鸣远
李心成
史元春
Original Assignee
清华大学 (Tsinghua University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学 (Tsinghua University)
Publication of WO2017114002A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02: Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023: Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233: Character input methods
    • G06F3/0236: Character input methods using selection techniques to select from displayed items
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10: Character recognition
    • G06V30/32: Digital ink

Definitions

  • the present invention generally relates to handwritten text input techniques, and more particularly to one-dimensional handwritten text input devices and methods.
  • With the rapid development of smart devices, the need for text input on smart devices has become more pressing. However, smart devices are often limited by their physical size, so the input interface is constrained; text input methods based on physical keyboards or soft keyboards are no longer applicable, and text input on these devices becomes difficult. Examples of such smart devices are smart watches, smart glasses, and smart bracelets.
  • Text input methods can be roughly divided into selection-based methods and handwriting-based methods.
  • A selection-based method provides, for example, a physical or virtual keyboard in which each key represents a letter or number; selecting a key (e.g., by tapping it) enters the corresponding letter, and selecting several letters in sequence forms a word.
  • A handwriting-based method generally simulates, by drawing, the shape of a character (or word) or a defined gesture corresponding to that character (or word), and the intelligent computing device recognizes the input text.
  • The present invention is directed to handwriting-based input, and more particularly to handwriting input on a one-dimensional interface.
  • The present invention contemplates providing methods and apparatus for handwriting input and recognition on a restricted input interface, such as a one-dimensional input interface.
  • Examples of such one-dimensional input interfaces include the temple of a pair of smart glasses along its length, the side frame of a smartphone, the side of a smart bracelet, and the like.
  • A one-dimensional handwritten character input device may include: a user input interface on which a user manually makes a one-dimensional character gesture in chronological order, using a body part or a drawing tool in contact with the interface, the gesture being basically a reciprocating stroke along a straight line;
  • a detecting unit configured to detect the one-dimensional character gesture made by the user on the user input interface and convert it into a one-dimensional signal, the one-dimensional signal being a signal having a value in only one dimension of the coordinate system;
  • a character template database configured to store a template for each character, the template of each character corresponding to a one-dimensional stroke, which is a set of one or more sub-strokes in a predetermined order, each sub-stroke being a line segment substantially on a straight line; and
  • an identification unit configured to receive the one-dimensional signal from the detecting unit, convert it into a one-dimensional stroke to be processed, and, based on the one-dimensional stroke to be processed and the templates stored in the character template database, recognize the one-dimensional stroke as the corresponding character.
  • The one-dimensional handwritten text input device may further include a display unit, wherein at least one character template stored in the character template database corresponds to a plurality of characters. When the processor recognizes that a one-dimensional stroke corresponds to a plurality of characters as character candidates, the processor causes the plurality of character candidates to be displayed on the display, with one of them highlighted at any one time.
  • In response to the detecting unit detecting a selection gesture made by the user, the processor selects the currently highlighted character candidate as the character recognition result; in response to the detecting unit detecting a movement gesture made by the user, the processor switches which character candidate is highlighted.
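The highlight-and-select interaction described above can be sketched as a small state machine. This is an illustrative sketch only; the class and method names are hypothetical and not part of the disclosure:

```python
class CandidateSelector:
    """One candidate is highlighted at a time; a movement gesture switches
    the highlight, and a selection gesture commits the highlighted one."""

    def __init__(self, candidates):
        if not candidates:
            raise ValueError("need at least one candidate")
        self.candidates = list(candidates)
        self.index = 0  # index of the currently highlighted candidate

    @property
    def highlighted(self):
        return self.candidates[self.index]

    def on_move_gesture(self, direction):
        """Movement gesture: switch the highlight (+1 / -1 along the interface)."""
        self.index = (self.index + direction) % len(self.candidates)
        return self.highlighted

    def on_select_gesture(self):
        """Selection gesture: commit the highlighted candidate as the result."""
        return self.highlighted

# One template shared by several characters (cf. q and y in the disclosure):
sel = CandidateSelector(["q", "y"])
sel.on_move_gesture(+1)          # highlight moves from "q" to "y"
print(sel.on_select_gesture())   # prints "y"
```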
  • In response to the detecting unit detecting a character separation gesture made by the user, the processor may separate successive one-dimensional character gestures, ending the input of the previous character and preparing to receive input of the next character.
  • The character separation gesture may be a pause gesture, i.e., the user's finger or drawing tool pausing on the user input interface for longer than a predetermined threshold.
  • The so-called one-dimensional character gesture may correspond to a single stroke, a single continuous stroke during which the user's finger or drawing tool does not leave the user input interface.
  • The sub-strokes of each character's template stroke may have a length and a direction, the length being one of two predetermined lengths.
  • The one-dimensional character gesture can resemble the pen movement of actually handwriting the corresponding two-dimensional character.
  • At least one of the character templates stored in the character template database may correspond to a plurality of characters.
  • The character may first be rotated by 90 degrees, and the two-dimensional handwriting gesture is then mapped to the one-dimensional character gesture.
  • The processor converting the one-dimensional signal into a one-dimensional stroke to be processed may include: identifying inflection points, an inflection point being a point at which the direction of travel changes over time during stroke formation, and including the start point and end point of the stroke; removing inflection points whose distance from the preceding inflection point is less than a predetermined pixel length; dividing the stroke into line segments based on the remaining inflection points; removing head and tail line segments whose length is less than a predetermined threshold; and normalizing the remaining line segments to obtain a set of normalized line segments as the sequence of sub-strokes in chronological order.
  • Recognizing the one-dimensional stroke as the corresponding character, based on the one-dimensional stroke to be processed and the templates stored in the character template database, may include: determining the number of sub-strokes included in the one-dimensional stroke; searching the character template database to determine the templates whose number of sub-strokes equals the determined number of sub-strokes included in the one-dimensional stroke; for each determined template, calculating, in composition order, the probability that each sub-stroke of the template is presented as the corresponding sub-stroke of the one-dimensional stroke, thereby calculating the probability that the one-dimensional stroke corresponds to that template; and recognizing the one-dimensional stroke as the corresponding character based on these probabilities.
  • The one-dimensional handwritten character input device may further include a display unit configured to display a predetermined number of character candidates in visible form when there are multiple candidates. In response to detecting that the user makes a one-dimensional character gesture end determination gesture, which indicates the end of the one-dimensional character gesture, the display shows the character candidates determined for that gesture. In response to detecting that the user makes a character selection gesture, the currently highlighted candidate character is selected and displayed in the area showing the input text; in response to detecting that the user makes a candidate character switching gesture, the highlight switches to another candidate character, which is then displayed in highlighted form.
  • The candidate character switching gesture may be a movement of the finger in the direction from the currently highlighted candidate character toward the candidate character to be selected, and the candidate characters may be displayed from the first boundary to the second boundary of the one-dimensional character gesture area in descending order of probability.
  • The one-dimensional character gesture end determination gesture may be keeping the finger on the user input interface for a predetermined time; the character selection gesture may be lifting the finger from the user input interface; and the candidate character switching gesture may be moving the finger a predetermined distance.
  • The one-dimensional handwritten character input device may further include a language model database storing information indicating a probability distribution over characters, which assigns a probability to each word composed of characters. In the character recognition state, the processor switches between a character input mode and a word input mode. In the word input mode, the processor consults the language model database to compute, in real time, the probability of each specific vocabulary word given the one-dimensional stroke sequence entered so far, sorts the candidate words by that probability, and displays them on the display unit for the user to select.
  • The graphical user interface of the display unit includes three regions: a text region for displaying the input text; a character region for displaying the recognized character; and a word region for displaying recognized word candidates.
  • The height of the text region may be mapped to the full length of the one-dimensional direction of the user input interface, and the horizontal direction of the text region may represent a time axis; a one-dimensional character gesture is then drawn along the time axis in the text region to provide two-dimensional visual feedback of the one-dimensional character gesture.
  • the one-dimensional handwritten text input device may be a smart wearable device.
  • A one-dimensional handwritten text input method is performed by a one-dimensional handwritten text input device that may include a user input interface and a character template database. The method includes: in response to the user manually making a one-dimensional character gesture in chronological order with a finger or drawing tool in contact with the user input interface, detecting the one-dimensional character gesture made on the user input interface and converting it into a one-dimensional signal, the one-dimensional signal being a signal having a value in only one dimension of the coordinate system, and the so-called one-dimensional character gesture being basically a reciprocating swipe along a straight line; receiving the one-dimensional signal from the detecting unit and converting it into a one-dimensional stroke to be processed; and recognizing the one-dimensional stroke as the corresponding character based on the one-dimensional stroke to be processed and the templates stored in the character template database.
  • In the one-dimensional handwritten character input method, at least one character template stored in the character template database corresponds to a plurality of characters. When a one-dimensional stroke is recognized as corresponding to a plurality of characters as character candidates, processing is performed so that the plurality of character candidates are displayed on the display, with one of them highlighted at any one time. In response to the detecting unit detecting a selection gesture made by the user, the currently highlighted character candidate is selected as the character recognition result; in response to the detecting unit detecting a movement gesture made by the user, the highlighted character candidate is switched.
  • In response to the detecting unit detecting a character separation gesture made by the user, successive one-dimensional character gestures are separated, ending the input of the previous character and preparing to receive input of the next character.
  • The so-called one-dimensional character gesture may correspond to a single stroke, a single continuous stroke during which the user's finger or drawing tool does not leave the user input interface.
  • In the one-dimensional handwritten text input method, the sub-strokes of each character's template stroke may have a length and a direction, the length being one of two predetermined lengths.
  • The one-dimensional character gesture may resemble the pen movement of actually handwriting the corresponding two-dimensional character.
  • Converting the one-dimensional signal into a one-dimensional stroke to be processed may include: identifying inflection points, an inflection point being a point at which the direction of the stroke changes over time, and including the start point and end point of the stroke; removing inflection points whose distance from the preceding inflection point is less than a predetermined pixel length; dividing the stroke into line segments based on the remaining inflection points; removing head and tail line segments whose length is less than a predetermined threshold; and normalizing the remaining line segments to obtain a set of normalized line segments as a chronological sequence of sub-strokes.
  • Recognizing the one-dimensional stroke as the corresponding character, based on the one-dimensional stroke to be processed and the templates stored in the character template database, may include: determining the number of sub-strokes included in the one-dimensional stroke; retrieving from the character template database the templates whose number of sub-strokes equals the determined number; for each determined template, calculating, in composition order, the probability that each sub-stroke of the template is presented as the corresponding sub-stroke of the one-dimensional stroke, thereby calculating the probability that the one-dimensional stroke corresponds to that template; and recognizing the one-dimensional stroke as the corresponding character based on the probability of each template.
  • The one-dimensional handwritten character input method may further include a language model database storing information indicating a probability distribution over characters, which assigns a probability to each word composed of characters; in the character recognition state, the processor switches between a character input mode and a word input mode.
  • The one-dimensional handwritten character input method may, in a vocabulary mode, on the graphical user interface of the display unit: display the input text in the text region; display the recognized character in the character region; and display the recognized word candidates in the word region.
  • The height of the text region is mapped to the full length of the one-dimensional direction of the user input interface, and the horizontal direction of the text region represents a time axis.
  • The handwriting recognition method may further include drawing the one-dimensional character gesture along the time axis in the text region to provide two-dimensional visual feedback of the one-dimensional character gesture.
  • A one-dimensional handwritten text input device may include: a user input interface on which the user manually makes a one-dimensional character gesture in chronological order with a body part or tool in contact with the interface; a detecting unit configured to detect the one-dimensional character gesture made on the user input interface and convert it into a one-dimensional signal, the one-dimensional signal being a signal having a value in only one dimension of the coordinate system; a character template database configured to store a template for each character, each template being a feature vector obtained by statistical feature extraction from a one-dimensional signal; and a processor configured to receive the one-dimensional signal from the detecting unit, convert it into a feature vector to be processed, and recognize the corresponding character based on the feature vector to be processed and the templates stored in the character template database.
  • The one-dimensional character gesture may be one of the following: a reciprocating swipe substantially along a straight line on the user input interface; a single-point press on the user input interface; or a rotation in a one-dimensional angular coordinate system performed on the user input interface.
  • the one-dimensional handwritten text input method and apparatus according to an embodiment of the present invention are particularly suitable for text input on a relatively narrow one-dimensional input space.
  • FIG. 1 shows a simplified block diagram of a one-dimensional handwritten text input device 100 in accordance with an embodiment of the present invention.
  • FIG. 2 shows a sub-stroke composition diagram of each character of a character template in accordance with one embodiment of the present invention.
  • FIG. 3 shows the distribution of normalized short, medium, and long sub-strokes in gestures actually made by users.
  • FIG. 4 is a block diagram showing the composition of sub-strokes of respective characters of a character template according to another embodiment of the present invention.
  • FIG. 5 shows a flow chart of a one-dimensional handwritten text input method performed by a one-dimensional handwritten text input device in accordance with an embodiment of the present invention.
  • FIG. 6 shows an example of a graphical user interface in accordance with an embodiment of the present invention.
  • FIG. 7 shows a flow diagram of a method 400 of converting a one-dimensional signal corresponding to a one-dimensional character gesture into a one-dimensional stroke to be processed, in accordance with an embodiment of the present invention.
  • FIG. 8 illustrates matching a one-dimensional stroke to be processed against the templates of each character stored in the character template database.
  • FIG. 9 shows a flow diagram of an exemplary method 600 of predicting an input vocabulary using a Bayesian method and a language model in a vocabulary input mode.
  • FIG. 10(a), (b), and (c) show examples of real devices that can serve as the implementation environment of the one-dimensional handwritten text input device of FIG. 1.
  • A one-dimensional character gesture is a reciprocating stroke along a straight line, carrying only length and direction information along the direction of the line, with no depth or height information perpendicular to it. Even if a gesture is not performed strictly on a straight line, only the length and direction information along the line is extracted, and depth and height information is ignored.
  • A one-dimensional signal is a signal that has a value in only one dimension of a coordinate system.
  • A "one-dimensional stroke" is a collection of one or more sub-strokes in a predetermined order, each sub-stroke being a line segment substantially on a straight line.
  • A single stroke is a single continuous stroke during which the user's finger or drawing tool does not leave the user input interface while making the one-dimensional character gesture.
  • FIG. 1 shows a simplified block diagram of a one-dimensional handwritten text input device 100 in accordance with an embodiment of the present invention.
  • the one-dimensional handwritten text input device 100 may include a user input interface 110 , a detecting unit 120 , a character template database 130 , and an identifying unit 140 .
  • FIG. 10 shows examples of real devices that can serve as the implementation environment of the one-dimensional handwritten text input device of FIG. 1: FIG. 10(a) shows smart glasses, FIG. 10(b) a smartphone, and FIG. 10(c) a smart watch.
  • The user input interface 110 allows the user to manually make a one-dimensional character gesture in chronological order with a finger or drawing tool in contact with the interface; the so-called one-dimensional character gesture is basically a reciprocating stroke along a straight line.
  • Examples of the user input interface 110 include the side of the temple of smart glasses, the side frame of a smartphone, the dial border of a smart watch, and so on; the user input interface 110 can be approximated as a straight line.
  • Sliding along a ring, for example on the surface of a wristband, is not straight, but the ring can be unrolled into a straight line. The one-dimensional character gesture of embodiments of the present invention can also be performed on such a closed loop, so circular user interfaces are also within the scope of the present invention.
  • "Substantially a reciprocating stroke on a straight line" here means that a one-dimensional character gesture falls within the present invention as long as it is approximately on a straight line, even if it is not strictly so.
  • For example, a first sub-stroke is swiped in one direction along a straight line A, and a second sub-stroke is swiped from the end of the first sub-stroke. Even if the line segment corresponding to the second sub-stroke does not lie strictly on line A, only the signal in one dimension (along line A) is extracted for the one-dimensional character gesture, and no signal is extracted in another dimension (e.g., along an axis perpendicular to line A). This matches the user's actual motion; that is, the present invention allows the user's finger to deviate slightly from the single coordinate axis.
  • A one-dimensional character gesture corresponding to one character corresponds to a single stroke, a single continuous stroke during which the user's finger or drawing tool does not leave the user input interface.
  • The detecting unit 120 is configured to detect the one-dimensional character gesture made by the user on the user input interface and convert it into a one-dimensional signal, i.e., a signal having a value in only one dimension of the coordinate system.
  • The detecting unit 120 includes, for example, a pressure sensor and an analog-to-digital converter, which together convert the one-dimensional character gesture input by the user into a one-dimensional signal that the computing device can process.
  • the character template database 130 is configured to store a template for each character, the template of each character corresponding to a one-dimensional stroke, which is a set of one or more sub-strokes in a predetermined order, each sub-stroke being a line segment substantially on a straight line.
  • a character template will be described in detail later.
  • the identification unit 140 is configured to receive the one-dimensional signal from the detecting unit, and convert the one-dimensional signal into a one-dimensional stroke to be processed, based on the one-dimensional stroke to be processed and the template of each character stored in the character template database, The one-dimensional stroke is identified as a corresponding character.
  • The handwriting recognition apparatus further includes a display unit 150, which can display the sub-strokes in a spatial order corresponding to the chronological order of the sub-strokes in the one-dimensional handwritten gesture, and/or display the recognized text (characters and/or words), and/or display the candidates when the user is required to choose among them.
  • The inventor recruited a large number of participants for character template writing experiments, and designed a variety of character templates from the perspectives of ease of memorization, ease of learning, input accuracy, and input efficiency.
  • FIG. 2 shows a sub-stroke composition diagram of each character of a character template in accordance with one embodiment of the present invention.
  • The sub-strokes have directions and lengths, with three possible lengths: short, medium, and long.
  • the arrows in the figure indicate the direction of each substroke.
  • The one-dimensional character gesture of each character is designed to mimic the two-dimensional handwriting motion of actually writing that character.
  • This handwriting-based one-dimensional gesture design is both easy to recognize and easy to remember. For example, writing the character "a" moves first down, then up, then down again, so the corresponding gesture consists of three sub-strokes: down, up, and down. In keeping with actual writing, the lengths of the three sub-strokes are designed to be equal.
  • FIG. 3 shows the distribution of normalized short, medium, and long sub-strokes in gestures actually made by users. It can be seen that the lengths of the three kinds of sub-strokes overlap considerably.
  • in addition, users occasionally make a small hook at the beginning and end of a stroke, which lowers the recognition accuracy of one-dimensional character gestures.
  • for this reason, the inventor optimized the design of the one-dimensional character gestures, including one or more of the following: no small adjustment stroke is made at the end of a gesture (i.e., no short stroke or "dot" is appended at the end of a gesture), which allows the same one-dimensional character gesture to correspond to more than one character, or conversely, multiple characters to correspond to the same one-dimensional character gesture (for example, the characters q and y shown in the figure).
  • FIG. 4 is a block diagram showing the composition of sub-strokes of respective characters of a character template according to another embodiment of the present invention.
  • in FIG. 4, the straight or curved segment with an arrow accompanying each letter indicates the traditional two-dimensional handwriting of that letter, with the circle marking the handwriting start point; to the right of each letter is shown the corresponding one-dimensional character gesture, which should be a reciprocating stroke on a linear axis but is unfolded in the direction perpendicular to the linear axis for ease of understanding and visualization.
  • a single stroke refers to a single continuous stroke, that is, in the one-dimensional character gesture making process of one character, the user finger or the drawing tool does not leave the user input interface, so that the user performs the character gesture efficiently.
  • each one-dimensional character gesture is not associated with more than four characters to keep it simple and efficient.
  • when mapping two-dimensional handwriting gestures directly to one-dimensional space, different processing is applied depending on the nature of the characters. For example, the letters "e", "z", and "s", unlike the other letters, are not mapped directly from their two-dimensional handwriting gestures to one-dimensional space; they are first rotated 90 degrees to the left, and the rotated two-dimensional gestures are then mapped to one-dimensional space. The letter x has two strokes, so the gesture of the letter n is assigned to it.
  • the traditional handwriting of some letters starts with a small backward stroke (a small stroke toward the character body), and the traditional handwriting of d ends with a small backward stroke (away from the character body).
  • these small strokes are removed from the one-dimensional character gestures of these characters.
  • the character template database described above is a preferred embodiment of the present invention and is not intended as a limitation; character templates may be changed, replaced, added, or deleted as needed.
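As a concrete illustration, the sketch below encodes a few letters of such a template database as ordered sub-stroke lists. The specific directions and lengths are hypothetical examples chosen for illustration, not the patent's actual templates.

```python
# Hypothetical character-template table: each template is an ordered list of
# sub-strokes, and each sub-stroke is (direction, normalized_length).
# direction: +1 = upward along the 1-D axis, -1 = downward.
# The entries below are illustrative assumptions, not the patent's real data.
CHAR_TEMPLATES = {
    "a": [(-1, 1.0), (+1, 1.0), (-1, 1.0)],  # down, up, down; equal lengths
    "l": [(-1, 1.0)],                        # one long down-stroke
    "i": [(-1, 0.5)],                        # one short down-stroke
}

def template_signature(char):
    """Return (sub-stroke count, direction sequence) for a character template."""
    strokes = CHAR_TEMPLATES[char]
    return len(strokes), tuple(d for d, _ in strokes)
```

Representing each template as a sequence of (direction, length) pairs makes the later matching steps (comparing sub-stroke counts, then sub-stroke by sub-stroke) straightforward.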
  • the handwritten text input may include character input and vocabulary input, and the corresponding text recognition likewise includes character recognition and vocabulary recognition, where character input/recognition is the basis of vocabulary input/recognition.
  • a one-dimensional handwritten text input method 200 executed by a one-dimensional handwritten text input device including a user input interface and a character template database, according to an embodiment of the present invention, will be described below with reference to FIG.
  • step S210: in response to the user manually making a one-dimensional character gesture in chronological order with a finger or drawing tool in contact with the user input interface, the one-dimensional character gesture made by the user on the user input interface is detected and converted into a one-dimensional signal, the one-dimensional signal being a signal having values in only one dimension of a coordinate system; a one-dimensional character gesture is essentially a reciprocating stroke along a straight line.
  • step S220: the one-dimensional signal from the detecting unit is received and converted into a one-dimensional stroke to be processed; based on the stroke to be processed and the template of each character stored in the character template database, the one-dimensional stroke is identified as the corresponding character, where the template of each character corresponds to a one-dimensional stroke, i.e., a set of one or more sub-strokes in a predetermined order, each sub-stroke being a line segment essentially on a straight line.
  • the character template database may employ, for example, a character template database as described with reference to FIG.
  • the one-dimensional handwritten text input device further includes a display unit.
  • the display unit may be, for example, a liquid crystal screen, or may be a head-mounted micro display screen (for example, Google's head-mounted micro display screen, which may form a virtual screen by projection) or the like.
  • the handwritten text input method further includes: displaying, by means of the display unit, each sub-stroke in a spatial order corresponding to the chronological order of the sub-strokes contained in the one-dimensional handwritten gesture, displaying the recognized text (characters and/or words), and displaying the candidates when the user is required to make a selection among them.
  • the one-dimensional handwritten text input device can operate in a character input mode or a vocabulary input mode.
  • the device can work in the vocabulary input mode by default, and the user can switch to the character input mode by performing an input mode switching operation.
  • the input mode switching operation may be that the finger stays on the user input interface for more than a predetermined threshold, such as more than 300 milliseconds, such that the user can switch between the character input mode and the vocabulary input mode without lifting the finger.
  • Other input mode switching operations can be designed as needed.
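The dwell-based mode switch described above can be sketched as a small state machine. The 300 ms threshold comes from the text; the class and method names are hypothetical, and the cancel-on-move behavior is an assumption about how a practical implementation would avoid spurious switches.

```python
DWELL_THRESHOLD = 0.3  # seconds; the text suggests a threshold of ~300 ms

class ModeSwitcher:
    """Toggle between vocabulary and character input mode on a long dwell."""
    def __init__(self):
        self.mode = "vocabulary"   # vocabulary input mode by default, per the text
        self._down_at = None       # time the finger last came to rest

    def on_touch_down(self, t):
        self._down_at = t

    def on_touch_move(self, t):
        # movement restarts the dwell timer (assumption: moving cancels a dwell)
        self._down_at = t

    def poll(self, t):
        """Check the dwell at time t; switch mode if the threshold is exceeded."""
        if self._down_at is not None and t - self._down_at >= DWELL_THRESHOLD:
            self.mode = "character" if self.mode == "vocabulary" else "vocabulary"
            self._down_at = None   # require a fresh dwell for the next switch
        return self.mode
```

Because the switch is triggered by dwelling rather than lifting, the user can change modes without breaking contact with the input interface, as the text notes.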
  • FIG. 6 illustrates an example of a graphical user interface that can be displayed on a virtual screen of Google glasses in the case where the one-dimensional handwritten text input device is Google glasses, in accordance with an embodiment of the present invention.
  • the graphical user interface 300 is divided into three regions: a text region 310, a character region 320, and a vocabulary region 330.
  • the text area 310 displays the input text
  • the character area 320 displays the recognized character candidates
  • the vocabulary area 330 displays the recognized word candidates.
  • the two-dimensional visual feedback of the one-dimensional character gesture is also displayed in the text area: the height of the text area is mapped to the full length of the user input interface in the one-dimensional direction of the one-dimensional character gesture (for example, the height of the text area corresponds to the touchable length range of the smart-glasses temple), and the horizontal direction of the text area represents a time axis. The handwriting recognition method further includes displaying the one-dimensional character gesture along the time axis in the text area, thereby providing two-dimensional visual feedback of the one-dimensional character gesture (for example, the gesture shown in the figure).
  • the processor switches the highlighted character candidate (eg, to the next candidate "la").
  • after a vocabulary item is selected, the one-dimensional handwritten text input device automatically adds a space after it.
  • the one-dimensional handwritten text input method further includes: in response to the detecting unit detecting a character separation gesture made by the user, distinguishing consecutive one-dimensional character gestures to end the input of the previous character, and preparing Receive input for the next character.
  • the character separation gesture can be a finger dwell time that exceeds a predetermined threshold, clicked, pressed, and the like.
  • a one-dimensional character gesture corresponds to a single stroke, and a single stroke is a single continuous stroke: while the one-dimensional character gesture is being made, the user's finger or drawing tool does not leave the user input interface, thereby improving the efficiency of character input.
  • the sub-strokes of the strokes of the templates of the individual characters have a length and a direction, wherein the length is one of two predetermined lengths. As mentioned earlier, this can improve the accuracy and efficiency of the input.
  • the one-dimensional character gesture is similar to the actual two-dimensional character handwriting trend process and is a projection of a two-dimensional character handwriting gesture on a one-dimensional space.
  • a method 400 of converting a one-dimensional signal corresponding to a one-dimensional character gesture into a one-dimensional stroke to be processed according to an embodiment of the present invention is described below with reference to FIG.
  • an inflection point is identified.
  • the inflection point refers to a point at which the stroke direction of the stroke changes during the formation of the one-dimensional stroke, and the inflection point includes a start point and an end point of the stroke.
  • step 420: inflection points whose distance from the temporally preceding inflection point is less than a predetermined pixel length are removed.
  • this step removes such inflection points as noise.
  • the distance between two inflection points should not be too short; if the distance between two temporally adjacent inflection points is less than a predetermined threshold, for example less than 20 pixels, the later inflection point can be regarded as noise and removed.
  • step 430 the line segments are divided based on the remaining inflection points.
  • the two inflection points adjacent in time constitute a line segment, that is, a candidate sub-stroke.
  • step 440: line segments in the head and tail whose lengths are less than a predetermined threshold are removed, and the remaining line segments are normalized to obtain a set of normalized line segments as a chronologically ordered sequence of sub-strokes.
  • removing head and tail line segments shorter than a predetermined threshold in this step is intended to eliminate the small hooks users occasionally make unintentionally at the beginning and end of a stroke.
  • Normalizing the line segments here can be done by dividing the length of each line segment by the length of the longest sub-stroke.
  • the longest substroke here may be, for example, the longest line segment among the respective line segments described above.
  • a set of line segments that is, a sequence of sub-strokes arranged in chronological order, is obtained as a data representation of a one-dimensional character gesture (also referred to herein as a one-dimensional stroke) made by the user.
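The four steps of method 400 can be sketched end to end as follows. The function takes a time-ordered list of one-dimensional positions; the 20-pixel noise threshold follows the text, while the function name and the tail-hook threshold value are assumptions for illustration.

```python
def signal_to_substrokes(samples, noise_px=20, tail_px=20):
    """Convert a 1-D position-over-time signal into normalized sub-strokes.

    samples: list of 1-D positions in chronological order.
    Returns a list of (direction, normalized_length) tuples.
    """
    # Step 410: find inflection points (direction changes), plus start and end.
    turns = [0]
    for i in range(1, len(samples) - 1):
        prev_d = samples[i] - samples[i - 1]
        next_d = samples[i + 1] - samples[i]
        if prev_d * next_d < 0:
            turns.append(i)
    turns.append(len(samples) - 1)

    # Step 420: drop inflection points too close to the preceding kept one.
    kept = [turns[0]]
    for idx in turns[1:]:
        if abs(samples[idx] - samples[kept[-1]]) >= noise_px:
            kept.append(idx)

    # Step 430: temporally adjacent inflection points bound candidate sub-strokes.
    segments = [samples[b] - samples[a] for a, b in zip(kept, kept[1:])]

    # Step 440: remove short head/tail hooks, then normalize by the longest segment.
    while segments and abs(segments[0]) < tail_px:
        segments.pop(0)
    while segments and abs(segments[-1]) < tail_px:
        segments.pop()
    if not segments:
        return []
    longest = max(abs(s) for s in segments)
    return [(1 if s > 0 else -1, abs(s) / longest) for s in segments]
```

For a swipe that goes up, down, and up again by equal amounts, this yields three equal-length sub-strokes with alternating directions, matching the "down, up, down"-style templates described earlier.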
  • the one-dimensional stroke is identified as the corresponding character based on the one-dimensional stroke to be processed and the template of each character stored in the character template database.
  • any machine recognition method can be used to perform the recognition, such as template matching: for example, the distance in feature space between the feature vector corresponding to the one-dimensional stroke and the feature vector of each character's template is computed directly, and the nearest template is chosen as the recognition result (or several candidates are selected as the recognition result).
  • the Bayesian method can be utilized to determine the probability that a one-dimensional character gesture made by a user corresponds to a certain character.
  • FIG. 8 shows a flow diagram of a method 500 of identifying a one-dimensional stroke as a corresponding character based on a template of each character stored in a one-dimensional stroke and character template database to be processed.
  • if the number of sub-strokes contained in the one-dimensional stroke to be recognized is n, the probability is calculated only for templates of characters whose number of sub-strokes is also n; for a character template whose number of sub-strokes is not equal to n, the probability that the one-dimensional stroke to be recognized is that character is zero.
  • step S510 the number of sub-strokes included in the one-dimensional stroke is determined. Then, it proceeds to step S520.
  • step S520 the character template database is retrieved to determine a template in which the number of substrokes in the template is equal to the determined number of substrokes included in the one-dimensional stroke. Then, it proceeds to step S530.
  • step S530 for each of the determined templates, determining, according to a composition order, a probability that each sub-stroke in the template is presented as a corresponding sub-stroke in the one-dimensional stroke, and calculating that the one-dimensional stroke corresponds to The probability of this template.
  • let the one-dimensional stroke to be processed be S, with constituent sub-strokes s1, s2, ..., sn, and let the 26 character templates be T[1], T[2], ..., T[j], ..., T[26],
  • with the sub-strokes constituting the character template T[j] being T[j]1, T[j]2, ..., T[j]n. We then need to calculate P(T[j]|S). According to the Bayes formula, the following formula (1) holds: P(T[j]|S) = P(S|T[j]) P(T[j]) / P(S).
  • the probability P(S) is common to the calculation of the posterior probability P(T[j]|S) for every template, so its influence is the same for all of them and the term can be disregarded.
  • the probability P(T[j]), i.e., the prior probability that the character appears, can be obtained by prior statistics, i.e., it is known. In one example, for simplicity, the probabilities of occurrence of all characters are taken to be equal. Assuming the sub-strokes are independent, formula (2) holds: P(S|T[j]) = P(s1|T[j]1) P(s2|T[j]2) ... P(sn|T[j]n).
  • the probability P(si|T[j]i), i.e., the probability that the i-th sub-stroke T[j]i of the template T[j] is presented as the sub-stroke si in the user's one-dimensional character gesture S, can be obtained by prior statistics; from these quantities, the probability P(T[j]|S) that the user means the character T[j] when making the one-dimensional character gesture S can be calculated.
  • after step S530 is completed, the process proceeds to step S540.
  • step S540 the one-dimensional stroke is identified as a corresponding character based on the probability that the one-dimensional stroke corresponds to each template.
  • specifically, the template corresponding to the maximum probability is determined, and the one-dimensional stroke is identified as the character of that template; in other words, the character corresponding to the one-dimensional character gesture made by the user is recognized.
  • a display unit is further included, the display screen of the display unit (a real screen or a virtual display screen) being configured, for example as shown in FIG. 6, to include a character area 320 and a text area 310, the text area 310 displaying the input text.
  • the character area 320 is for displaying a predetermined number of character candidates when there are multiple character candidates; for example, three character candidates are displayed in the character area 320 in FIG. 6.
  • the character recognition method further includes: in response to detecting that the user makes a one-dimensional character gesture end determination gesture, displaying the determined character candidate corresponding to the one-dimensional character gesture, the one-dimensional character gesture ending determination gesture Indicates the end of the one-dimensional character gesture.
  • the one-dimensional character gesture end determination gesture may be a pause gesture, for example, in the case of smart glasses, keeping the finger motionless for a predetermined time on the temple side border as the user input interface.
  • in response to a candidate character switching gesture, the highlight is removed from the currently highlighted candidate character and the candidate character switched to is highlighted.
  • candidate characters having the highest probability value are highlighted as highlighted candidate characters by default.
  • the candidate character switching gesture may be, for example, a finger movement gesture, and each time a predetermined distance is moved, the candidate characters adjacent to the currently highlighted candidate character in the moving direction are switched to the highlighted candidate characters.
  • the character selection gesture can act to lift the finger away from the user input interface.
  • the operation of character input proceeds as follows: the finger swipes on the temple; when the one-dimensional gesture corresponding to a character ends, the user's finger stays still, and the user can observe the candidate characters displayed on the display screen; the highlighted character is then switched by moving the finger, and finally a character is selected by lifting the finger, so that the character is displayed in the text area of the display screen.
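The highlight-and-select interaction just described can be sketched as a small helper. The class and method names are hypothetical; the behavior (highest-probability candidate highlighted first, one switch per predetermined movement, selection on lift) follows the text.

```python
class CandidateSelector:
    """Sketch of highlight switching among recognized character candidates."""
    def __init__(self, candidates):
        self.candidates = candidates  # sorted by probability, highest first
        self.index = 0                # highest-probability candidate is
                                      # highlighted by default

    def on_move(self, step):
        """Each predetermined-distance finger move shifts the highlight."""
        self.index = max(0, min(len(self.candidates) - 1, self.index + step))
        return self.candidates[self.index]

    def on_lift(self):
        """Lifting the finger selects the currently highlighted candidate."""
        return self.candidates[self.index]
```

Clamping the index keeps the highlight within the displayed candidates even if the finger keeps moving past the last one.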
  • a one-dimensional character gesture is a swipe of the finger along the longitudinal direction of the temple.
  • when the finger comes to rest near one of the two ends of the temple at the end of a character input (for ease of description, call them end A and end B; suppose the finger is at end A), it is difficult to continue moving in the direction of travel before the rest (i.e., toward end A), while swiping the finger toward end B is easy.
  • the display order of the character candidates can therefore be arranged so that character switching follows the swipe from end A toward end B: the highest-probability candidate is highlighted first, the candidates are displayed on the screen in order of probability from high to low, and each time the finger moves toward end B, the candidate with the next highest probability is highlighted.
  • a character can be deleted by a character deletion gesture, which can be, for example, swiping upward or pressing the photo button.
  • the user prefers the gesture of pressing the photo button, which may be because the upward swipe gesture on the temple is not very easy.
  • An exemplary method 600 of predicting an input vocabulary using a Bayesian method and a language model in a vocabulary input mode is described below with reference to FIG.
  • the one-dimensional handwritten text input device further includes a language model database storing information that, via a probability distribution over characters, assigns probabilities to words composed of characters. It should be noted that this covers both the case where the language model database is stored locally on the one-dimensional handwritten text input device and the case where it is located on a remote device.
  • in the vocabulary input mode, the recognition unit consults the language model database to compute in real time the probability of a specific vocabulary item being input given the one-dimensional stroke sequence.
  • the individual one-dimensional strokes have already been separated from one another, for example by character separation gestures.
  • let the input sequence of k one-dimensional strokes be I, a sequence of k one-dimensional strokes S[1], S[2], ..., S[i], ..., S[k], each one-dimensional stroke corresponding to one character.
  • the probability of inputting the vocabulary item W can be expressed by formula (3): P(W|I) = P(W, I) / P(I).
  • P(I) is a common term when calculating the posterior probability of all vocabulary W, and may be disregarded.
  • the joint probability P(W, I) can be expressed by formula (4): P(W, I) = P(W) P(I|W), where P(I|W) factors over the letters of W under an independence assumption.
  • denoting the template of the letter Li by T(Li), formula (6) can be given: P(S[i]|Li) = P(S[i]|T(Li)).
  • in this way, the posterior probability of vocabulary items having the same number of characters as the input character sequence can be calculated. If no word matching the number of letters is found, or the number of matching words found is insufficient, the probability of vocabulary items with more letters can also be calculated, which generally requires reference to the language model. For calculating vocabulary with reference to a language model, see the article entitled "Language modeling for soft keyboards" by Goodman et al., IUI '02, pp. 194-195; details are not repeated here.
  • step S620: after the probability that the one-dimensional stroke sequence corresponds to each vocabulary item is calculated, the candidate vocabulary items are ranked based on these probabilities and displayed on the display unit for user selection, for example in the vocabulary area 330 shown in FIG. 6.
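The word-scoring step of method 600 can be sketched as below, dropping the common term P(I) as the text describes. The lexicon, prior, and per-sub-stroke probability function are assumed inputs; restricting to words of the same length as the stroke sequence mirrors the same-letter-count case discussed above.

```python
def word_scores(stroke_seq, lexicon, templates, substroke_prob, word_prior):
    """Score candidate words for a sequence of 1-D strokes (P(I) dropped).

    stroke_seq: list of k strokes, each a list of (direction, length) sub-strokes.
    lexicon: iterable of candidate words.
    templates: dict letter -> template sub-stroke list.
    word_prior(w): P(w), assumed to come from the language model database.
    Returns (word, score) pairs sorted from most to least probable.
    """
    def stroke_given_letter(stroke, letter):
        tmpl = templates.get(letter)
        if tmpl is None or len(tmpl) != len(stroke):
            return 0.0                      # sub-stroke counts must match
        p = 1.0
        for t, s in zip(tmpl, stroke):
            p *= substroke_prob(t, s)
        return p

    scores = {}
    for w in lexicon:
        if len(w) != len(stroke_seq):       # same-length words first, per the text
            continue
        p = word_prior(w)                   # P(W) from the language model
        for stroke, letter in zip(stroke_seq, w):
            p *= stroke_given_letter(stroke, letter)
        scores[w] = p
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

The ranked list maps directly onto the vocabulary area of the graphical user interface, where the top candidates are offered for selection.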
  • a one-dimensional handwritten text input method and apparatus has been described with reference to the accompanying drawings, and provides a text input method and apparatus that are particularly suitable for input on a relatively narrow one-dimensional input space.
  • a means for a single character to be represented in a single stroke is provided, thereby providing an efficient input method.
  • an easy-to-learn and efficient one-dimensional writing gesture is designed by simulating a two-dimensional character writing gesture and determining a sub-stroke length type, and providing an efficient and accurate text input method and input. device.
  • the one-dimensional handwritten text input device and method of the embodiments of the present invention have been illustrated taking alphabetic letters and word input as examples, but the one-dimensional handwritten text input device and method of the present invention are also suitable for input of characters of other languages, as long as two-dimensional handwriting gestures can be mapped to one-dimensional handwriting gestures.
  • the application of the embodiments of the present invention has been illustrated on an approximately one-dimensional input space.
  • however, the embodiments of the present invention are also applicable to a two-dimensional input space: for example, on a touch screen, one-dimensional character gestures can likewise be performed (the detecting unit then extracts only the one-dimensional signal corresponding to the one-dimensional character gesture), which can leave more space for uses such as screen display.
  • the form of user input has been described taking contact of a finger or drawing tool with the user input interface as an example.
  • however, the present invention is not limited thereto; the user may also make the one-dimensional character gesture in a non-contact manner. For example, the user may hold a laser pen and make a one-dimensional character gesture of reciprocating strokes with it; the user's input can be obtained as long as the detecting unit can detect such a laser trajectory.
  • alternatively, the detecting unit may recognize such a gesture by, for example, image processing, whereby the user's handwriting input can also be obtained.
  • smart glasses are mainly used as an application example of the input method of the present invention, but this is merely an example, and the input method of the present invention can be applied to devices such as smart watches, smart bracelets, smart phones, and the like.
  • the user's one-dimensional character gestures have been described as made with a finger, but this is merely an example; other body parts of the user, such as a toe or a wrist, may be used as needed.
  • an external tool, such as a stylus pen, may also be used for input instead of a body part.
  • a one-dimensional handwritten character input device including: a user input interface, for a user to manually make a one-dimensional character gesture in a chronological order by a body part or a tool in contact with the user input interface.
  • the detecting unit is configured to detect a one-dimensional character gesture made by the user on the user input interface, and convert the one-dimensional character gesture into a one-dimensional signal, wherein the one-dimensional signal has a value in only one dimension in the coordinate system.
  • a character template database configured to store a template for each character, the template of each character being a feature vector obtained by statistical feature extraction on the one-dimensional signal; and an identifying unit configured to receive the one-dimensional signal from the detecting unit, convert it into a feature vector to be processed, and identify the corresponding character based on the feature vector to be processed and the template of each character stored in the character template database.
  • the one-dimensional character gesture may be one of: a reciprocating swipe substantially on a straight line on the user input interface; a single point depression on the user input interface; Rotation on a one-dimensional angular coordinate system performed on the user input interface.
  • a one-dimensional handwritten character input device is provided, including: a user input detecting unit that obtains one-dimensional character gestures made by the user in chronological order, a one-dimensional character gesture being essentially a reciprocating swipe along a straight line, and converts the one-dimensional character gesture into a one-dimensional signal, the one-dimensional signal being a signal having values in only one dimension of a coordinate system;
  • a character template database configured to store a template for each character, the template of each character corresponding to a one-dimensional stroke, i.e., a set of one or more sub-strokes in a predetermined order, each sub-stroke being a line segment essentially on a straight line; and
  • an identification unit configured to receive the one-dimensional signal from the detecting unit, and converting the one-dimensional signal into a one-dimensional stroke to be processed, The one-dimensional stroke is identified as a corresponding character based on the one-dimensional stroke to be processed and the template of each character stored in the character template database.
  • the one-dimensional character gesture may be made on the user input interface in a contact manner by a user body part or by means of a tool, or may be made in a non-contact manner; the user detecting unit may include an image capture and processing device, a pressure detecting device, an infrared detecting device, and the like.


Abstract

A one-dimensional handwritten text input method and device. The one-dimensional handwritten text input device comprises: a user input interface (110) on which a user makes one-dimensional character gestures, a one-dimensional character gesture being essentially a reciprocating swipe along a straight line; a detecting unit (120) configured to detect the one-dimensional character gesture and convert it into a one-dimensional signal; a character template database (140) configured to store a template for each character, the template of each character corresponding to a one-dimensional stroke, i.e., a set of one or more sub-strokes in a predetermined order, each sub-stroke being a line segment essentially on a straight line; and an identification unit (130) configured to receive the one-dimensional signal from the detecting unit, convert it into a one-dimensional stroke to be processed, and identify the one-dimensional stroke as the corresponding character based on the stroke to be processed and the template of each character stored in the character template database. The above text input method and device are suitable for input on a relatively narrow, elongated one-dimensional input space.

Description

One-dimensional handwritten text input device and one-dimensional handwritten text input method. Technical Field
The present invention relates generally to handwritten text input technology, and more particularly to a one-dimensional handwritten text input device and method.
Background Art
With the rapid development of smart devices, the demand for text input on smart devices has grown ever stronger. However, smart devices are often limited by their physical form factor, so that the input interface is constrained, making text input methods based on physical or soft keyboards no longer applicable; text input on these smart devices thus becomes difficult. Examples of such smart devices include smart watches, smart glasses, and smart wristbands.
Text input methods can be roughly divided into selection-based methods and handwriting-based methods. Selection-based methods provide, for example, a physical or virtual keyboard in which each key represents a letter or digit; selecting a letter (e.g., by tapping the corresponding key) inputs that letter, and selecting several letters forms a word. Handwriting-based methods generally simulate, by drawing, the shape of a character (or word) or of a defined substitute for it, and the smart computing device recognizes the drawing to obtain the input text.
Some techniques have been proposed for entering text on interfaces of limited area, for example the letter-encoding method of Reference 1, the single-stroke gestures of References 2 and 3, the ambiguous keyboard of Reference 4, and the gesture keyboards of References 5 and 6. However, most of these techniques target two-dimensional decoding or interfaces with a limited number of buttons.
Reference 1: MacKenzie, I., Soukoreff, R. and Helga, J. 1 thumb, 4 buttons, 20 words per minute: Design and evaluation of H4-Writer. UIST '11, (2011), 471-480.
Reference 2: Blickenstorfer, C.H. (1995, January). Graffiti: Wow! Pen Computing Magazine, pp. 30-31.
Reference 3: Gu, Z., Xu, X., Chu, C. and Zhang, Y. To Write not Select, a New Text Entry Method Using Joystick. Human-Computer Interaction: Interaction Technologies, (2015), 35-43.
Reference 4: Poirier, F. and Belatar, M. UniGlyph: only one keystroke per character on a 4-button minimal keypad for key-based text entry. HCI International 2007, (2007), 479-483.
Reference 5: Kristensson, P. and Zhai, S. SHARK2: a large vocabulary shorthand writing system for pen-based computers. UIST '04, (2004), 43-52.
Reference 6: Zhai, S. and Kristensson, P. Shorthand writing on stylus keyboard. CHI '03, (2003), 97-104.
Summary of the Invention
The present invention is concerned with handwriting-based input methods, and in particular with handwriting-based input on a one-dimensional interface.
The present invention aims to provide methods and devices for handwriting input and recognition on a restricted input interface, for example a one-dimensional input interface. Examples of such one-dimensional input interfaces include: the lengthwise surface of a smart-glasses temple, the side bezel of a smartphone, the side face of a smart wristband, and so on.
According to one aspect of the present invention, a one-dimensional handwritten text input device is provided, which may include: a user input interface on which a user manually makes one-dimensional character gestures in chronological order with a body part or a drawing tool in contact with the user input interface, a one-dimensional character gesture being essentially a reciprocating swipe along a straight line; a detecting unit configured to detect the one-dimensional character gestures made by the user on the user input interface and convert them into a one-dimensional signal, the one-dimensional signal being a signal having values in only one dimension of a coordinate system; a character template database configured to store a template for each character, the template of each character corresponding to a one-dimensional stroke, i.e., a set of one or more sub-strokes in a predetermined order, each sub-stroke being a line segment essentially on a straight line; and an identification unit configured to receive the one-dimensional signal from the detecting unit, convert it into a one-dimensional stroke to be processed, and identify the one-dimensional stroke as the corresponding character based on the stroke to be processed and the template of each character stored in the character template database.
The one-dimensional handwritten text input device according to an embodiment of the present invention may further include: a display unit; wherein at least one character template stored in the character template database corresponds to multiple characters, and wherein, when the processor identifies that a one-dimensional stroke corresponds to multiple characters as character candidates, the processor performs processing such that the multiple character candidates are displayed on the display, one of which is highlighted at any given moment; and
in response to the detecting unit detecting a selection gesture made by the user, the processor selects the currently highlighted character candidate as the character recognition result; and in response to the detecting unit detecting a movement gesture made by the user, the processor switches the highlighted character candidate.
In the one-dimensional handwritten text input device according to an embodiment of the present invention, in response to the detecting unit detecting a character separation gesture made by the user, the processor may distinguish consecutive one-dimensional character gestures from one another, ending the input of the previous character and preparing to receive the input of the next character.
In the one-dimensional handwritten text input device according to an embodiment of the present invention, the character separation gesture may be a pause gesture, corresponding to the user's finger or drawing tool pausing on the user input interface for longer than a predetermined threshold.
In the one-dimensional handwritten text input device according to an embodiment of the present invention, a one-dimensional character gesture may correspond to a single stroke, a single stroke being a single continuous stroke: while the one-dimensional character gesture is being made, the user's finger or drawing tool does not leave the user input interface.
In the one-dimensional handwritten text input device according to an embodiment of the present invention, the sub-strokes of the stroke of each character's template may have a length and a direction, the length being one of two predetermined lengths.
In the one-dimensional handwritten text input device according to an embodiment of the present invention, the one-dimensional character gesture may resemble the trajectory of the actual two-dimensional handwriting of the character.
In the one-dimensional handwritten text input device according to an embodiment of the present invention, at least one of the character templates stored in the character template database may correspond to multiple characters.
In the one-dimensional handwritten text input device according to an embodiment of the present invention, for the character template of a particular character, the character may first be rotated by 90 degrees, and the two-dimensional handwriting gesture then mapped to the one-dimensional character gesture.
In the one-dimensional handwritten text input device according to an embodiment of the present invention, the processor converting the one-dimensional signal into a one-dimensional stroke to be processed may include: identifying inflection points, an inflection point being a point at which the direction of stroke travel changes over time during formation of the one-dimensional stroke, the inflection points including the start point and end point of the stroke; removing those inflection points whose distance from the temporally preceding inflection point is less than a predetermined pixel length; dividing the stroke into line segments based on the remaining inflection points; removing line segments whose length is less than a predetermined threshold; and normalizing the remaining line segments to obtain a set of normalized line segments as a chronologically ordered sequence of sub-strokes.
In the one-dimensional handwritten text input device according to an embodiment of the present invention, identifying the one-dimensional stroke as the corresponding character based on the stroke to be processed and the template of each character stored in the character template database may include: determining the number of sub-strokes contained in the one-dimensional stroke; searching the character template database to determine the templates whose number of sub-strokes equals the determined number of sub-strokes contained in the one-dimensional stroke; for each of the determined templates, determining, in composition order, the probability that each sub-stroke is presented as the corresponding sub-stroke of the one-dimensional stroke, and computing the probability that the template is presented as the one-dimensional stroke; and identifying the one-dimensional stroke as the corresponding character based on the probability of each template being presented as the one-dimensional stroke.
The one-dimensional handwritten text input device according to an embodiment of the present invention may further include a display unit configured to: display a predetermined number of character candidates in visual form when multiple character candidates exist; in response to detecting that the user makes a one-dimensional character gesture end determination gesture indicating that the one-dimensional character gesture has ended, display the determined character candidates corresponding to the one-dimensional character gesture; and, in response to detecting that the user makes a character selection gesture, select the currently highlighted candidate character and display it in the display area for the input text, or, in response to detecting that the user makes a candidate character switching gesture, switch the currently highlighted candidate character and display the newly highlighted candidate in highlighted form.
In the one-dimensional handwritten text input device according to an embodiment of the present invention, the candidate character switching gesture may be a movement of the finger in the direction from the currently highlighted candidate character toward the candidate character to be selected; when the end point of the one-dimensional character gesture is close to a first boundary of the user input interface in the one-dimensional direction, the candidate characters are displayed in order of probability from high to low along the direction from the first boundary toward the second boundary.
In the one-dimensional handwritten text input device according to an embodiment of the present invention, the one-dimensional character gesture end determination gesture may be keeping the finger motionless on the user input interface for a predetermined time, the character selection gesture may be lifting the finger from the user input interface, and the candidate character switching gesture may be moving the finger a predetermined distance.
The one-dimensional handwritten text input device according to an embodiment of the present invention may further include a language model database storing information that, by means of a probability distribution over characters, assigns probabilities to words composed of characters; in the text recognition state, the processor switches between a character input mode and a word input mode, wherein in the word input mode the language model database is consulted to compute in real time the probability of a specific word being input given the one-dimensional stroke sequence, and the candidate words are ranked based on these probabilities and displayed on the display unit for the user to select.
In the one-dimensional handwritten text input device according to an embodiment of the present invention, in the vocabulary mode, the graphical user interface of the display unit includes three regions: a text region for displaying the input text; a character region for displaying recognized characters; and a word region for displaying recognized word candidates.
In the one-dimensional handwritten text input device according to an embodiment of the present invention, the height of the text region may be mapped to the full length of the user input interface in the one-dimensional direction of the one-dimensional character gesture, the horizontal direction of the text region may represent a time axis, and the one-dimensional character gesture may be displayed along the time axis in the text region, thereby providing two-dimensional visual feedback of the one-dimensional character gesture.
The one-dimensional handwritten text input device according to an embodiment of the present invention may be a smart wearable device.
根据本发明的另一方面,提供了一种一维手写文字输入设备执行的一维手写文字输入方法,一维手写文字输入设备可以包括用户输入界面和字符模板数据库,所述一维手写文字输入方法包括:响应于用户以手指或描画工具手动以接触用户输入界面的方式按时间顺序做出一维字符手势,检测用户在用户输入界面上做出的一维字符手势,将该一维字符手势转为一维信号,所述一维信号为在坐标***中仅在一个维度上有值的信号,所谓一维字符手势基本为在一条直线上的往复划动;接收来自检测单元的该一维信号,将该一维信号转换为待处理的一维笔划,基于此待处理的一维笔划和字符模板数据库中存储的每个字符的模板,将该一维笔划识别为相应字符,其中每个字符的模板对应于一维笔划,为按照预定顺序的一个或多个子笔划的集合,各个子笔划为基本在一条直线上的线段。
根据本发明实施例的一维手写文字输入方法,其中,字符模板数据库中存储的至少一个字符模板对应于多个字符,其中,当识别到一维笔划对应于多个字符作为字符候选时,执行处理使得在显示器上显示该多个字符候选,其中在任一时刻多个字符候选之一被高亮显示;以及响应于检测单元检测到用户做出的选择手势,选择当前被高亮显示的字符候选作为字符识别结果;以及响应于检测单元检测到用户做出的移动手势,处理器切换被高亮的字符候选。
根据本发明实施例的一维手写文字输入方法,其中响应于检测单元检测到用户做出的字符分隔手势,将连续的一维字符手势区分开,以结束前一字符的输入,并准备接收下一字符的输入。
根据本发明实施例的一维手写文字输入方法,所谓一维字符手势可以对应于单个笔划,单个笔划为单个连续的笔划,在一维字符手势做出过程中,用户手指或者描画工具不离开用户输入界面。
根据本发明实施例的一维手写文字输入方法，各个字符的模板的笔划的子笔划可以具有长度和方向，其中长度可以为两种预定长度之一。
根据本发明实施例的一维手写文字输入方法,所述一维字符手势可以与实际二维字符手写走势过程相似。
根据本发明实施例的一维手写文字输入方法,所述将该一维信号转换为待处理的一维笔划可以包括:识别拐点,指该一维笔划形成过程中随时间进行笔划行进方向发生变化的点,拐点包括笔划的开始点和结束点;去除拐点中距离时间上在前拐点小于预定像素长度的拐点;基于剩余的拐点划分线段;去除掉首尾线段中长度小于预定阈值的线段,对剩余线段进行归一化,得到归一化后的线段的集合作为按照时间顺序的子笔划的序列。
根据本发明实施例的一维手写文字输入方法,所述基于此待处理的一维笔划和字符模板数据库中存储的每个字符的模板,将该一维笔划识别为相应字符可以包括:确定该一维笔划所包含的子笔划的数目;检索字符模板数据库,确定模板中子笔划数目等于所述确定的该一维笔划所包含的子笔划的数目的模板;对于所确定的模板中的每个,按照构成顺序,确定该模板中的每个子笔划呈现为所述一维笔划中的对应子笔划的概率,计算所述一维笔划对应于该模板的概率;以及基于所述一维笔划对应于各个模板的概率,将所述一维笔划识别为相应的字符。
根据本发明实施例的一维手写文字输入方法,还可以包括:语言模型数据库,存储有指示借由字符的概率分布,指派概率给由字符组成的词的信息;所述处理器在文字识别状态下,在字符输入模式和词输入模式之间切换,其中在词输入模式下,参考语言模型数据库实时计算在给定一维笔划序列条件下输入特定词汇的概率,基于输入特定词汇的概率,对候选词汇进行排序,并显示在显示单元上,供用户选择。
根据本发明实施例的一维手写文字输入方法,可以在词汇模式下,在显示单元的图形用户界面上:在文本区域上显示输入的文本;在字符区域上显示识别的字符;以及在词区域上显示识别的词候选。
根据本发明实施例的一维手写文字输入方法,其中所述文本区域的高度映射到用户输入界面在一维字符手势的一维方向上的全部长度,所述文本区域的水平方向表示时间轴,所述手写识别方法还可以包括:在所述文本区域上沿时间轴显示一维字符手势,从而提供一维字符手势的二维可视反馈。
根据本发明的另一方面，提供了一种一维手写文字输入设备，可以包括：用户输入界面，供用户以身体部位或工具手动以接触该用户输入界面的方式按时间顺序做出一维字符手势；检测单元，配置为检测用户在用户输入界面上做出的一维字符手势，将该一维字符手势转为一维信号，所述一维信号为在坐标系统中仅在一个维度上有值的信号；字符模板数据库，配置为存储每个字符的模板，每个字符的模板为对所述一维信号进行统计性特征抽取获得的特征向量；以及处理器，配置为接收来自检测单元的该一维信号，将该一维信号转换为待处理的特征向量，基于此待处理的特征向量和字符模板数据库中存储的每个字符的模板，识别得到相应字符。
根据本发明实施例的一维手写文字输入设备,所述一维字符手势可以为下列项目之一:在所述用户输入界面上进行的基本在一条直线上的往复划动;在所述用户输入界面上进行的单点下压;以及在所述用户输入界面上进行的一维角坐标系上的转动。
根据本发明实施例的一维手写文字输入方法和设备,特别适合于在相对狭长的一维输入空间上进行文字输入。
附图说明
从下面结合附图对本发明实施例的详细描述中,本发明的这些和/或其它方面和优点将变得更加清楚并更容易理解,其中:
图1示出了根据本发明实施例的一维手写文字输入设备100的简化结构框图。
图2示出了根据本发明一个实施例的字符模板的各个字符的子笔划构成图。
图3示出了用户实际做出的手势中归一化后的短、中和长子笔划的分布。
图4示出了根据本发明另一实施例的字符模板的各个字符的子笔划构成示意图。
图5示出了根据本发明实施例的由一维手写文字输入设备执行的一维手写文字输入方法的流程图。
图6示出了根据本发明实施例的图形用户界面的示例。
图7示出了根据本发明实施例的将一维字符手势对应的一维信号转换为待处理的一维笔划的方法400的流程图。
图8示出了基于待处理的一维笔划和字符模板数据库中存储的每个字符的模板将该一维笔划识别为相应字符的方法500的流程图。
图9示出了在词汇输入模式下,利用贝叶斯方法和语言模型预测输入的词汇的示例性方法600的流程图。
图10(a)、(b)、(c)示出了可作为图1的一维手写文字输入设备实现环境的现实设备的示例。
具体实施方式
为了使本领域技术人员更好地理解本发明,下面结合附图和具体实施方式对本发明作进一步详细说明。
下面说明一下本文中使用的术语的含义。
“一维字符手势”,在一条直线上的往复划动,只有在该直线方向上的长度和方向信息,没有与该直线方向垂直的深度和高度信息。即便所做出的手势不是严格的在一条直线上进行的,也只提取在直线方向上的长度和方向信息,而忽视在深度和高度上的信息。
“一维信号”指在坐标系统中仅在一个维度上有值的信号。
“一维笔划”为按照预定顺序的一个或多个子笔划的集合,各个子笔划为基本在一条直线上的线段。
“单个笔划”:单个连续的笔划,在一维字符手势做出过程中,用户手指或者描画工具不离开用户输入界面。
一、一维手写文字输入设备
图1示出了根据本发明实施例的一维手写文字输入设备100的简化结构框图。
如图1所示,一维手写文字输入设备100可以包括:用户输入界面110、检测单元120、字符模板数据库130、识别单元140。
图10给出了可作为图1的一维手写文字输入设备实现环境的现实设备的示例,其中图10(a)为智能眼镜、图10(b)为智能手机、图10(c)为智能手表。
回到图1,用户输入界面110供用户以手指或描画工具手动以接触该用户输入界面的方式按时间顺序做出一维字符手势,所谓一维字符手势基本为在一条直线上的往复划动。
这里用户输入界面110的例子可以为智能眼镜的镜腿的侧面、智能手机的侧边框、智能手表的表盘边框等。用户输入界面110可以近似为一条直线。不过作为扩展，在例如手环表面上沿着环形的滑动虽然不是直线的，但是该环形是可以展开为直线的，本发明实施例的一维字符手势也可以在这样的环线上进行，因此这样的环形用户界面也在本发明的涵盖范围之内。
需要说明的是,这里的“基本为在一条直线上的往复划动”表示一维字符手势即使不是严格地在一条直线上,但是只要实质上在一条直线上,也适用于本发明。例如,如第一子笔划在一条直线A沿一个方向划动,第二子笔划为从第一子笔划末尾起往回划动,第二子笔划所对应的线段即使不是严格位于直线A上,也属于本发明这里所说的“一维字符手势”,也仅被提取在一个维度上的信号(沿着直线A的信号),而不被提取另一维度(例如与直线A垂直的轴的信号)的信号。这是符合用户实际动作情况的,即本发明允许用户手指在单个坐标轴上有轻微的偏离。
这里需要说明的是，对一维字符手势的起点和终点等位置没有限制性要求，正如“手写”的特性所决定的，文字识别的结果只和手势的过程和手势的样态有关。
在一个示例中,一个字符对应的一维字符手势对应于单个笔划,单个笔划为单个连续的笔划,在一维字符手势做出过程中,用户手指或者描画工具不离开用户输入界面。
检测单元120配置为检测用户在用户输入界面上做出的一维字符手势，将该一维字符手势转为一维信号，所述一维信号为在坐标系统中仅在一个维度上有值的信号。检测单元120例如包括压力传感器和模数转换器等器件，用于将用户输入的一维字符手势转换为计算设备能够处理的一维信号。
字符模板数据库130配置为存储每个字符的模板,每个字符的模板对应于一维笔划,为按照预定顺序的一个或多个子笔划的集合,各个子笔划为基本在一条直线上的线段。后面将对字符模板的示例给予详细介绍。
识别单元140配置为接收来自检测单元的该一维信号,将该一维信号转换为待处理的一维笔划,基于此待处理的一维笔划和字符模板数据库中存储的每个字符的模板,将该一维笔划识别为相应字符。
优选地，一维手写文字输入设备还包括显示单元150，可以用于按与一维手写手势所包含的子笔划的时间顺序对应的空间顺序显示各个子笔划，和/或显示识别出的文本（字符和/或单词），和/或在需要用户在候选中进行选择的情况下显示候选。
二、字符模板——一维字符手势
发明人经过选取大量参与人进行字符模板书写实验,从易于记忆、易于学习、输入正确率高、输入效率高的多个层面考虑,设计了多种字符模板。
图2示出了根据本发明一个实施例的字符模板的各个字符的子笔划构成图。
图2中,对于每个字符,将构成其的子笔划按书写顺序展开,这些子笔划的集合构成了对应字符的一维字符手势。
如图2所示，子笔划具有方向和长度，长度共有短、中、长三种。图中的箭头表示每个子笔划的方向。一个字符的一维字符手势是模拟实际字符书写过程中的二维字符手写手势设计出来的，相比于基于编码的设计，基于手写的一维字符手势设计具有易于识别和易于记忆的优点。例如，字符a在书写时是先下行再上行再下行，因此对应的子笔划为下行、上行、下行的三个子笔划，且考虑实际书写时的三次手势行程接近，因此设计该三个子笔划长度相等。另外，对于某些二维书写在一维上的投影类似的字符，为了进行区分，进行了小调整，即在手势结尾附加短笔划或者“点”：例如字符“q”和“y”在一维上的投影类似，都是中下行、中上行、长下行，为此在字符q的手势后加了一个短上行；再如字符“o”和“v”，在字符v的手势后加了一个“点”进行区分。
在由多人对图2所示的一维字符手势在智能眼镜上的文字输入实践中发现，对于上述为进行区分进行的小调整（手势结尾处的短笔划或者点），用户难于记住。另外，从输入准确度角度，用户难于区分三种子笔划长度。图3示出了用户实际做出的手势中归一化后的短、中和长子笔划的分布。可见，三种子笔划长度之间存在较多的重叠。此外，用户偶尔在笔划的开头和结尾无意识地做出小勾画（hook），使得一维字符手势的识别准确率降低。
针对上述问题，发明人对一维字符手势的设计进行了优化，包括以下中的一项或多项：不再在手势的末尾进行上述小调整（即不再在手势末尾附加短笔划或者“点”），也就是允许同一个一维字符手势对应于不止一个字符，或者反过来说，多个字符对应于同一个一维字符手势（例如后面描述的图4中的字符q和y对应于同一手势，e和z、c和i等）；仅设计两种笔划长度；去除一些细小的子笔划，例如传统的n、m和p（小写形式）手写体以一个后向小笔划（朝向字符主体的小笔划）开始，传统的d的手写体以一个向后的小笔划（远离字符主体）结束，为了简化和增强可用性，从这些字符的一维字符手势中去除了这些小笔划。
图4示出了根据本发明另一实施例的字符模板的各个字符的子笔划构成示意图。图4中仅仅伴随每个字母的带箭头的直线段或曲线段指示该字母的传统二维手写笔顺,其中的圆圈表示手写开始点,每个字母的右侧示出了对应的一维字符手势,其应为在一直线轴上的往复划动,但为易于理解和可视化,在与直线轴垂直的方向上进行了展开。
图4的字符模板集合的设计主要考虑如下指导规则:
(1)模拟传统二维手写手势,以便容易学习和记忆;
(2)最小化其中含有的子笔划的长度,使得用户能够准确高效地做出字符手势;
(3)最小化字符中子笔划的数目,以便高效地执行字符手势;
(4)单笔划输入,单个笔划指单个连续的笔划,即在一个字符的一维字符手势做出过程中,用户手指或者描画工具不离开用户输入界面,以便用户高效地执行字符手势。
如图4所示，每个一维字符手势中最多有两种子笔划长度。设计了13种字符手势，被映射到26个字母；为保持简洁高效，每个一维字符手势不会与超过四个字符相关联。对于一个字符，如果将其二维手写手势直接映射到一维空间不存在优选映射，则根据字符的性质进行了不同的处理：例如，对于字母“e”、“z”和“s”，与其他字母不同，不是直接进行二维手写手势到一维空间中的映射，而是首先向左旋转90度，然后再将二维手写手势映射到一维空间；对于字母x，将同样具有两个子笔划的字母n的手势分配给它；此外如前所述，字母n、m和p（小写形式）的传统手写体以一个后向小笔划（朝向字符主体的小笔划）开始，传统的d的手写体以一个向后的小笔划（远离字符主体）结束，为了简化和增强可用性，从这些字符的一维字符手势中去除了这些小笔划。
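为便于理解上述模板设计，下面给出一段示意性的Python草案，说明字符模板的一种可能的数据化表示：每个子笔划用（方向, 长度）元组表示，方向以+1/-1表示沿直线轴的正反向，长度仅取两种预定值；一个手势允许对应多个字符。其中的具体模板内容与函数名均为说明性假设，并非图4设计的完整实现。

```python
# 示意性草案：用(方向, 长度)元组序列表示一维字符手势模板。
# 方向+1表示沿直线正向划动，-1表示反向；长度仅取两种预定值(0.5与1.0)。
# 以下映射内容仅为说明数据结构而假设，并非图4的完整字符模板集合。
TEMPLATES = {
    # 手势(子笔划序列) -> 对应的一个或多个字符（允许一对多）
    ((+1, 1.0), (-1, 1.0), (+1, 1.0)): ["u", "s", "a"],
    ((+1, 0.5), (-1, 0.5)): ["c", "i"],
    ((+1, 1.0),): ["l"],
}

def characters_for_gesture(gesture):
    """返回与给定子笔划序列完全匹配的全部候选字符。"""
    return TEMPLATES.get(tuple(gesture), [])
```

例如，查询“下划、上划、下划”三个等长子笔划构成的手势，即可得到其全部候选字符，供后续在候选中进行选择。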
上面的字符模板数据库为本发明优选实施例，但并不作为限制，可以根据需要对字符模板进行更改、替换、增加或删除。
三、一维手写文字输入方法
手写文字输入可以包括字符输入和词汇输入，对应的文字识别也包括字符识别和词汇识别，其中字符输入/识别是词汇输入/识别的基础。下面先描述字符识别，再描述词汇识别。
下面参考附图5描述根据本发明实施例的由一维手写文字输入设备执行的一维手写文字输入方法200,一维手写文字输入设备包括用户输入界面和字符模板数据库。
在步骤S210中,响应于用户以手指或描画工具手动以接触用户输入界面的方式按时间顺序做出一维字符手势,检测用户在用户输入界面上做出的一维字符手势,将该一维字符手势转为一维信号,所述一维信号为在坐标***中仅在一个维度上有值的信号,所谓一维字符手势基本为在一条直线上的往复划动。
在步骤S220中,接收来自检测单元的该一维信号,将该一维信号转换为待处理的一维笔划,基于此待处理的一维笔划和字符模板数据库中存储的每个字符的模板,将该一维笔划识别为相应字符,其中每个字符的模板对应于一维笔划,为按照预定顺序的一个或多个子笔划的集合,各个子笔划为基本在一条直线上的线段。
这里,字符模板数据库可以采用例如如参考附图4描述的字符模板数据库。
在一个示例中,一维手写文字输入设备还包括显示单元。所述显示单元可以是例如液晶屏幕、也可以是头戴式微型显示屏(例如Google的头戴式微型显示屏,可以通过投影形成虚拟屏幕)等等。
一维手写文字输入方法还可以包括：借助于显示单元，按与一维手写手势所包含的子笔划的时间顺序对应的空间顺序显示各个子笔划，以及显示识别出的文本（字符和/或单词），并且在需要用户在候选中进行选择的情况下显示候选。
在一个示例中，所述一维手写文字输入设备可以工作于字符输入模式或词汇输入模式。例如文字识别设备可以默认工作于词汇输入模式，用户通过执行输入模式切换操作可以切换到字符输入模式。作为示例，输入模式切换操作可以是手指在用户输入界面上停留时间超过预定阈值，例如超过300毫秒，这样用户可以无需抬起手指即在字符输入模式和词汇输入模式之间切换。根据需要，可以设计其他的输入模式切换操作。
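作为上述输入模式切换操作的一个示意性实现（以停留超过300毫秒为例，事件表示与函数名均为说明性假设），可以用如下Python草案检测手指停留：

```python
DWELL_THRESHOLD_MS = 300  # 正文示例阈值：停留超过300毫秒即触发模式切换

def detect_mode_switch(touch_events, threshold_ms=DWELL_THRESHOLD_MS):
    """touch_events为按时间排序的(时间戳ms, 一维位置)序列。
    若手指在某位置保持不动持续超过阈值，返回True表示触发输入模式切换。"""
    if not touch_events:
        return False
    start_t, start_x = touch_events[0]
    for t, x in touch_events[1:]:
        if x != start_x:           # 手指移动，重新开始计时
            start_t, start_x = t, x
        elif t - start_t > threshold_ms:
            return True
    return False
```

实际设备中对“位置不变”的判断应容忍轻微抖动（例如以若干像素为容差），此处为简明起见按位置完全不变处理。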
图6示出了根据本发明实施例的图形用户界面的示例,在一维手写文字输入设备是Google眼镜的情况下,该图形用户界面可以显示在Google眼镜的虚拟屏幕上。
图形用户界面300被划分为三个区域:文本区域310、字符区域320和词汇区域330。文本区域310显示输入文本,字符区域320显示识别出的字符候选,词汇区域330显示识别的词候选。
此外,在一个示例中,还在所述文本区域显示一维字符手势的二维可视反馈,其中所述文本区域的高度映射到用户输入界面在一维字符手势的一维方向上的全部长度(例如,文本区域的高度对应于智能眼镜镜腿的可触摸长度范围),所述文本区域的水平方向表示时间轴,所述手写识别方法还包括:在所述文本区域上沿时间轴显示一维字符手势,从而提供一维字符手势的二维可视反馈。例如图6中所示的手势(即下划、上划、下划)对应于字符u、s和a,因此在文本区域310中间将该一维空间上的下划、上划和下划按照时间顺序展开显示(形状类似于大写字母“N”),该展开显示可以为半透明形式叠加在文本区域,由此在用户输入期间提供输入笔划的二维可视反馈指导,同时在字符区域320显示候选字符“u”、“s”和“a”;以及在词汇区域330显示候选词汇“is”、“la”、“can”、“last”和“late”,且高亮显示第一候选“is”,对于候选词汇中已输入的字符的显示亮度高于预测的其他字符的显示亮度(例如候选词汇“can”中字符“c”和“a”的显示亮度大于字符“n”的显示亮度)。此时,响应于检测单元检测到用户做出的选择手势,例如以两指轻拍用户输入界面,而选择当前高亮的词汇(“is”)。替代地,响应于检测单元检测到用户做出的移动手势,例如,两指在用户输入界面上移动,处理器切换被高亮的字符候选(例如移动到下一候选“la”)。
在一个示例中,在选择了一个词汇后,一维手写文字输入设备自动在词汇之后添加一个空格。
在一个示例中,所述一维手写文字输入方法还包括:响应于检测单元检测到用户做出的字符分隔手势,将连续的一维字符手势区分开,以结束前一字符的输入,并准备接收下一字符的输入。在一个示例中字符分隔手势可以为手指停留时间超过预定阈值、单击、下压等等。
在一个示例中，一维字符手势对应于单个笔划，单个笔划为单个连续的笔划，在一维字符手势做出过程中，用户手指或者描画工具不离开用户输入界面，由此提高字符输入的效率。
在一个示例中,各个字符的模板的笔划的子笔划具有长度和方向,其中长度为两种预定长度之一。如前所述,这样可以提高输入的准确性和效率。
在一个示例中,所述一维字符手势与实际二维字符手写走势过程相似,是对二维字符手写手势在一维空间上的投影。
下面参考图7描述根据本发明实施例的将一维字符手势对应的一维信号转换为待处理的一维笔划的方法400。
在步骤410中,识别拐点,拐点指该一维笔划形成过程中随时间进行笔划行进方向发生变化的点,拐点包括笔划的开始点和结束点。
在步骤420中,去除拐点中距离时间上在前拐点小于预定像素长度的拐点。
此步为去除作为噪声的拐点。根据字符模板的设计,两个拐点之间的距离不应过短,因此如果两个相邻拐点之间距离小于预定阈值,例如小于20个像素,则可以视后面的拐点为噪声,将其去除。
在步骤430中,基于剩余的拐点划分线段。
即,时间上相邻的两个拐点即构成线段,也即候选的子笔划。
在步骤440中,去除掉首尾线段中长度小于预定阈值的线段,对剩余线段进行归一化,得到归一化后的线段的集合作为按照时间顺序的子笔划的序列。
如前所述,此步中的去除掉首尾线段中长度小于预定阈值的线段,旨在去除用户偶尔在笔划的开头和结尾无意识的做出小勾画(hook)。
这里对线段进行归一化可以通过将各个线段的长度除以最长的子笔划的长度来进行。这里最长的子笔划可以是例如上述各个线段中最长的线段。
经过上述处理,得到了线段的集合,即按照时间顺序排列的子笔划的序列,作为用户做出的一维字符手势(本文中也称之为一维笔划)的数据化表示。
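上述方法400的各步骤可以用如下示意性的Python草案表达，其中阈值min_turn_px与min_end_px仅为说明性假设（正文中以20个像素为拐点间最小距离的示例）：

```python
def to_substrokes(points, min_turn_px=20, min_end_px=10):
    """将一维信号(按时间排序的一维坐标序列)转换为归一化子笔划序列。
    步骤对应方法400：识别拐点 -> 去除过近拐点 -> 划分线段
    -> 去除过短的首尾线段 -> 归一化。阈值为示例性假设。"""
    if len(points) < 2:
        return []
    # 步骤410：识别拐点（行进方向发生变化的点，含开始点与结束点）
    turns = [0]
    for i in range(1, len(points) - 1):
        if (points[i] - points[i - 1]) * (points[i + 1] - points[i]) < 0:
            turns.append(i)
    turns.append(len(points) - 1)
    # 步骤420：去除与时间上在前拐点距离过近的拐点（视为噪声）
    kept = [turns[0]]
    for idx in turns[1:]:
        if abs(points[idx] - points[kept[-1]]) >= min_turn_px:
            kept.append(idx)
    # 步骤430：基于剩余拐点划分线段（有向长度，正负表示方向）
    segments = [points[b] - points[a] for a, b in zip(kept, kept[1:])]
    # 步骤440：去除首尾的过短线段（用户无意识的小勾画）
    while segments and abs(segments[0]) < min_end_px:
        segments.pop(0)
    while segments and abs(segments[-1]) < min_end_px:
        segments.pop()
    if not segments:
        return []
    # 以最长线段的长度归一化
    longest = max(abs(s) for s in segments)
    return [s / longest for s in segments]
```

返回序列中数值的正负表示子笔划方向，绝对值为以最长线段长度归一化后的长度。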
在得到一维笔划后，如图5中步骤S220所示，基于此待处理的一维笔划和字符模板数据库中存储的每个字符的模板，将该一维笔划识别为相应字符。
可以利用各种机器学习方法来进行该识别，例如模板匹配方法：直接在特征空间中计算一维笔划对应的特征向量与每个字符的模板对应的特征向量之间的距离，选择距离最小的一个作为识别结果（或者选择多个作为识别结果候选）。
根据本发明一个实施例,可以利用贝叶斯方法来求得用户所做一维字符手势对应于某个字符的概率。
图8示出了基于待处理的一维笔划和字符模板数据库中存储的每个字符的模板,将该一维笔划识别为相应字符的方法500的流程图。
在图8所示的方法中,假设待识别的一维笔划中所包含的子笔划数目为n,仅针对所包含子笔划数目同样等于n的字符的模板来计算概率,而对于子笔划数目不等于n的字符的模板,认为待识别的一维笔划为该字符的概率为零。
如图8所示,在步骤S510中,确定该一维笔划所包含的子笔划的数目。然后,前进到步骤S520。
在步骤S520中,检索字符模板数据库,确定模板中子笔划数目等于所述确定的该一维笔划所包含的子笔划的数目的模板。然后,前进到步骤S530。
在步骤S530中,对于所确定的模板中的每个,按照构成顺序,确定该模板中的每个子笔划呈现为所述一维笔划中的对应子笔划的概率,计算所述一维笔划对应于该模板的概率。
设待处理的一维笔划为S，构成其的子笔划为S1,S2…Sn，设26个字符模板为T[1],T[2]…T[j]…T[26]，构成字符模板T[j]的子笔划为T[j]1,T[j]2…T[j]n。则需要计算后验概率P(T[j]/S)。根据贝叶斯公式，下面公式(1)成立：
P(T[j]/S)=P(T[j],S)/P(S)=(P(T[j])×P(S/T[j]))/P(S)          (1)
概率P(S)对于各个模板的后验概率P(T[j]/S)的计算是共有的，因此对各模板之间的比较的影响是相同的，无需考虑该项。概率P(T[j])即某个字符出现的先验概率可由预先统计获得，即是已知的。在一个示例中，为简化，可认为各个字符的出现概率是相等的。我们认为各个子笔划之间是独立的，则公式(2)成立：
P(S/T[j])=P(S1/T[j]1)×P(S2/T[j]2)×…×P(Sn/T[j]n)          (2)
其中概率P(Si/T[j]i)，即模板T[j]的第i个子笔划T[j]i呈现为用户一维字符手势S中的子笔划Si的概率，可以由事先统计获得，由此能够计算得到当用户做出一维字符手势S时该一维字符手势表示字符T[j]的概率P(T[j]/S)。
由此得到了一维笔划对应于各个模板的概率。
步骤S530完成后,前进到步骤S540。
在步骤S540中,基于所述一维笔划对应于各个模板的概率,将所述一维笔划识别为相应的字符。
例如,确定最大概率对应的模板,将所述一维笔划识别为该模板,换句话说,识别出用户所做的一维字符手势对应的字符。
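方法500的计算过程可以用下面的Python草案示意。其中P(Si/T[j]i)在正文中由事先统计获得，这里以一个简单的指数衰减函数近似代替（sigma等参数与函数名均为说明性假设，并非本发明的正式实现）：

```python
import math

def substroke_prob(template_sub, observed_sub, sigma=0.2):
    """P(Si/T[j]i)的示意性近似：方向不符记为0，
    长度差异按指数衰减；实际系统中应由事先统计获得。"""
    t_dir, t_len = template_sub
    if (observed_sub > 0) != (t_dir > 0):
        return 0.0
    return math.exp(-abs(abs(observed_sub) - t_len) / sigma)

def recognize(stroke, templates, priors=None):
    """按式(1)(2)计算各模板的后验概率（忽略共有项P(S)），
    返回按概率降序排列的字符候选。
    templates: {手势元组: [字符,...]}；stroke: 归一化子笔划序列。"""
    scores = []
    for gesture, chars in templates.items():
        if len(gesture) != len(stroke):   # 仅考虑子笔划数目相同的模板
            continue
        p = 1.0
        for t_sub, s_sub in zip(gesture, stroke):
            p *= substroke_prob(t_sub, s_sub)
        prior = 1.0 if priors is None else priors.get(chars[0], 1.0)
        scores.append((p * prior, chars))
    scores.sort(key=lambda x: -x[0])
    return [c for _, chars in scores for c in chars]
```

返回的候选列表即可按概率从高到低显示在字符区域中，供用户通过切换与选择手势确定最终字符。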
在一个示例中,还包括显示单元,该显示单元的显示屏幕(真实存在的,或虚拟的显示屏幕)例如如图6所示配置,包括字符区域320和文本区域310,文本区域310显示输入文本,字符区域320用于在存在多个字符候选的情况下,显示预定数目个字符候选,例如图6中在字符区域320中显示3个字符候选。
在一个示例中,字符识别方法还包括:响应于检测到用户做出一维字符手势结束确定手势,显示所确定的与该一维字符手势对应的字符候选,所述一维字符手势结束确定手势指示一维字符手势结束。在一个示例中,一维字符手势结束确定手势可以为停顿手势,例如以智能眼镜为例,在作为用户输入界面的镜腿侧边框上,保持手指不动达预定时间。
接下来,响应于检测到用户做出字符选择手势,选择当前被高亮的候选字符,并将其显示在指示输入的文本的显示区域,或者响应于检测到用户做出候选字符切换手势,切换当前被高亮的候选字符并以高亮形式显示被切换为高亮的候选字符。在一个示例中,在默认情况下,例如前述计算的概率值最高的候选字符默认作为高亮的候选字符,被突出显示。候选字符切换手势可以为例如手指移动手势,每移动预定距离,则沿移动方向与当前高亮的候选字符相邻的候选字符被切换为高亮的候选字符。在一个示例中,字符选择手势可以为手指抬起离开用户输入界面的动作。
简言之，在智能眼镜环境下，字符输入的操作情境为：手指在眼镜腿上划动，当一个字符对应的一维手势结束时，用户手指保持不动，同时用户可以观察在显示屏幕上显示的各个候选字符，然后通过移动手指来切换高亮字符，最后通过抬起手指来选中一个字符，从而该字符显示在显示屏幕中的文本区域中。
在有些情况下，在一维字符手势结束时，手指处于用户输入界面的末端。仍以智能眼镜为例，一维字符手势为手指沿眼镜腿纵向的划动，此时很可能一个字符输入结束时手指停留在镜腿的两端（为便于描述，称之为端A和端B）中的端A，这时手指难以继续沿原方向（即朝向端A的方向）移动，而向端B方向的划动是容易的。为此，可以使得字符候选的显示顺序为便于在沿着从端A向端B的方向划动手指时进行字符切换的顺序：第一个高亮显示的为概率最高的，各候选字符按概率从高到低的顺序显示在显示屏幕上，当向端B移动手指时，次高概率的字符被高亮。
在一个示例中,可以通过字符删除手势来删除一字符,字符删除手势例如可以为向上滑动或者按下拍照按钮。依然以智能眼镜为例,经多个用户实验发现,用户更偏好按下拍照按钮这一手势,这可能是因为在镜腿上的向上滑动手势不是很容易进行。
上文描述了如何利用贝叶斯方法识别字符。下面参考图9描述在词汇输入模式下,利用贝叶斯方法和语言模型预测输入的词汇的示例性方法600。
在一个示例中，所述一维手写文字输入设备还包括语言模型数据库，存储有指示借由字符的概率分布，指派概率给由字符组成的词的信息。需要说明的是，这里的“包括”既涵盖语言模型数据库存储于一维手写文字输入设备本地的情况，也涵盖语言模型数据库分布于远程设备上的情况。
如图9所示,在步骤S610中,在词汇输入模式下,识别单元参考语言模型数据库实时计算在给定一维笔划序列条件下输入特定词汇的概率。
在本示例中,各个一维笔划之间已经分割好,例如是通过字符分隔符分割的。
设k个一维笔划的输入序列为I，即S[1],S[2],…S[i]…S[k]的序列，每个一维笔划对应于一个字符。设一个词汇W具有k个字母，即W=L1…Li…Lk，当给定一维笔划的输入序列I时，输入的为词汇W的概率可以用公式(3)表示：
P(W/I)=P(W,I)/P(I)            (3)
这里,P(I)是计算所有词汇W后验概率时的共有项,可以不予考虑。联合概率P(W,I)可以用公式(4)表示
P(W,I)=P(W)×P(I/W)             (4)
然后我们假定各个字符的输入彼此独立,则P(W,I)可以转换为公式(5)
P(W,I)=P(W)×P(S[1]/L1)×P(S[2]/L2)×…×P(S[k]/Lk)          (5)
以T(Li)表示字母Li的模板可得公式(6)
P(W,I)=P(W)×P(S[1]/T(L1))×P(S[2]/T(L2))×…×P(S[k]/T(Lk))          (6)
而P(S[i]/T(Li))可以利用上面的公式(2)得到。
由此可以计算得到与输入字符序列的字符个数相同的词汇的后验概率,如果没有找到字母数目匹配的词汇或者找到的字母数目匹配的词汇数目不足,可以计算具有更多字母的词汇的后验概率,这一般需要参考语言模型,有关参考语言模型来计算词汇的文献可参考Goodman等在IUI’02第194-195页上发表的题为“Language modeling for soft keyboard”的文章,这里不予赘述。
在步骤S620中,在计算得到一维笔划序列对应于各个词汇的概率之后,基于输入各个词汇的概率,对候选词汇进行排序,并显示在显示单元上,供用户选择,例如图6所示的词汇区域330。
前面参考附图描述了根据本发明实施例的一维手写文字输入方法和设备,提供了一种特别适合于在相对狭长的一维输入空间上进行输入的文字输入方法和设备。
根据本发明优选实施例,提供了单个字符以单个笔划表示的手段,由此提供了高效的输入方式。
根据本发明优选实施例,通过对二维字符书写手势的模拟和对子笔划长度种类的确定,设计了便于学习、高效的一维书写手势,提供了一种高效、准确的文字输入方法和输入设备。
前述描述仅为说明性的,旨在向本领域技术人员以完全且易于理解的方式传达本发明的发明理念。前述描述不应作为本发明的限制,本领域技术人员可以在本发明的发明理念基础上对一些细节、手段、器件等进行选择、更改或补充。
在上文的描述中,以英文字母和单词输入为例说明了本发明实施例的一维手写文字输入设备和方法,不过本发明的一维手写文字输入设备和方法也适于其他语言文字的输入,只要能够将二维手写手势映射到一维手写手势即可。
另外，前面的例子都是在近似一维的输入空间上举例说明本发明实施例的应用，其实本发明实施例当然也适用于二维的输入空间：例如在当前的手机触摸屏、笔记本电脑的触摸屏等上，也可以做出一维字符手势（此时检测单元也仅提取一维字符手势对应的一维信号），这可以将更多的空间留给例如屏幕显示之用。
另外，这里以手指或描画工具与用户输入界面接触为例说明用户输入的形式，其实本发明并不局限于此，用户也可以以非接触方式做出一维字符手势：例如用户手持发射激光的激光笔做出往复划动的一维字符手势，只要检测单元能检测这样的激光轨迹，也就能获得用户的输入；再比如，用户以手指在空中做出往复划动的一维字符手势，检测单元例如通过图像处理识别出这样的手势，则也可以完成用户手写输入的获取。
前面主要以智能眼镜作为本发明输入方法的应用例子,不过这仅为示例,本发明输入方法可以应用于智能手表、智能手环、智能手机等设备。
前述描述中，用户的一维字符手势以手指做出，不过这仅为示例，可以根据需要选择用户的其他身体部位，例如脚趾、腕部等等。另外，也可以不利用人体部位，而利用外部工具来进行输入，例如手写笔等。
根据本发明另一实施例,提供了一种一维手写文字输入设备,包括:用户输入界面,供用户以身体部位或工具手动以接触该用户输入界面的方式按时间顺序做出一维字符手势;检测单元,配置为检测用户在用户输入界面上做出的一维字符手势,将该一维字符手势转为一维信号,所述一维信号为在坐标***中仅在一个维度上有值的信号;字符模板数据库,配置为存储每个字符的模板,每个字符的模板为对所述一维信号进行统计性特征抽取获得的特征向量;识别单元,配置为接收来自检测单元的该一维信号,将该一维信号转换为待处理的特征向量,基于此待处理的特征向量和字符模板数据库中存储的每个字符的模板,识别得到相应字符。
根据一个实施例,一维字符手势可以为下列项目之一:在所述用户输入界面上进行的基本在一条直线上的往复划动;在所述用户输入界面上进行的单点下压;以及在所述用户输入界面上进行的一维角坐标系上的转动。
根据本发明另一实施例，提供了一种一维手写文字输入设备，包括：用户输入检测单元，获得用户按时间顺序做出的一维字符手势，所谓一维字符手势基本为在一条直线上的往复划动，并将该一维字符手势转为一维信号，所述一维信号为在坐标系统中仅在一个维度上有值的信号；字符模板数据库，配置为存储每个字符的模板，每个字符的模板对应于一维笔划，为按照预定顺序的一个或多个子笔划的集合，各个子笔划为基本在一条直线上的线段；识别单元，配置为接收来自检测单元的该一维信号，将该一维信号转换为待处理的一维笔划，基于此待处理的一维笔划和字符模板数据库中存储的每个字符的模板，将该一维笔划识别为相应字符。
所述一维字符手势可以是用户身体部位或借用工具以接触方式在用户输入界面上做出的,也可以是以非接触方式做出的,所述用户检测单元可以包括图像拍摄和处理装置、压力检测装置、红外检测装置等等。
以上已经描述了本发明的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。因此,本发明的保护范围应该以权利要求的保护范围为准。

Claims (31)

  1. 一种一维手写文字输入设备,包括:
    用户输入界面,供用户以身体部位或描画工具手动以接触该用户输入界面的方式按时间顺序做出一维字符手势,所谓一维字符手势基本为在一条直线上的往复划动;
    检测单元,配置为检测用户在用户输入界面上做出的一维字符手势,将该一维字符手势转为一维信号,所述一维信号为在坐标***中仅在一个维度上有值的信号;
    字符模板数据库,配置为存储每个字符的模板,每个字符的模板对应于一维笔划,为按照预定顺序的一个或多个子笔划的集合,各个子笔划为基本在一条直线上的线段;
    识别单元，配置为接收来自检测单元的该一维信号，将该一维信号转换为待处理的一维笔划，基于此待处理的一维笔划和字符模板数据库中存储的每个字符的模板，将该一维笔划识别为相应字符。
  2. 根据权利要求1的一维手写文字输入设备,还包括:
    显示单元;
    其中,字符模板数据库中存储的至少一个字符模板对应于多个字符,
    其中,当处理器识别一维笔划对应于多个字符作为字符候选时,处理器执行处理使得在显示器上显示该多个字符候选,其中在任一时刻多个字符候选之一被高亮显示;以及
    响应于检测单元检测到用户做出的选择手势,处理器选择当前被高亮显示的字符候选作为字符识别结果;以及响应于检测单元检测到用户做出的移动手势,处理器切换被高亮的字符候选。
  3. 根据权利要求1或2的一维手写文字输入设备,响应于检测单元检测到用户做出的字符分隔手势,处理器将连续的一维字符手势区分开,以结束前一字符的输入,并准备接收下一字符的输入。
  4. 根据权利要求3的一维手写文字输入设备,所述字符分隔手势为停顿手势,停顿手势对应于用户手指或描画工具在用户输入界面上停顿时间超过预定阈值。
  5. 根据权利要求1或2的一维手写文字输入设备，所谓一维字符手势对应于单个笔划，单个笔划为单个连续的笔划，在一维字符手势做出过程中，用户手指或者描画工具不离开用户输入界面。
  6. 根据权利要求1或2的一维手写文字输入设备,各个字符的模板的笔划的子笔划具有长度和方向,其中长度为两种预定长度之一。
  7. 根据权利要求1或2的一维手写文字输入设备,所述一维字符手势与实际二维字符手写走势过程相似。
  8. 根据权利要求7的一维手写文字输入设备,字符模板数据库中存储的字符模板中的至少一个对应于多个字符。
  9. 根据权利要求1或2的一维手写文字输入设备,对于特定字符的字符模板,首先将该字符旋转90度,再将二维手写手势映射到一维字符手势。
  10. 根据权利要求1或2的一维手写文字输入设备,所述处理器将该一维信号转换为待处理的一维笔划包括:
    识别拐点,拐点指该一维笔划形成过程中随时间进行笔划行进方向发生变化的点,拐点包括笔划的开始点和结束点;
    去除拐点中距离时间上在前拐点小于预定像素长度的拐点;
    基于剩余的拐点划分线段;
    去除掉线段中长度小于预定阈值的线段,对剩余线段进行归一化,得到归一化后的线段的集合作为按照时间顺序的子笔划的序列。
  11. 根据权利要求1或2的一维手写文字输入设备,所述基于此待处理的一维笔划和字符模板数据库中存储的每个字符的模板,将该一维笔划识别为相应字符包括:
    确定该一维笔划所包含的子笔划的数目;
    检索字符模板数据库,确定模板中子笔划数目等于所述确定的该一维笔划所包含的子笔划的数目的模板;
    对于所确定的模板中的每个,按照构成顺序,确定每个子笔划呈现为所述一维笔划中的对应子笔划的概率,计算给定该模板呈现为所述一维笔划的概率;以及
    基于各个模板呈现为所述一维笔划的概率,将所述一维笔划识别为相应的字符。
  12. 根据权利要求11的一维手写文字输入设备,还包括显示单元,配置为在存在多个字符候选的情况下,以可视形式显示预定数目个字符候选,
    响应于检测到用户做出一维字符手势结束确定手势,显示所确定的与该一维字符手势对应的字符候选,所述一维字符手势结束确定手势指示一维字符手势结束,以及
    响应于检测到用户做出字符选择手势,选择当前被高亮的候选字符,并将其显示在指示输入的文本的显示区域,或者
    响应于检测到用户做出候选字符切换手势,切换当前被高亮的候选字符并以高亮形式显示被切换为高亮的候选字符。
  13. 根据权利要求12的一维手写文字输入设备,其中所述候选字符切换手势为手指沿着从当前被高亮的候选字符到要选择的候选字符的方向上的移动,
    在一维字符手势结束点接近用户输入界面在一维方向上的第一边界的情况下,将候选字符按照概率从高到低的顺序沿着从第一边界到第二边界的方向进行显示。
  14. 根据权利要求13的一维手写文字输入设备,所述一维字符手势结束确定手势为保持手指在用户输入界面上不动达预定时间,所述字符选择手势为将手指从用户输入界面上抬起,所述候选字符切换手势为移动手指达预定距离。
  15. 根据权利要求11的一维手写文字输入设备,还包括:
    语言模型数据库,存储有指示借由字符的概率分布,指派概率给由字符组成的词的信息;
    所述处理器在文字识别状态下,在字符输入模式和词输入模式之间切换,
    其中在词输入模式下,参考语言模型数据库实时计算在给定一维笔划序列条件下输入具体词汇的概率,
    基于输入具体词汇的概率,对候选词汇进行排序,并显示在显示单元上,供用户选择。
  16. 根据权利要求15的一维手写文字输入设备,在词汇模式下,显示单元的图形用户界面包括三个区域:
    文本区域,用于显示输入的文本;
    字符区域,用于显示识别的字符;以及
    词区域,用于显示识别的词候选。
  17. 根据权利要求16的一维手写文字输入设备，其中所述文本区域的高度映射到用户输入界面在一维字符手势的一维方向上的全部长度，所述文本区域的水平方向表示时间轴，以及
    在所述文本区域上沿时间轴显示一维字符手势,从而提供了一维字符手势的二维可视反馈。
  18. 根据权利要求16的一维手写文字输入设备,所述一维手写文字输入设备为智能可穿戴设备。
  19. 一种一维手写文字输入设备执行的一维手写文字输入方法,一维手写文字输入设备包括用户输入界面和字符模板数据库,所述一维手写文字输入方法包括:
    响应于用户以手指或描画工具手动以接触用户输入界面的方式按时间顺序做出一维字符手势,检测用户在用户输入界面上做出的一维字符手势,将该一维字符手势转为一维信号,所述一维信号为在坐标***中仅在一个维度上有值的信号,所谓一维字符手势基本为在一条直线上的往复划动;
    接收来自检测单元的该一维信号,将该一维信号转换为待处理的一维笔划,基于此待处理的一维笔划和字符模板数据库中存储的每个字符的模板,将该一维笔划识别为相应字符,其中每个字符的模板对应于一维笔划,为按照预定顺序的一个或多个子笔划的集合,各个子笔划为基本在一条直线上的线段。
  20. 根据权利要求19的一维手写文字输入方法,
    其中,字符模板数据库中存储的至少一个字符模板对应于多个字符,
    其中,当识别到一维笔划对应于多个字符作为字符候选时,执行处理使得在显示器上显示该多个字符候选,其中在任一时刻多个字符候选之一被高亮显示;以及
    响应于检测单元检测到用户做出的选择手势,选择当前被高亮显示的字符候选作为字符识别结果;以及响应于检测单元检测到用户做出的移动手势,处理器切换被高亮的字符候选。
  21. 根据权利要求19或20的一维手写文字输入方法,
    响应于检测单元检测到用户做出的字符分隔手势,将连续的一维字符手势区分开,以结束前一字符的输入,并准备接收下一字符的输入。
  22. 根据权利要求19或20的一维手写文字输入方法，所谓一维字符手势对应于单个笔划，单个笔划为单个连续的笔划，在一维字符手势做出过程中，用户手指或者描画工具不离开用户输入界面。
  23. 根据权利要求19或20的一维手写文字输入方法,各个字符的模板的笔划的子笔划具有长度和方向,其中长度为两种预定长度之一。
  24. 根据权利要求19或20的一维手写文字输入方法,所述一维字符手势与实际二维字符手写走势过程相似。
  25. 根据权利要求19或20的一维手写文字输入方法,所述将该一维信号转换为待处理的一维笔划包括:
    识别拐点,指该一维笔划形成过程中随时间进行笔划行进方向发生变化的点,拐点包括笔划的开始点和结束点;
    去除拐点中距离时间上在前拐点小于预定像素长度的拐点;
    基于剩余的拐点划分线段;
    去除掉首尾线段中长度小于预定阈值的线段,对剩余线段进行归一化,得到归一化后的线段的集合作为按照时间顺序的子笔划的序列。
  26. 根据权利要求19或20的一维手写文字输入方法,所述基于此待处理的一维笔划和字符模板数据库中存储的每个字符的模板,将该一维笔划识别为相应字符包括:
    确定该一维笔划所包含的子笔划的数目;
    检索字符模板数据库,确定模板中子笔划数目等于所述确定的该一维笔划所包含的子笔划的数目的模板;
    对于所确定的模板中的每个,按照构成顺序,确定该模板中的每个子笔划呈现为所述一维笔划中的对应子笔划的概率,计算所述一维笔划对应于该模板的概率;以及
    基于所述一维笔划对应于各个模板的概率,将所述一维笔划识别为相应的字符。
  27. 根据权利要求19或20的一维手写文字输入方法,还包括:
    语言模型数据库,存储有指示借由字符的概率分布,指派概率给由字符组成的词的信息;
    所述处理器在文字识别状态下,在字符输入模式和词输入模式之间切换,
    其中在词输入模式下,参考语言模型数据库实时计算在给定一维笔划序列条件下输入特定词汇的概率,
    基于输入特定词汇的概率，对候选词汇进行排序，并显示在显示单元上，供用户选择。
  28. 根据权利要求19或20的一维手写文字输入方法,在词汇模式下,在显示单元的图形用户界面上:
    在文本区域上显示输入的文本;
    在字符区域上显示识别的字符;以及
    在词区域上显示识别的词候选。
  29. 根据权利要求28的一维手写文字输入方法,其中所述文本区域的高度映射到用户输入界面在一维字符手势的一维方向上的全部长度,所述文本区域的水平方向表示时间轴,
    所述手写识别方法还包括:
    在所述文本区域上沿时间轴显示一维字符手势,从而提供一维字符手势的二维可视反馈。
  30. 一种一维手写文字输入设备,包括:
    用户输入界面,供用户以身体部位或工具手动以接触该用户输入界面的方式按时间顺序做出一维字符手势;
    检测单元,配置为检测用户在用户输入界面上做出的一维字符手势,将该一维字符手势转为一维信号,所述一维信号为在坐标***中仅在一个维度上有值的信号;
    字符模板数据库,配置为存储每个字符的模板,每个字符的模板为对所述一维信号进行统计性特征抽取获得的特征向量,
    处理器，配置为接收来自检测单元的该一维信号，将该一维信号转换为待处理的特征向量，基于此待处理的特征向量和字符模板数据库中存储的每个字符的模板，识别得到相应字符。
  31. 根据权利要求30的一维手写文字输入设备,所述一维字符手势为下列项目之一:
    在所述用户输入界面上进行的基本在一条直线上的往复划动;
    在所述用户输入界面上进行的单点下压;以及
    在所述用户输入界面上进行的一维角坐标系上的转动。
PCT/CN2016/105694 2015-12-29 2016-11-14 一维手写文字输入设备和一维手写文字输入方法 WO2017114002A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201511009156.7A CN105549890B (zh) 2015-12-29 2015-12-29 一维手写文字输入设备和一维手写文字输入方法
CN201511009156.7 2015-12-29

Publications (1)

Publication Number Publication Date
WO2017114002A1 true WO2017114002A1 (zh) 2017-07-06

Family

ID=55829095

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/105694 WO2017114002A1 (zh) 2015-12-29 2016-11-14 一维手写文字输入设备和一维手写文字输入方法

Country Status (2)

Country Link
CN (1) CN105549890B (zh)
WO (1) WO2017114002A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046751A (zh) * 2019-11-22 2020-04-21 华中师范大学 公式识别方法和装置
CN111506185A (zh) * 2019-01-31 2020-08-07 珠海金山办公软件有限公司 对文档进行操作的方法、装置、电子设备及存储介质

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
CN105549890B (zh) * 2015-12-29 2019-03-05 清华大学 一维手写文字输入设备和一维手写文字输入方法
CN106598268B (zh) * 2016-11-10 2019-01-11 清华大学 文本输入方法和电子设备
CN109920309B (zh) * 2019-01-16 2023-02-03 深圳壹账通智能科技有限公司 手语转换方法、装置、存储介质和终端
US11113517B2 (en) * 2019-03-20 2021-09-07 Microsoft Technology Licensing, Llc Object detection and segmentation for inking applications
CN110377914B (zh) * 2019-07-25 2023-01-06 腾讯科技(深圳)有限公司 字符识别方法、装置及存储介质
CN112861709A (zh) * 2021-02-05 2021-05-28 金陵科技学院 一种基于简笔画的手绘草图识别方法

Citations (7)

Publication number Priority date Publication date Assignee Title
CN1845040A (zh) * 2006-03-21 2006-10-11 北京三桥科技有限公司 一种单键输入终端及其使用方法
CN101706689A (zh) * 2009-11-25 2010-05-12 福州福昕软件开发有限公司 通过方向键进行字符输入的方法和装置
CN101739118A (zh) * 2008-11-06 2010-06-16 大同大学 视讯手写文字输入装置及其方法
CN102314252A (zh) * 2010-06-30 2012-01-11 汉王科技股份有限公司 一种手写字符串的字符切分方法和装置
US20150220265A1 (en) * 2014-02-06 2015-08-06 Sony Corporation Information processing device, information processing method, and program
US20150301739A1 (en) * 2013-04-22 2015-10-22 Rajeev Jain Method and system of data entry on a virtual interface
CN105549890A (zh) * 2015-12-29 2016-05-04 清华大学 一维手写文字输入设备和一维手写文字输入方法

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US8199126B1 (en) * 2011-07-18 2012-06-12 Google Inc. Use of potential-touch detection to improve responsiveness of devices
US8319746B1 (en) * 2011-07-22 2012-11-27 Google Inc. Systems and methods for removing electrical noise from a touchpad signal
CN104063069A (zh) * 2014-07-03 2014-09-24 南京吉隆光纤通信股份有限公司 一种便捷的文字输入装置
CN104133559A (zh) * 2014-07-04 2014-11-05 浙江大学 一种用于触摸屏输入的候选词汇的显示方法


Cited By (4)

Publication number Priority date Publication date Assignee Title
CN111506185A (zh) * 2019-01-31 2020-08-07 珠海金山办公软件有限公司 对文档进行操作的方法、装置、电子设备及存储介质
CN111506185B (zh) * 2019-01-31 2023-09-29 珠海金山办公软件有限公司 对文档进行操作的方法、装置、电子设备及存储介质
CN111046751A (zh) * 2019-11-22 2020-04-21 华中师范大学 公式识别方法和装置
CN111046751B (zh) * 2019-11-22 2024-02-13 华中师范大学 公式识别方法和装置

Also Published As

Publication number Publication date
CN105549890A (zh) 2016-05-04
CN105549890B (zh) 2019-03-05

Similar Documents

Publication Publication Date Title
WO2017114002A1 (zh) 一维手写文字输入设备和一维手写文字输入方法
RU2702270C2 (ru) Обнаружение выбора рукописного фрагмента
US8570294B2 (en) Techniques for recognizing temporal tapping patterns input to a touch panel interface
US9881224B2 (en) User interface for overlapping handwritten text input
CN103294996B (zh) 一种3d手势识别方法
CN108700996B (zh) 用于多输入管理的***和方法
CN100587660C (zh) 一种手写字符预测识别的方法和装置
TWI382352B (zh) 視訊手寫文字輸入裝置及其方法
Magrofuoco et al. Two-dimensional stroke gesture recognition: A survey
JP2002203208A (ja) オンライン文字認識装置及び方法並びにコンピュータ読み取り可能な記憶媒体及びオンライン文字認識プログラム
JP2009543204A (ja) 手書き記号の認識方法及び装置
US10996843B2 (en) System and method for selecting graphical objects
JP2019220155A (ja) 手書き入力表示装置、手書き入力表示方法およびプログラム
CN109074224A (zh) 用于在字符串中***字符的方法以及相应的数字设备
CN111414837A (zh) 手势识别方法、装置、计算机设备及存储介质
CN107450717B (zh) 一种信息处理方法及穿戴式设备
JP5897726B2 (ja) ユーザインタフェース装置、ユーザインタフェース方法、プログラム及びコンピュータ可読情報記憶媒体
Enkhbat et al. Handkey: An efficient hand typing recognition using cnn for virtual keyboard
KR101559424B1 (ko) 손 인식에 기반한 가상 키보드 및 그 구현 방법
US11216691B2 (en) Input method and system for electronic device
JP4148867B2 (ja) 筆跡処理装置
KR20200103236A (ko) 수기에 기반한 입력을 디스플레이하기 위한 방법 및 장치
AU2020103527A4 (en) IPDN- Read Handwriting: Intelligent Process to Read Handwriting Using Deep Learning and Neural Networks
JP7392315B2 (ja) 表示装置、表示方法、プログラム
Kurosu Human-Computer Interaction. Interaction Technologies: 20th International Conference, HCI International 2018, Las Vegas, NV, USA, July 15–20, 2018, Proceedings, Part III

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16880792

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16880792

Country of ref document: EP

Kind code of ref document: A1