CN102520790A - Character input method based on image sensing module, device and terminal - Google Patents

Character input method based on image sensing module, device and terminal

Info

Publication number
CN102520790A
CN102520790A (application numbers CN2011103763311A / CN201110376331A)
Authority
CN
China
Prior art keywords
fingertip
gesture
movement trajectory
information
image sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011103763311A
Other languages
Chinese (zh)
Inventor
辛静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN2011103763311A priority Critical patent/CN102520790A/en
Priority to PCT/CN2012/075103 priority patent/WO2013075466A1/en
Publication of CN102520790A publication Critical patent/CN102520790A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/32Digital ink

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a character input method, device and terminal based on an image sensing module. The character input method comprises the following steps: acquiring movement trajectory information of a target fingertip through the image sensing module; and querying a pre-trained character model library according to the movement trajectory information, so as to convert the movement trajectory information into the corresponding character. The invention satisfies the personalized demands users place on handheld terminals while also enabling fast and accurate character input.

Description

Character input method, device and terminal based on an image sensing module
Technical field
The present invention relates to the field of mobile terminals, and in particular to a character input method, device and terminal based on an image sensing module.
Background technology
At present, mobile communication is developing by leaps and bounds, and people's requirements on the human-computer interaction functions of mobile terminals (for example, mobile phones) are rising accordingly. As an indispensable communication tool in people's daily life and work, the mobile phone is being upgraded towards ever greater intelligence and user-friendliness in order to satisfy increasingly demanding personalized needs.
Text editing and input technology has greatly changed the way people communicate ever since its birth; through it, people can edit short messages, input characters, and so on. At present there are many character input approaches, but each of them has certain limitations or defects. First, characters (or text) may be input by keypad editing: while editing text, the user must press keys with the thumb continuously. With this method the input efficiency is low, and the prolonged key pressing can, in serious cases, even cause "texting thumb"-like inflammation, greatly affecting the user's health.
In addition, since most current handheld terminals are equipped with a resistive or capacitive touchscreen, characters can be input by writing on the terminal screen with a finger or a stylus; the characters are recognized and generated through the pressure or capacitive sensing between the finger or stylus and the screen. However, this method requires the user's terminal to be equipped with a resistive or capacitive screen, and mobile phones without such a screen cannot input characters this way, so this method has certain limitations.
Furthermore, some handheld terminals achieve character input by sensing the user's actions with dedicated sensors: for example, the user wears special gloves and a sensor performs target tracking and detects the actions the user makes; or the user holds an electronic pen with special hardware that is tracked in the same way. However, all of these methods require additional hardware to be configured on the handheld terminal, which raises its cost, and the need for extra apparatus during character input makes them very inconvenient to use.
For this reason, how to provide a personalized, high-speed character input method has gradually attracted widespread attention.
Summary of the invention
The object of the present invention is to provide a character input method, device and terminal based on an image sensing module, which can satisfy the personalized demands users place on handheld terminals while also enabling fast and accurate character input.
In order to achieve the above object, the present invention adopts the following technical solution:
A character input method based on an image sensing module comprises:
obtaining movement trajectory information of a target fingertip through the image sensing module;
querying a pre-trained character model library according to said movement trajectory information, so as to convert said movement trajectory information into the corresponding character.
Preferably, before all of the above steps are carried out, said character input method based on an image sensing module further comprises:
acquiring target gesture contour image information through the image sensing module;
querying a pre-trained gesture skeleton model library according to said target gesture contour image information to obtain the matching gesture skeleton model;
tracking the target fingertip according to said gesture skeleton model.
Preferably, before querying the pre-trained gesture skeleton model library according to said target gesture contour image information to obtain the matching gesture skeleton model, the method further comprises:
binarizing the collected target gesture contour image information; and/or,
applying image sharpening to the collected target gesture contour image information; and/or,
applying image smoothing to the collected target gesture contour image information.
Preferably, the step of tracking the target fingertip according to said gesture skeleton model comprises: mapping the fingertip coarse-positioning rectangle of the referenced gesture skeleton model onto the extracted target gesture contour image to obtain the rectangular region containing the fingertip of the target gesture contour image;
dividing the finger contour within said rectangular region into equal-length segments and calculating the edge curvature of each finger contour segment;
taking the finger contour segment with the maximum edge curvature as the target fingertip.
Preferably, the method of obtaining the movement trajectory information of the target fingertip through the image sensing module comprises:
performing tracking prediction of the target fingertip with a Kalman filter;
obtaining the starting coordinates of the target fingertip from the video frames captured by the image sensing module, and thereafter sampling the real-time coordinates of the target fingertip at intervals of at least one frame;
calculating the tangent angle θ between said real-time coordinates and the previous coordinates, and obtaining the movement trajectory information of the target fingertip from the changes of said tangent angle, wherein said tangent angle θ is computed as:
θ = arctan( (y_t − y_{t−1}) / (x_t − x_{t−1}) )
where (x_{t−1}, y_{t−1}) denotes the coordinates of the target fingertip at the previous moment t−1, and (x_t, y_t) denotes the real-time coordinates of the target fingertip at the current moment t.
Preferably, the method of querying the pre-trained character model library according to said movement trajectory information to convert said movement trajectory information into the corresponding character comprises:
querying the pre-trained character model library according to said movement trajectory information, and matching said movement trajectory information against all character models;
taking the character model with the maximum likelihood value as the target character model, wherein the likelihood value of each character model is computed with the Viterbi algorithm;
converting said movement trajectory information into the corresponding character according to said target character model.
A character input device based on an image sensing module comprises:
an image sensing module, configured to capture video frames containing a target fingertip;
a target fingertip movement trajectory information acquisition module, configured to obtain the movement trajectory information of the target fingertip from the video frames captured by the image sensing module;
a character conversion module, configured to query a pre-trained character model library according to said movement trajectory information, so as to convert said movement trajectory information into the corresponding character.
Preferably, said character input device based on an image sensing module further comprises:
a target gesture contour image information acquisition module, configured to acquire target gesture contour image information through the image sensing module;
a gesture skeleton model acquisition module, configured to query a pre-trained gesture skeleton model library according to said target gesture contour image information and obtain the matching gesture skeleton model, so that the target fingertip movement trajectory information acquisition module can track the target fingertip according to said gesture skeleton model and obtain its movement trajectory information.
Preferably, said character input device based on an image sensing module further comprises:
an image processing module, configured to process the target gesture contour image information collected by the target gesture contour image information acquisition module as follows:
binarizing the collected target gesture contour image information; and/or,
applying image sharpening to the collected target gesture contour image information; and/or,
applying image smoothing to the collected target gesture contour image information.
Preferably, the step in which said target fingertip movement trajectory information acquisition module tracks the target fingertip according to said gesture skeleton model comprises:
mapping the fingertip coarse-positioning rectangle of the referenced gesture skeleton model onto the extracted target gesture contour image to obtain the rectangular region containing the fingertip of the target gesture contour image;
dividing the finger contour within said rectangular region into equal-length segments and calculating the edge curvature of each finger contour segment;
taking the finger contour segment with the maximum edge curvature as the target fingertip.
Preferably, the method by which said target fingertip movement trajectory information acquisition module obtains the movement trajectory information of the target fingertip from the video frames captured by the image sensing module comprises:
performing tracking prediction of the target fingertip with a Kalman filter;
obtaining the starting coordinates of the target fingertip from the video frames captured by the image sensing module, and thereafter sampling the real-time coordinates of the target fingertip at intervals of at least one frame;
calculating the tangent angle θ between said real-time coordinates and the previous coordinates, and obtaining the movement trajectory information of the target fingertip from the changes of said tangent angle, wherein said tangent angle θ is computed as:
θ = arctan( (y_t − y_{t−1}) / (x_t − x_{t−1}) )
where (x_{t−1}, y_{t−1}) denotes the coordinates of the target fingertip at the previous moment t−1, and (x_t, y_t) denotes the real-time coordinates of the target fingertip at the current moment t.
Preferably, the method by which said character conversion module queries the pre-trained character model library according to said movement trajectory information to convert said movement trajectory information into the corresponding character comprises:
querying the pre-trained character model library according to said movement trajectory information, and matching said movement trajectory information against all character models;
taking the character model with the maximum likelihood value as the target character model, wherein the likelihood value of each character model is computed with the Viterbi algorithm;
converting said movement trajectory information into the corresponding character according to said target character model.
A terminal comprising the above character input device based on an image sensing module, said device comprising:
an image sensing module, configured to capture video frames containing a target fingertip;
a target fingertip movement trajectory information acquisition module, configured to obtain the movement trajectory information of the target fingertip from the video frames captured by the image sensing module;
a character conversion module, configured to query a pre-trained character model library according to said movement trajectory information, so as to convert said movement trajectory information into the corresponding character.
As can be seen from the above technical solutions, the character input method, device and terminal based on an image sensing module provided by the present invention rely on the image sensing module carried by the terminal itself (for example, a camera) to capture images, extract the gesture contour, match it against the stored gesture skeleton models, and identify the gesture model so as to coarsely position the target fingertip; the target fingertip is then precisely located according to the curvature of the finger contour. Next, the approximate position of the target fingertip at the following moment is predicted and the fingertip trajectory is captured; the tangent angles at successive moments are calculated, and accumulating the changes of the tangent angle over a period of time yields the movement trajectory of the target fingertip within that period. The obtained fingertip movement trajectory is matched against the pre-stored character model library, thereby generating the corresponding character. The character input method, device and terminal based on an image sensing module provided by the present invention satisfy the personalized demands users place on handheld terminals while also enabling fast and accurate character input.
Description of drawings
The accompanying drawings described herein are provided for further understanding of the present invention and constitute a part of the present invention; the illustrative embodiments of the present invention and their description serve to explain the present invention and do not constitute any improper limitation on it. In the accompanying drawings:
Fig. 1 is a schematic flowchart of the character input method based on an image sensing module provided by one embodiment of the invention;
Fig. 2 is a detailed schematic flowchart of the character input method based on an image sensing module provided by one preferred embodiment of the invention;
Fig. 3 is a schematic structural diagram of the character input device based on an image sensing module provided by one embodiment of the invention.
Embodiment
In order to make the technical problems to be solved, the technical solutions and the beneficial effects of the present invention clearer, the present invention is further described below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to illustrate the present invention, not to limit it.
A kind of characters input method based on image sensing module as shown in Figure 1, that one embodiment of the invention provides comprises following concrete steps:
S101, obtain the running orbit information of object finger tip through image sensing module;
In this step; Said image sensing module is used to gather the finger tip image information of user's finger; And extracting user's finger tip running orbit information according to the said direct image information that collects, for example said image sensing module is the cam device that regular handset or smart mobile phone have all disposed.
In concrete implementation, this step comprises the following sub-steps:
A. obtaining the target gesture contour image through the image sensing module, and applying joint temporal-spatial segmentation and binarization to the target gesture contour image;
B. matching the gesture contour against the stored gesture contour reference models by means of translation, scaling, rotation and similar transforms, selecting the reference model with the maximum similarity, and mapping the fingertip region of this reference model onto the target fingertip region;
C. calculating the curvature of the finger contour at equal intervals within this fingertip region, and taking the position of maximum curvature as the exact fingertip position;
D. calculating the tangent angles at successive moments and accumulating the changes of the tangent angle over a period of time to obtain the movement trajectory information of the target fingertip.
S102, querying the pre-trained character model library according to said movement trajectory information, so as to convert said movement trajectory information into the corresponding character.
In this step, the collected fingertip movement trajectory information of the user is matched against the character model library established in advance; when the corresponding character model is found, said movement trajectory information is translated into the corresponding character information, thereby providing the basis for character input.
In a preferred implementation, before all of the above steps are carried out, the method further comprises:
S1001, acquiring target gesture contour image information through the image sensing module; for example, said target gesture contour image information includes but is not limited to the following: only the index finger extended with the other fingers clenched; only the middle finger extended with the other fingers clenched; the thumb and index finger extended simultaneously with the other fingers clenched; and so on.
S1002, querying the pre-trained gesture skeleton model library according to said target gesture contour image information to obtain the matching gesture skeleton model;
S1003, tracking the target fingertip according to said gesture skeleton model.
In a preferred implementation, before querying the pre-trained gesture skeleton model library according to said target gesture contour image information to obtain the matching gesture skeleton model, the method further comprises:
(1) binarizing the collected target gesture contour image information; and/or,
(2) applying image sharpening to the collected target gesture contour image information; and/or,
(3) applying image smoothing to the collected target gesture contour image information.
Taking as an example the case where the collected target gesture contour image information is binarized, sharpened and smoothed at the same time: a grayscale image only carries the luminance information of spatially sampled points and can therefore be represented by a single value per pixel, while a color image carries richer content but is more complex to represent. Since the present invention does not need to retain the color information of the image, the target gesture contour image information of a color image can be binarized, i.e., converted to a grayscale image, which reduces complexity and memory consumption. Meanwhile, to highlight the edge of the gesture contour, image sharpening can be applied to the target gesture contour image information; the present invention adopts the Roberts gradient sharpening method to enhance the image edges. In addition, to reduce noise, image smoothing can also be applied to the target gesture contour image information.
In a preferred implementation, in said step S1003 the step of tracking the target fingertip according to said gesture skeleton model comprises:
S10031, mapping the fingertip coarse-positioning rectangle of the referenced gesture skeleton model onto the extracted target gesture contour image to obtain the rectangular region containing the fingertip of the target gesture contour image;
S10032, dividing the finger contour within said rectangular region into equal-length segments and calculating the edge curvature of each finger contour segment;
S10033, taking the finger contour segment with the maximum edge curvature as the target fingertip.
In said step S101, the method of obtaining the movement trajectory information of the target fingertip through the image sensing module specifically comprises the following steps:
S1011, performing tracking prediction of the target fingertip with a Kalman filter;
S1012, obtaining the starting coordinates of the target fingertip from the video frames captured by the image sensing module, and thereafter sampling the real-time coordinates of the target fingertip at intervals of at least one frame;
S1013, calculating the tangent angle θ between said real-time coordinates and the previous coordinates, and obtaining the movement trajectory information of the target fingertip from the changes of said tangent angle, wherein said tangent angle θ is computed as:
θ = arctan( (y_t − y_{t−1}) / (x_t − x_{t−1}) )
where (x_{t−1}, y_{t−1}) denotes the coordinates of the target fingertip at the previous moment t−1, and (x_t, y_t) denotes the real-time coordinates of the target fingertip at the current moment t.
In said step S102, the method of querying the pre-trained character model library according to said movement trajectory information to convert said movement trajectory information into the corresponding character comprises:
S1021, querying the pre-trained character model library according to said movement trajectory information, and matching said movement trajectory information against all character models;
S1022, taking the character model with the maximum likelihood value as the target character model, wherein the likelihood value of each character model is computed with the Viterbi algorithm;
S1023, converting said movement trajectory information into the corresponding character according to said target character model.
As shown in Fig. 2, which is a detailed schematic flowchart of the character input method based on an image sensing module provided by one preferred embodiment of the invention, said flow specifically comprises the following steps:
S301, gesture model selection and fingertip coarse positioning;
The choice of gesture model directly affects the recognition performance. The present invention selects common gesture models as reference models, including but not limited to the following: only the index finger extended with the other fingers clenched; only the middle finger extended with the other fingers clenched; the thumb and index finger extended simultaneously with the other fingers clenched; and so on. For different gesture models, the initial fingertip position is predefined separately. For example, for the case where only the index finger is extended, the initial fingertip position is defined at the very top of the index finger and enclosed in a rectangular box of a certain size; this rectangular region serves as the region for subsequent precise fingertip localization.
S302, gesture contour extraction;
Video images are generally analyzed using inter-frame or intra-frame information. However, because of noise in the scene, a single video segmentation method cannot accurately approximate the object's edge. To extract the gesture contour accurately, video segmentation must therefore also combine spatial information about the object such as color and luminance. For example, the embodiment of the invention adopts a joint temporal-spatial segmentation method that fully exploits the inter-frame motion information of the temporal domain and the skin-color and luminance information of the spatial domain, performing temporal and spatial segmentation simultaneously to extract an accurate gesture edge. Spatial segmentation yields an initial segmentation region with accurate semantics; temporal segmentation yields the moving region of the image; connecting the discontinuous edges then yields the gesture contour.
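For illustration only (not part of the claimed invention), the following Python/OpenCV sketch shows one way such a joint temporal-spatial segmentation could be realized: a frame-difference mask supplies the temporal motion cue, a YCrCb skin-color mask supplies the spatial cue, and their intersection is closed and traced to a contour. The motion threshold and the skin-color range are illustrative assumptions, not values from the patent.

```python
import cv2

def extract_hand_contour(prev_gray, frame):
    """Joint temporal-spatial segmentation sketch: fuse a frame-difference
    motion mask (temporal cue) with a skin-color mask (spatial cue) and
    return the largest resulting contour as the gesture contour."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Temporal cue: pixels that changed between consecutive frames.
    motion = cv2.absdiff(prev_gray, gray)
    _, motion_mask = cv2.threshold(motion, 15, 255, cv2.THRESH_BINARY)

    # Spatial cue: skin-colored pixels in YCrCb space (illustrative range).
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Fuse the two cues, then close small gaps so the edge is continuous.
    mask = cv2.bitwise_and(motion_mask, skin_mask)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return gray, None
    return gray, max(contours, key=cv2.contourArea)
```

The returned grayscale frame can be fed back as prev_gray for the next call, so the function threads the temporal state through its caller.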
S303, image preprocessing;
A grayscale image only carries the luminance information of spatially sampled points and can therefore be represented by a single value per pixel, whereas a color image carries richer content but is more complex to represent. Since this embodiment of the invention does not need to retain the color information of the image, the color image is binarized, i.e., converted to a grayscale image, which reduces complexity and memory consumption. To highlight the edge of the hand, the image is sharpened; the present invention adopts the Roberts gradient sharpening method to enhance the image edges. To reduce noise, the binary image can be smoothed.
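As a hedged illustration of this preprocessing chain in the same Python/OpenCV style: the sketch converts to grayscale, binarizes (the Otsu threshold rule is an assumption, since the patent does not name one), enhances edges with the Roberts cross kernels, and median-smooths the binary image.

```python
import cv2
import numpy as np

# Roberts cross kernels used for gradient sharpening.
ROBERTS_X = np.array([[1, 0], [0, -1]], dtype=np.float32)
ROBERTS_Y = np.array([[0, 1], [-1, 0]], dtype=np.float32)

def preprocess(frame_bgr):
    """Grayscale -> binarization -> Roberts edge enhancement -> smoothing."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Binarization; Otsu picks the threshold automatically (assumption).
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Roberts gradient magnitude highlights the hand edge.
    f = binary.astype(np.float32)
    gx = cv2.filter2D(f, -1, ROBERTS_X)
    gy = cv2.filter2D(f, -1, ROBERTS_Y)
    edges = cv2.convertScaleAbs(np.abs(gx) + np.abs(gy))

    # Median smoothing suppresses residual noise in the binary image.
    smoothed = cv2.medianBlur(binary, 3)
    return smoothed, edges
```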
S304, matching the gesture contour against the gesture skeleton models;
The extracted gesture contour is matched against the gesture skeleton models. The matching process is as follows: the extracted gesture contour image is translation-matched against each reference gesture skeleton model, a matching value is calculated, and the gesture skeleton model with the maximum matching value is chosen as the recognition target. The fingertip coarse-positioning rectangle of that gesture skeleton model is then mapped onto the extracted gesture contour, which yields the rectangular region containing the fingertip of the gesture contour. To improve the degree of matching, the extracted gesture contour may additionally be scaled and rotated.
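A minimal sketch of the translation matching, assuming the reference models are stored as binary silhouettes each annotated with its fingertip rectangle (this storage layout is an assumption), and using normalized cross-correlation as a stand-in for the unspecified matching value:

```python
import cv2

def match_gesture_model(contour_img, reference_models):
    """Translate each reference silhouette over the extracted contour
    image, keep the model with the highest matching value, and shift its
    pre-annotated fingertip rectangle by the winning translation.

    reference_models: iterable of (silhouette, (fx, fy, fw, fh)) pairs,
    the rectangle being relative to the silhouette's origin."""
    best = None
    for silhouette, (fx, fy, fw, fh) in reference_models:
        # matchTemplate evaluates the model at every translation.
        scores = cv2.matchTemplate(contour_img, silhouette,
                                   cv2.TM_CCOEFF_NORMED)
        _, score, _, (tx, ty) = cv2.minMaxLoc(scores)
        if best is None or score > best[0]:
            best = (score, (tx + fx, ty + fy, fw, fh))
    return best  # (matching value, fingertip rectangle) or None
```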
S305, precise fingertip localization;
The finger contour within the rectangular region is divided into equal-length segments, the curvature of each curve segment is calculated, and the segment of maximum curvature is taken as the exact fingertip position.
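The patent does not define the curvature measure itself; the sketch below uses the arc-to-chord length ratio of each equal-length contour segment as one plausible stand-in (a straight segment scores about 1, a sharply bent one scores higher):

```python
import numpy as np

def locate_fingertip(contour_pts, n_segments=16):
    """Divide the finger contour into equal-length segments and return
    the midpoint of the segment with the largest bending measure."""
    pts = np.asarray(contour_pts, dtype=np.float64).reshape(-1, 2)
    seg_len = len(pts) // n_segments
    best_score, best_mid = -1.0, None
    for i in range(n_segments):
        seg = pts[i * seg_len:(i + 1) * seg_len]
        if len(seg) < 3:
            continue
        # Arc length along the segment vs. straight-line chord length.
        arc = np.sum(np.linalg.norm(np.diff(seg, axis=0), axis=1))
        chord = np.linalg.norm(seg[-1] - seg[0])
        score = arc / max(chord, 1e-9)
        if score > best_score:
            best_score, best_mid = score, seg[len(seg) // 2]
    return best_mid  # approximate fingertip position (x, y)
```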
S306, predictive tracking of the moving target;
In a video sequence the time interval between adjacent frames is small, so the motion can be regarded as uniform and the change of the motion state can be described as a linear dynamic system. Considering the real-time requirement, the embodiment of the invention selects a Kalman filter to track the fingertip position. The two main stages of the Kalman filter are prediction and update, whose equations are respectively:
Prediction: S(n) = A Ŝ(n−1),  D(n) = A D̂(n−1) Aᵀ + W
Update: Ŝ(n) = S(n) + K(n) (Z(n) − H S(n)),  D̂(n) = (I − K(n) H) D(n)
where, in standard Kalman notation, S(n) and D(n) are the predicted state and error covariance, Ŝ(n) and D̂(n) the corrected estimates, A the state-transition matrix, W the process noise covariance, Z(n) the measurement, H the observation matrix and K(n) the Kalman gain.
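A self-contained sketch of such a tracker, assuming a constant-velocity state [x, y, vx, vy]; the patent fixes neither the state layout nor the noise covariances, so those are illustrative choices here:

```python
import numpy as np

class FingertipKalman:
    """Constant-velocity Kalman filter following the prediction/update
    equations above, with state S = [x, y, vx, vy]."""

    def __init__(self, x, y, dt=1.0):
        self.S = np.array([x, y, 0.0, 0.0])   # state estimate S^
        self.D = np.eye(4)                    # error covariance D^
        self.A = np.array([[1, 0, dt, 0],     # state transition A
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],      # we only observe x and y
                           [0, 1, 0, 0]], dtype=float)
        self.W = np.eye(4) * 1e-2             # process noise (assumed)
        self.R = np.eye(2)                    # measurement noise (assumed)

    def predict(self):
        # S(n) = A S^(n-1);  D(n) = A D^(n-1) A^T + W
        self.S = self.A @ self.S
        self.D = self.A @ self.D @ self.A.T + self.W
        return self.S[:2]                     # predicted fingertip position

    def update(self, z):
        # K(n), then: S^(n) = S(n) + K(n)(Z(n) - H S(n));
        # D^(n) = (I - K(n) H) D(n)
        K = self.D @ self.H.T @ np.linalg.inv(
            self.H @ self.D @ self.H.T + self.R)
        self.S = self.S + K @ (np.asarray(z, dtype=float) - self.H @ self.S)
        self.D = (np.eye(4) - K @ self.H) @ self.D
        return self.S[:2]
```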
S307, obtaining the starting point of the moving target;
In the concrete implementation of the embodiment of the invention, the intermediate frames of a user's writing process are generally of good quality, whereas the beginning contains many meaningless frames that disturb target detection; therefore, by an empirical rule, the trajectory is taken to start from the 4th frame.
S308, obtaining the end point of the moving target;
In the embodiment of the invention, when the moving target (i.e., the fingertip) shows no motion for 2 seconds, the character input operation is considered finished.
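A sketch of one possible end-of-input test: only the 2-second idle interval comes from the patent, while the pixel tolerance and the frame-buffer mechanics are illustrative assumptions.

```python
def input_finished(recent_positions, fps, idle_seconds=2.0, tol=3.0):
    """True when the fingertip has stayed within tol pixels of its latest
    position for idle_seconds worth of consecutive frames."""
    n = int(fps * idle_seconds)
    if len(recent_positions) < n:
        return False
    ref = recent_positions[-1]
    return all(abs(x - ref[0]) <= tol and abs(y - ref[1]) <= tol
               for x, y in recent_positions[-n:])
```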
S309, obtaining the movement trajectory features;
After the coordinates of two adjacent frames are obtained, the tangent angle is calculated: if the fingertip coordinates at moment t−1 are (x_{t−1}, y_{t−1}) and the fingertip coordinates at moment t are (x_t, y_t), the tangent angle is θ = arctan( (y_t − y_{t−1}) / (x_t − x_{t−1}) ). To simplify the computation, the present invention quantizes the tangent angle: for example, every 15 degrees is quantized to one direction, i.e., a uniform quantization into 24 direction codes is adopted. The tangent angles at successive moments together form the movement trajectory of the fingertip.
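A minimal sketch of this quantization step; atan2 is used instead of the raw arctan slope so that vertical strokes are handled, which is an implementation choice rather than something the patent specifies:

```python
import math

def direction_codes(points):
    """Map a sampled fingertip trajectory to the 24-symbol direction
    sequence: tangent angle between consecutive points, uniformly
    quantized in 15-degree steps."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        theta = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360.0
        codes.append(int(theta // 15) % 24)
    return codes

# A roughly diagonal stroke maps to a constant code:
print(direction_codes([(0, 0), (2, 2), (4, 4)]))  # -> [3, 3]
```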
S310, dynamic trajectory recognition.
The obtained trajectory is matched against the trained character models, and the model with the maximum likelihood is selected as the target character model. Since this involves computing probabilities, the present invention adopts the Viterbi algorithm to obtain the likelihood value of each model and confirms the model with the maximum likelihood as the final target. Finally, said movement trajectory information is converted into the corresponding character according to said target character model.
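The patent names the Viterbi algorithm but not the model layout; the sketch below assumes each character is a discrete-observation HMM over the 24 direction codes and scores a trajectory with a standard Viterbi pass.

```python
import numpy as np

def viterbi_log_likelihood(obs, pi, A, B):
    """Log-probability of the best state path for a direction-code
    sequence obs under one character HMM: start probs pi (N,),
    transitions A (N, N), discrete emissions B (N, 24)."""
    logpi, logA, logB = (np.log(m + 1e-12) for m in (pi, A, B))
    delta = logpi + logB[:, obs[0]]
    for o in obs[1:]:
        # Best predecessor for every state, then emit the next symbol.
        delta = np.max(delta[:, None] + logA, axis=0) + logB[:, o]
    return np.max(delta)

def recognize(obs, char_models):
    """char_models: dict mapping a character to its (pi, A, B); the model
    with the largest Viterbi likelihood is taken as the target."""
    return max(char_models,
               key=lambda c: viterbi_log_likelihood(obs, *char_models[c]))
```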
As shown in Fig. 3, the embodiment of the invention also provides a character input device based on an image sensing module, said device comprising:
an image sensing module 10, configured to capture video frames containing the target fingertip;
a target fingertip movement trajectory information acquisition module 20, configured to obtain the movement trajectory information of the target fingertip from the video frames captured by the image sensing module; in the embodiment of the invention, said module 20 obtains the movement trajectory information of the target fingertip through the following concrete steps:
obtaining the target gesture contour image through the image sensing module, and applying joint temporal-spatial segmentation and binarization to the target gesture contour image; matching the gesture contour against the stored gesture contour reference models by means of translation, scaling, rotation and similar transforms, selecting the reference model with the maximum similarity, and mapping the fingertip region of this reference model onto the target fingertip region; calculating the curvature of the finger contour at equal intervals within this fingertip region, and taking the position of maximum curvature as the exact fingertip position; calculating the tangent angles at successive moments and accumulating the changes of the tangent angle over a period of time to obtain the movement trajectory information of the target fingertip;
a character conversion module 30, configured to query the pre-trained character model library according to said movement trajectory information, so as to convert said movement trajectory information into the corresponding character.
In a preferred implementation, said character input device based on an image sensing module further comprises:
a target gesture contour image information acquisition module 40, configured to acquire target gesture contour image information through the image sensing module;
a gesture skeleton model acquisition module 50, configured to query the pre-trained gesture skeleton model library according to said target gesture contour image information and obtain the matching gesture skeleton model, so that the target fingertip movement trajectory information acquisition module can track the target fingertip according to said gesture skeleton model and obtain its movement trajectory information.
In a preferred implementation, said character input device based on an image sensing module further comprises:
an image processing module 60, configured to process the target gesture contour image information collected by the target gesture contour image information acquisition module 40 as follows:
(1) binarizing the collected target gesture contour image information; and/or,
(2) applying image sharpening to the collected target gesture contour image information; and/or,
(3) applying image smoothing to the collected target gesture contour image information.
Wherein, the step in which said target fingertip movement trajectory information acquisition module 20 tracks the target fingertip according to said gesture skeleton model comprises:
(1) mapping the fingertip coarse-positioning rectangle of the referenced gesture skeleton model onto the extracted target gesture contour image to obtain the rectangular region containing the fingertip of the target gesture contour image;
(2) dividing the finger contour within said rectangular region into equal-length segments and calculating the edge curvature of each finger contour segment;
(3) taking the finger contour segment with the maximum edge curvature as the target fingertip.
In addition, the method by which said target fingertip movement trajectory information acquisition module 20 obtains the movement trajectory information of the target fingertip from the video frames captured by the image sensing module comprises:
(1) performing tracking prediction of the target fingertip with a Kalman filter;
(2) obtaining the starting coordinates of the target fingertip from the video frames captured by the image sensing module, and thereafter sampling the real-time coordinates of the target fingertip at intervals of at least one frame;
(3) calculating the tangent angle θ between said real-time coordinates and the previous coordinates, and obtaining the movement trajectory information of the target fingertip from the changes of said tangent angle, wherein said tangent angle θ is computed as:
θ = arctan( (y_t − y_{t−1}) / (x_t − x_{t−1}) )
where (x_{t−1}, y_{t−1}) denotes the coordinates of the target fingertip at the previous moment t−1, and (x_t, y_t) denotes the real-time coordinates of the target fingertip at the current moment t.
The method by which said character conversion module 30 queries the pre-trained character model library according to said movement trajectory information to convert said movement trajectory information into the corresponding character comprises:
(1) querying the pre-trained character model library according to said movement trajectory information, and matching said movement trajectory information against all character models;
(2) taking the character model with the maximum likelihood value as the target character model, wherein the likelihood value of each character model is computed with the Viterbi algorithm;
(3) converting said movement trajectory information into the corresponding character according to said target character model.
Correspondingly, the embodiment of the invention also provides a terminal comprising the character input device based on an image sensing module described above; with reference to Fig. 3, said device comprises:
an image sensing module 10, configured to capture video frames containing the target fingertip;
a target fingertip movement trajectory information acquisition module 20, configured to obtain the movement trajectory information of the target fingertip from the video frames captured by the image sensing module;
a character conversion module 30, configured to query the pre-trained character model library according to said movement trajectory information, so as to convert said movement trajectory information into the corresponding character.
In this terminal, said image sensing module can be an ordinary camera. The terminal provided by the present invention can therefore rely on the camera it already carries to capture images, extract the user's gesture contour, match it against the stored gesture skeleton models, and identify the gesture model so as to coarsely position the target fingertip; the target fingertip is then precisely located according to the curvature of the finger contour. Next, the approximate position of the target fingertip at the following moment is predicted and the fingertip trajectory is captured; the tangent angles at successive moments are calculated, and accumulating the changes of the tangent angle over a period of time yields the movement trajectory of the target fingertip within that period. The obtained fingertip movement trajectory is matched against the pre-stored character model library, thereby generating the corresponding character. The character input method, device and terminal based on an image sensing module provided by the present invention satisfy the personalized demands users place on handheld terminals while also enabling fast and accurate character input.
The above description illustrates and describes a preferred embodiment of the present invention; however, as noted above, it should be understood that the present invention is not limited to the form disclosed herein, which should not be regarded as excluding other embodiments. The invention may be used in various other combinations, modifications and environments, and may be altered within the scope of the inventive concept described herein through the above teachings or through the techniques and knowledge of the related art. Changes and variations made by those skilled in the art that do not depart from the spirit and scope of the present invention shall all fall within the protection scope of the appended claims.

Claims (13)

1. A character input method based on an image sensing module, characterized by comprising:
obtaining movement trajectory information of a target fingertip through the image sensing module;
querying a pre-trained character model library according to said movement trajectory information, so as to convert said movement trajectory information into the corresponding character.
2. The character input method based on an image sensing module according to claim 1, characterized in that, before all of the above steps are carried out, the method further comprises:
acquiring target gesture contour image information through the image sensing module;
querying a pre-trained gesture skeleton model library according to said target gesture contour image information to obtain the matching gesture skeleton model;
tracking the target fingertip according to said gesture skeleton model.
3. The character input method based on an image sensing module according to claim 2, characterized in that, before querying the pre-trained gesture skeleton model library according to said target gesture contour image information to obtain the matching gesture skeleton model, the method further comprises:
binarizing the collected target gesture contour image information; and/or,
applying image sharpening to the collected target gesture contour image information; and/or,
applying image smoothing to the collected target gesture contour image information.
4. The character input method based on an image sensing module according to claim 2, characterized in that the step of tracking the target fingertip according to said gesture skeleton model comprises:
mapping the fingertip coarse-positioning rectangle of the referenced gesture skeleton model onto the extracted target gesture contour image to obtain the rectangular region containing the fingertip of the target gesture contour image;
dividing the finger contour within said rectangular region into equal-length segments and calculating the edge curvature of each finger contour segment;
taking the finger contour segment with the maximum edge curvature as the target fingertip.
5. The character input method based on an image sensing module according to claim 1, characterized in that the method of obtaining the movement trajectory information of the target fingertip through the image sensing module comprises:
performing tracking prediction of the target fingertip with a Kalman filter;
obtaining the starting coordinates of the target fingertip from the video frames captured by the image sensing module, and thereafter sampling the real-time coordinates of the target fingertip at intervals of at least one frame;
calculating the tangent angle θ between said real-time coordinates and the previous coordinates, and obtaining the movement trajectory information of the target fingertip from the changes of said tangent angle, wherein said tangent angle θ is computed as:
θ = arctan( (y_t − y_{t−1}) / (x_t − x_{t−1}) )
where (x_{t−1}, y_{t−1}) denotes the coordinates of the target fingertip at the previous moment t−1, and (x_t, y_t) denotes the real-time coordinates of the target fingertip at the current moment t.
6. The character input method based on an image sensing module according to claim 1, characterized in that the method of querying the pre-trained character model library according to said movement trajectory information to convert said movement trajectory information into the corresponding character comprises:
querying the pre-trained character model library according to said movement trajectory information, and matching said movement trajectory information against all character models;
taking the character model with the maximum likelihood value as the target character model, wherein the likelihood value of each character model is computed with the Viterbi algorithm;
converting said movement trajectory information into the corresponding character according to said target character model.
7. A character input device based on an image sensing module, characterized by comprising:
an image sensing module, configured to capture video frames containing a target fingertip;
a target fingertip movement trajectory information acquisition module, configured to obtain the movement trajectory information of the target fingertip from the video frames captured by the image sensing module;
a character conversion module, configured to query a pre-trained character model library according to said movement trajectory information, so as to convert said movement trajectory information into the corresponding character.
8. The character input device based on an image sensing module according to claim 7, characterized by further comprising:
a target gesture contour image information acquisition module, configured to acquire target gesture contour image information through the image sensing module;
a gesture skeleton model acquisition module, configured to query a pre-trained gesture skeleton model library according to said target gesture contour image information and obtain the matching gesture skeleton model, so that the target fingertip movement trajectory information acquisition module can track the target fingertip according to said gesture skeleton model and obtain its movement trajectory information.
9. The character input device based on an image sensing module according to claim 8, characterized by further comprising:
an image processing module, configured to process the target gesture contour image information collected by the target gesture contour image information acquisition module as follows:
binarizing the collected target gesture contour image information; and/or,
applying image sharpening to the collected target gesture contour image information; and/or,
applying image smoothing to the collected target gesture contour image information.
10. The character input device based on an image sensing module according to claim 8, characterized in that the step in which said target fingertip movement trajectory information acquisition module tracks the target fingertip according to said gesture skeleton model comprises: mapping the fingertip coarse-positioning rectangle of the referenced gesture skeleton model onto the extracted target gesture contour image to obtain the rectangular region containing the fingertip of the target gesture contour image;
dividing the finger contour within said rectangular region into equal-length segments and calculating the edge curvature of each finger contour segment;
taking the finger contour segment with the maximum edge curvature as the target fingertip.
11. The character input device based on an image sensing module according to claim 8, characterized in that the method by which said target fingertip movement trajectory information acquisition module obtains the movement trajectory information of the target fingertip from the video frames captured by the image sensing module comprises:
performing tracking prediction of the target fingertip with a Kalman filter;
obtaining the starting coordinates of the target fingertip from the video frames captured by the image sensing module, and thereafter sampling the real-time coordinates of the target fingertip at intervals of at least one frame;
calculating the tangent angle θ between said real-time coordinates and the previous coordinates, and obtaining the movement trajectory information of the target fingertip from the changes of said tangent angle, wherein said tangent angle θ is computed as:
θ = arctan( (y_t − y_{t−1}) / (x_t − x_{t−1}) )
where (x_{t−1}, y_{t−1}) denotes the coordinates of the target fingertip at the previous moment t−1, and (x_t, y_t) denotes the real-time coordinates of the target fingertip at the current moment t.
12. The character input device based on an image sensing module according to claim 8, characterized in that the method by which said character conversion module queries the pre-trained character model library according to said movement trajectory information to convert said movement trajectory information into the corresponding character comprises:
querying the pre-trained character model library according to said movement trajectory information, and matching said movement trajectory information against all character models;
taking the character model with the maximum likelihood value as the target character model, wherein the likelihood value of each character model is computed with the Viterbi algorithm;
converting said movement trajectory information into the corresponding character according to said target character model.
13. A terminal, characterized by comprising the character input device based on an image sensing module according to any one of claims 7 to 12.
CN2011103763311A 2011-11-23 2011-11-23 Character input method based on image sensing module, device and terminal Pending CN102520790A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN2011103763311A CN102520790A (en) 2011-11-23 2011-11-23 Character input method based on image sensing module, device and terminal
PCT/CN2012/075103 WO2013075466A1 (en) 2011-11-23 2012-05-04 Character input method, device and terminal based on image sensing module

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011103763311A CN102520790A (en) 2011-11-23 2011-11-23 Character input method based on image sensing module, device and terminal

Publications (1)

Publication Number Publication Date
CN102520790A true CN102520790A (en) 2012-06-27

Family

ID=46291742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011103763311A Pending CN102520790A (en) 2011-11-23 2011-11-23 Character input method based on image sensing module, device and terminal

Country Status (2)

Country Link
CN (1) CN102520790A (en)
WO (1) WO2013075466A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103235836A (en) * 2013-05-07 2013-08-07 西安电子科技大学 Method for inputting information through mobile phone
CN103577843A (en) * 2013-11-22 2014-02-12 中国科学院自动化研究所 Identification method for handwritten character strings in air
WO2014101410A1 (en) * 2012-12-31 2014-07-03 华为技术有限公司 Input processing method and apparatus
CN103941846A (en) * 2013-01-18 2014-07-23 联想(北京)有限公司 Electronic device and input method
CN104123008A (en) * 2014-07-30 2014-10-29 哈尔滨工业大学深圳研究生院 Man-machine interaction method and system based on static gestures
CN104834412A (en) * 2015-05-13 2015-08-12 深圳市蓝晨科技有限公司 Touch terminal based on non-contact hand gesture recognition
CN105094544A (en) * 2015-07-16 2015-11-25 百度在线网络技术(北京)有限公司 Acquisition method and device for emoticons
CN105975934A (en) * 2016-05-05 2016-09-28 中国人民解放军63908部队 Dynamic gesture identification method and system for augmented reality auxiliary maintenance
CN107643821A (en) * 2016-07-22 2018-01-30 北京搜狗科技发展有限公司 A kind of input control method, device and electronic equipment
CN107923742A (en) * 2015-08-19 2018-04-17 索尼公司 Information processor, information processing method and program
WO2020078017A1 (en) * 2018-10-19 2020-04-23 北京百度网讯科技有限公司 Method and apparatus for recognizing handwriting in air, and device and computer-readable storage medium
CN112357703A (en) * 2020-10-27 2021-02-12 广州广日电梯工业有限公司 Elevator control system and elevator control method

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111382598B (en) * 2018-12-27 2024-05-24 北京搜狗科技发展有限公司 Identification method and device and electronic equipment
CN110865767A (en) * 2019-11-20 2020-03-06 深圳传音控股股份有限公司 Application program running method, device, equipment and storage medium
CN111311590B (en) * 2020-03-06 2023-11-21 通控研究院(安徽)有限公司 Switch point contact degree detection method based on image detection technology
CN111627039A (en) * 2020-05-09 2020-09-04 北京小狗智能机器人技术有限公司 Interaction system and interaction method based on image recognition
CN113591822B (en) * 2021-10-08 2022-02-08 广州市简筱网络科技有限公司 Special crowd gesture interaction information consultation and recognition system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1477924A2 (en) * 2003-03-31 2004-11-17 HONDA MOTOR CO., Ltd. Gesture recognition apparatus, method and program
CN101593022A (en) * 2009-06-30 2009-12-02 华南理工大学 A kind of quick human-computer interaction of following the tracks of based on finger tip

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739118A (en) * 2008-11-06 2010-06-16 大同大学 Video handwriting character inputting device and method thereof
CN102063618B (en) * 2011-01-13 2012-10-31 中科芯集成电路股份有限公司 Dynamic gesture identification method in interactive system
CN102184021B (en) * 2011-05-27 2013-06-12 华南理工大学 Television man-machine interaction method based on handwriting input and fingertip mouse

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1477924A2 (en) * 2003-03-31 2004-11-17 HONDA MOTOR CO., Ltd. Gesture recognition apparatus, method and program
CN101593022A (en) * 2009-06-30 2009-12-02 华南理工大学 A kind of quick human-computer interaction of following the tracks of based on finger tip

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yang Qing: "Research on Gesture Recognition Technology", China Master's Theses Full-text Database (Information Science and Technology Series) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014101410A1 (en) * 2012-12-31 2014-07-03 华为技术有限公司 Input processing method and apparatus
CN103941846A (en) * 2013-01-18 2014-07-23 联想(北京)有限公司 Electronic device and input method
CN103235836A (en) * 2013-05-07 2013-08-07 西安电子科技大学 Method for inputting information through mobile phone
CN103577843A (en) * 2013-11-22 2014-02-12 中国科学院自动化研究所 Identification method for handwritten character strings in air
CN103577843B (en) * 2013-11-22 2016-06-22 中国科学院自动化研究所 A kind of aerial hand-written character string recognition methods
CN104123008A (en) * 2014-07-30 2014-10-29 哈尔滨工业大学深圳研究生院 Man-machine interaction method and system based on static gestures
CN104834412B (en) * 2015-05-13 2018-02-23 深圳市蓝晨科技股份有限公司 A kind of touch terminal based on contactless gesture identification
CN104834412A (en) * 2015-05-13 2015-08-12 深圳市蓝晨科技有限公司 Touch terminal based on non-contact hand gesture recognition
CN105094544B (en) * 2015-07-16 2020-03-03 百度在线网络技术(北京)有限公司 Method and device for acquiring characters
CN105094544A (en) * 2015-07-16 2015-11-25 百度在线网络技术(北京)有限公司 Acquisition method and device for emoticons
CN107923742A (en) * 2015-08-19 2018-04-17 索尼公司 Information processor, information processing method and program
CN105975934A (en) * 2016-05-05 2016-09-28 中国人民解放军63908部队 Dynamic gesture identification method and system for augmented reality auxiliary maintenance
CN107643821A (en) * 2016-07-22 2018-01-30 北京搜狗科技发展有限公司 A kind of input control method, device and electronic equipment
CN107643821B (en) * 2016-07-22 2021-07-27 北京搜狗科技发展有限公司 Input control method and device and electronic equipment
WO2020078017A1 (en) * 2018-10-19 2020-04-23 北京百度网讯科技有限公司 Method and apparatus for recognizing handwriting in air, and device and computer-readable storage medium
US11423700B2 (en) 2018-10-19 2022-08-23 Beijing Baidu Netcom Science And Technology Co., Ltd. Method, apparatus, device and computer readable storage medium for recognizing aerial handwriting
CN112357703A (en) * 2020-10-27 2021-02-12 广州广日电梯工业有限公司 Elevator control system and elevator control method

Also Published As

Publication number Publication date
WO2013075466A1 (en) 2013-05-30

Similar Documents

Publication Publication Date Title
CN102520790A (en) Character input method based on image sensing module, device and terminal
CN103294996B (en) A kind of 3D gesture identification method
US8204310B2 (en) Feature design for HMM based Eastern Asian character recognition
CN103150019B (en) A kind of hand-written input system and method
Lee et al. Kinect-based Taiwanese sign-language recognition system
Taylor et al. Type-hover-swipe in 96 bytes: A motion sensing mechanical keyboard
CN106502570A (en) A kind of method of gesture identification, device and onboard system
CN102402289B (en) Mouse recognition method for gesture based on machine vision
CN106774850B (en) Mobile terminal and interaction control method thereof
CN103353935A (en) 3D dynamic gesture identification method for intelligent home system
CN103226388A (en) Kinect-based handwriting method
CN101142617A (en) Method and apparatus for data entry input
CN105787442A (en) Visual interaction based wearable auxiliary system for people with visual impairment, and application method thereof
CN106503619B (en) Gesture recognition method based on BP neural network
Dinh et al. Hand number gesture recognition using recognized hand parts in depth images
Sang et al. Robust palmprint recognition base on touch-less color palmprint images acquired
CN107450717B (en) Information processing method and wearable device
Aggarwal et al. Online handwriting recognition using depth sensors
CN103927555A (en) Static sign language letter recognition system and method based on Kinect sensor
CN101739118A (en) Video handwriting character inputting device and method thereof
CN103077381A (en) Monocular dynamic hand gesture recognition method on basis of fractional Fourier transformation
CN103176651A (en) Rapid collecting method of handwriting information
CN106648423A (en) Mobile terminal and interactive control method thereof
Nigam et al. A complete study of methodology of hand gesture recognition system for smart homes
Pansare et al. Gestuelle: A system to recognize dynamic hand gestures using hidden Markov model to control windows applications

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20120627