CN102073414A - Multi-touch tracking method based on machine vision - Google Patents

Multi-touch tracking method based on machine vision

Info

Publication number
CN102073414A
CN102073414A CN2010105251582A CN201010525158A CN102073414B
Authority
CN
China
Prior art keywords
touch point
execution
value
frame
container
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010105251582A
Other languages
Chinese (zh)
Other versions
CN102073414B (en)
Inventor
骆威
肖平
郑金发
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vtron Group Co Ltd
Original Assignee
Vtron Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vtron Technologies Ltd filed Critical Vtron Technologies Ltd
Priority to CN 201010525158 priority Critical patent/CN102073414B/en
Publication of CN102073414A publication Critical patent/CN102073414A/en
Application granted granted Critical
Publication of CN102073414B publication Critical patent/CN102073414B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Position Input By Displaying (AREA)

Abstract

The invention relates to the fields of image analysis, target detection and target tracking, and in particular to a multi-touch tracking method. On one hand, two optimality conditions are set in the method: condition 1 requires that the total distance over all touch-point associations between adjacent frames reach a relative minimum; condition 2 requires that the distances of the associated touch-point pairs between adjacent frames be mutually consistent. The method searches for the touch points that satisfy both conditions and associates them, thereby generating correct movement tracks. On the other hand, the method incorporates a direction-based search technique, which accelerates the search and reduces the computational complexity.

Description

Multi-touch tracking method based on machine vision
Technical field
The present invention relates to the fields of image analysis, target detection and target tracking, and in particular to a multi-touch trace-tracking method.
Background technology
In recent years, with the rapid development of computer hardware, human lifestyles have changed fundamentally. The rise of artificial intelligence has reduced the mental and manual labor people must expend and brought unprecedented convenience to daily life. Exploring artificial intelligence requires machines that can correctly understand human intentions, so correct human-machine interaction is ever more important.
As an effective means of communication, gestures can help a machine understand a person's intentions. Today, the widespread use of virtual environments demands more natural human-machine interaction and more direct machine perception. Control via the conventional mouse, keyboard and joystick cannot fully express user intent, so building more direct human-machine interaction has vast market prospects. The rise of image processing, machine vision and pattern recognition in particular provides the technical support for more natural interaction between human and machine.
At present, vision-based gesture recognition methods fall into two classes: analysis based on a 3D model and analysis based on 2D images. The 3D-model approach must build a parametric model describing the gesture; because it supplies three-dimensional data it can build a comparatively accurate gesture model, but it has many parameters and high computational complexity, and with current technology it is difficult to meet real-time requirements. The 2D-image approach analyzes image properties, extracts effective hand features and recognizes them; having lost the information of the third spatial dimension, the extracted features cannot build an effective gesture model and robustness is poorer, but the approach has fewer parameters and can meet real-time processing requirements. For this reason, current industrial practice still mainly processes 2D images.
In gesture-based human-machine interaction, single-touch technology is comparatively mature and very widely used in small touch devices such as touch phones and PDAs, but its effect still needs improvement on large touch devices. The main causes of poor touch behavior are raw-material limitations, internal noise of electronic components, environmental factors and unintentional accidental contact, which produce spurious touch points and hence erroneous operations. Moreover, single touch is limited in function and can express only a few interactive operations; developing multi-touch technology makes human-machine interaction more natural.
Because multi-touch involves several touch points moving simultaneously, the corresponding points between adjacent frames must be associated correctly; if the touch points between adjacent frames are associated wrongly, wrong movement tracks are produced and the machine cannot correctly understand the user's intention. At present, industrial products mainly associate corresponding points between adjacent frames by the Euclidean-distance method: for each touch point in the previous frame, the nearest touch point in the current frame is found and associated with it. This method associates correctly when the touch points are relatively far apart, but when they are relatively close it easily produces wrong associations. Fig. 5 is a schematic diagram of correct touch-point association; Fig. 6 is a schematic diagram of wrong association.
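To make the failure mode concrete, the prior-art nearest-neighbor association described above can be sketched in a few lines of Python (illustrative only; the point values are hypothetical, not taken from the patent):

```python
import math

def nearest_neighbor_associate(prev_pts, curr_pts):
    """Associate each previous-frame point with its nearest current-frame
    point by Euclidean distance (the prior-art method described above)."""
    pairs = []
    for p in prev_pts:
        q = min(curr_pts, key=lambda c: math.dist(p, c))
        pairs.append((p, q))
    return pairs

# Two fingers far apart: nearest-neighbor association is correct.
far = nearest_neighbor_associate([(0, 0), (100, 0)], [(5, 0), (105, 0)])

# Two fingers close together: both previous points grab the SAME current
# point, so one track is lost and a wrong track is produced.
near = nearest_neighbor_associate([(0, 0), (8, 0)], [(6, 0), (14, 0)])
```

In the second call, both (0, 0) and (8, 0) select (6, 0) as their nearest neighbor, which is exactly the wrong-association situation of Fig. 6.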
Summary of the invention
The technical problem solved by the present invention is to provide a machine-vision-based multi-touch tracking method that uses image-analysis techniques to associate multiple touch points between adjacent frames correctly and generate correct movement tracks, so that the machine can understand the meaning of a track and the purpose of natural interaction is achieved.
To solve the above technical problem, the technical solution adopted by the present invention is as follows:
A machine-vision-based multi-touch tracking method, comprising the following steps:
Step 1: From empirical statistics, initialize the value range of the touch-point action scope;
Step 2: Obtain the coordinates of every touch point in the current frame and judge whether the current frame is the first frame; if so, execute step 11; if not, execute step 3;
Step 3: According to the current scope range, judge whether the previous frame contains one-to-one or one-to-many touch points; if so, execute step 4; if not, execute step 11;
Step 4: Judge whether the previous frame contains one-to-one touch points; if so, execute step 5; if not, execute step 6;
Step 5: Associate the one-to-one touch points, update the scope range to 1 to 2 times the mean Euclidean distance of the associated pairs, and take the mean angle of the associated pairs as the reference direction; then judge whether the previous frame still contains unassociated one-to-many touch points; if so, execute step 7; if not, execute step 11;
Step 6: Associate the one-to-many touch points, update the scope range to 1 to 2 times the mean Euclidean distance of the associated pairs, and take the mean angle of the associated pairs as the reference direction; execute step 11;
Step 7: For each one-to-many touch point of the previous frame, search along the reference direction, within its action scope, for the linked touch points not yet associated with it, pre-associate them, and execute step 8;
Step 8: Compute the total distance over the pre-associated touch points and judge whether it reaches the relative minimum; if so, execute step 9; otherwise pre-associate again and execute step 7;
Step 9: Compare whether the distances between the pre-associated touch-point pairs all lie in the same range; if so, the pre-association is correct: associate them and execute step 10; otherwise pre-associate again and execute step 7;
Step 10: Associate each pre-associated pair; execute step 11;
Step 11: Obtain the next-frame image and return to step 2.
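The steps above revolve around the two optimality conditions. The following Python sketch shows one brute-force way to realize them by enumerating pairings between frames with equal point counts; the patent's actual direction-guided search is more efficient, and the `tol` consistency threshold here is an assumed illustration, not a value from the patent:

```python
import math
from itertools import permutations

def associate(prev_pts, curr_pts, tol=0.5):
    """Pick the pairing whose total distance is the relative minimum
    (condition 1) and whose pair distances are mutually consistent
    (condition 2): max-min spread within `tol` of the mean distance."""
    best = None
    for perm in permutations(curr_pts):
        dists = [math.dist(p, q) for p, q in zip(prev_pts, perm)]
        total = sum(dists)
        mean = total / len(dists)
        consistent = max(dists) - min(dists) <= tol * mean
        if consistent and (best is None or total < best[0]):
            best = (total, list(zip(prev_pts, perm)))
    return best[1] if best else None

# The crossing case that defeats plain nearest-neighbor search: the
# consistent pairing (distances 6 and 6) wins over the inconsistent
# one (distances 14 and 2).
pairs = associate([(0, 0), (8, 0)], [(6, 0), (14, 0)])
```

Enumerating all permutations is factorial in the point count, which is why the method also bounds candidates by the action scope and searches along a reference direction.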
Compared with the prior art, the beneficial effects of the invention are as follows:
On one hand, the invention creatively lets the machine automatically search for the touch points satisfying two optimality conditions: condition 1 requires that the total distance after associating the touch points between adjacent frames reach a relative minimum; condition 2 requires that the distances between the associated touch points of adjacent frames be mutually consistent. By finding the touch points that satisfy both conditions and associating them, correct motion tracks are generated.
On the other hand, to reduce the search complexity when there are many touch points, the invention incorporates an orientation-first search method, which greatly reduces the computational complexity, improves the search speed, and gives good real-time performance. The invention can still track in real time within 64K of memory, so it is applicable to real-time tracking with limited memory resources; it is also applicable to large-size (over 60-inch) human-machine interaction occasions such as interactive conferences, interactive education, interactive entertainment and, especially, virtual environments.
Description of drawings
Fig. 1 is a schematic diagram of the hardware device of the embodiment of the invention;
Fig. 2 is a schematic flow chart of the embodiment of the invention;
Fig. 3 is a schematic diagram of the coordinate system adopted, the touch-point data structure and the action-scope range of the embodiment;
Fig. 4 is a schematic diagram of the position order of touch points within a frame of the embodiment;
Fig. 5 is a schematic diagram of correct association of touch points between preceding and following frames;
Fig. 6 is a schematic diagram of wrong association of touch points between preceding and following frames.
Embodiment
The embodiments of the invention are described in detail below. It should be noted that the described embodiments are intended to aid understanding of the invention and do not limit it.
As shown in Fig. 1, the hardware required by this embodiment comprises a touch screen, an infrared light source, an image-capture device, a computer and a projector. The touch screen is an ordinary 67-inch rear-projection display used for touching and displaying.
The infrared light source is an infrared line-laser source used to produce an infrared light field. Because the irradiation angle of such a source is limited, this embodiment uses four of them, placed alternately on the upper and lower sides of the projection screen, ensuring that every region of the touch screen is covered by infrared light. The image-capture device is an ordinary infrared camera placed behind the touch screen: when no object touches the screen, the camera delivers an even image sequence of low gray value; when an object touches the screen, the pixel gray values of the contact region become markedly higher than elsewhere. It is precisely because regions of different gray value exist in the image sequence that the computer can later discriminate automatically whether an object is touching the screen.
The computer is connected to the camera by a hardware circuit. It obtains the image sequence from the camera and then runs a series of image-processing algorithms, in order: image denoising, image enhancement, image binarization, touch-point rectification and centroid normalization, thereby obtaining the centroid coordinates of each touch point. It then executes the tracking algorithm of the present invention and finally sends the tracking result to the projector.
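The processing chain above ends with centroid extraction. A minimal Python sketch of the final binarize-and-centroid stage follows; the threshold value and the 4-connected blob labeling are illustrative assumptions, since the patent does not specify the internals of each stage:

```python
def touch_centroids(img, thresh=128):
    """Binarize a grayscale frame (list of pixel rows) and return the
    centroid (x, y) of each 4-connected bright blob, one per touch."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if img[y][x] >= thresh and not seen[y][x]:
                stack, blob = [(y, x)], []
                seen[y][x] = True
                while stack:                      # flood-fill one blob
                    cy, cx = stack.pop()
                    blob.append((cx, cy))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           not seen[ny][nx] and img[ny][nx] >= thresh:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                centroids.append((sum(p[0] for p in blob) / len(blob),
                                  sum(p[1] for p in blob) / len(blob)))
    return centroids

# A tiny synthetic infrared frame with two bright contact regions.
frame = [
    [0,   0,   0, 0,   0],
    [0, 200, 210, 0,   0],
    [0,   0,   0, 0, 250],
    [0,   0,   0, 0,   0],
]
pts = touch_centroids(frame)
```

A real implementation would precede this with the denoising, enhancement and rectification stages the text lists.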
The projector projects the tracking result computed by the computer onto the touch screen, so that a bright spot appears at each touch point and, as a finger moves on the screen, the corresponding track is displayed.
In this embodiment, association means confirming that two touch points in adjacent frames were produced by the motion of the same finger and connecting them; once connected they are never disconnected. Linked means that two touch points in adjacent frames each lie within the other's action scope, but it is not yet certain that they were produced by the same finger motion. Pre-association means that, while it is still undetermined whether two touch points in adjacent frames were produced by the same finger motion, they are provisionally taken to be so and connected; after judgment the connection may still be broken.
In this embodiment, the reference direction is determined dynamically by the touch points in the previous and current frames; it only guides how a touch point of the previous frame searches, within its action scope, for touch points in the current frame.
In this embodiment, if touch points in the preceding and following frames fall within each other's action scope, they are defined as linked touch points. If a touch point of the previous frame has only one linked touch point in the current frame, it is defined as a one-to-one touch point; if it has two or more linked touch points in the current frame, it is defined as a one-to-many touch point. If a touch point of the previous frame is associated with a corresponding linked touch point, the two are called associated touch points. To make the description concrete, the number of touch points in this embodiment is limited to at most 10, but the invention is not limited to 10 touch points, and modifications without essential change all fall within the protection scope of the invention. As shown in Fig. 2, the concrete implementation steps of the method are as follows.
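The one-to-one / one-to-many classification just defined can be sketched as follows (the helper name, the point values and the radius are hypothetical illustrations):

```python
import math

def classify(prev_pts, curr_pts, R):
    """Label each previous-frame point by how many current-frame points
    fall inside its action scope (radius R): 'one-to-one' if exactly one,
    'one-to-many' if two or more, 'ended' if none (the track leaves off)."""
    labels = {}
    for p in prev_pts:
        linked = [q for q in curr_pts if math.dist(p, q) <= R]
        if len(linked) == 1:
            labels[p] = 'one-to-one'
        elif len(linked) >= 2:
            labels[p] = 'one-to-many'
        else:
            labels[p] = 'ended'
    return labels

# One previous point has two nearby candidates, the other has exactly one.
labels = classify([(60, 55), (200, 200)],
                  [(55, 60), (65, 65), (210, 205)], R=15)
```

Only the one-to-many points need the direction-guided pre-association of the later steps; one-to-one points are associated immediately.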
Step S1: From empirical statistics, initialize the action-scope range R of each touch point, and initialize FF=0. FF records whether the current frame is the first frame: FF=1 if it is, FF=2 otherwise.
Step S2: Judge whether the current frame contains touch points. If touch points are detected, record the coordinates of each and judge whether the current frame is the first frame, as follows: if FF==0, set FF=1, indicating that the current frame is the first frame, and execute step S14; otherwise set FF=2 and execute step S3. If no touch point is detected, wait until the system detects one.
Each touch point is recorded as an ordered triple (x, y, f), where x is the horizontal coordinate of the touch point, y its vertical coordinate, and f a flag bit recording the index value used during association, initialized to 0, as shown in Fig. 3.
The touch points scanned in the current frame are sorted in descending order of the vertical coordinate y, deposited in a data container after sorting, and assigned position orders in turn; the principle is shown in Fig. 3. For example, if three touch points (25,38,0), (60,55,0), (98,42,0) are detected in the current frame, they are deposited in the data container in descending order of y and given position orders, and the resulting data-association form is (25,38,0)-2, (60,55,0)-0, (98,42,0)-1.
Step S3:
To avoid confusion, denote the previous frame as frame t-1 and the following frame as frame t. Suppose frame t-1 contains m touch points (1 ≤ m ≤ 10) and frame t contains n touch points (1 ≤ n ≤ 10). n > m indicates that new touch points have appeared in frame t; n < m indicates that the number of touch points has decreased in frame t; n = m indicates that the number of touch points in frames t-1 and t is unchanged.
Set the index values of the flag bits of all touch points in frame t-1 to 0. According to the current action-scope range R, compute whether any touch point of frame t falls within the action scope of each touch point of frame t-1, i.e. judge whether frame t-1 contains one-to-one or one-to-many touch points:
(1) If a touch point of frame t falls within the action scope of some touch point of frame t-1, modify the flag-bit index values of the corresponding touch points of frames t-1 and t according to their position orders, connecting the two. At this time, if the action scope of a touch point of frame t-1 contains no touch point of frame t, that track ends there; set its flag index value to 1<<11 (1 shifted left by 11 bits). If a touch point of frame t lies in the action scope of no touch point of frame t-1, that touch point is the starting point of a newly added track; set its flag index value to 1<<11. Execute step S4.
For example, suppose touch point P of frame t-1 is recorded in the data container as (60,55,0)-1 and its action scope contains two touch points of frame t, A: (55,60,0)-4 and B: (65,65,0)-3. Then the three are associated by modifying the flag index value of P. The association method of the invention is: the index value is represented in the machine as an unsigned short; expanding it in binary gives a 16-bit number, and the bits corresponding to the position orders of the linked touch points are set to 1. After association in this way, the record of P in the data container becomes (60,55,24 (0000 0000 0001 1000))-1.
(2) If no touch point of frame t-1 contains any touch point of frame t within its action scope (i.e. frame t-1 has neither one-to-one nor one-to-many touch points), the previous touch action has finished; set FF=0 and execute step S14.
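The flag-bit scheme of step S3 can be reproduced directly. In the sketch below, `link` and `linked_orders` are hypothetical helper names; the 16-bit index value, the bit set per position order, and the 1<<11 sentinel follow the worked example above:

```python
DONE = 1 << 11   # sentinel: point associated, track ended, or new track start

def link(flag, position_orders):
    """Set one bit per linked point's position order in the 16-bit index."""
    for order in position_orders:
        flag |= 1 << order
    return flag & 0xFFFF          # stay within an unsigned short

def linked_orders(flag):
    """Recover the position orders recorded in the flag bits (below bit 11)."""
    return [b for b in range(11) if flag & (1 << b)]

# The worked example above: a frame t-1 point linked to frame t points with
# position orders 4 and 3 gets index value 24 (binary 0000 0000 0001 1000).
flag = link(0, [4, 3])
```

Keeping the marker bit at position 11 leaves bits 0 through 10 free for the at most 10 position orders of this embodiment.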
Step S4:
According to the current action-scope range R, judge whether frame t-1 contains one-to-one touch points. If so, execute step S5. If frame t-1 contains no one-to-one touch point, it must contain one-to-many touch points; execute step S6.
Step S5:
(1) If frame t-1 contains only one one-to-one touch point, compute the Euclidean distance and angle between it and its corresponding linked touch point, associate the two, update the action-scope range R to 1.5 times this Euclidean distance, and take this angle as the reference-direction value for the direction search; execute step S7.
(2) If frame t-1 contains several one-to-one touch points, first associate each one-to-one pair, then compute the mean Euclidean distance and mean angle over the associated pairs, update the action-scope range R to 1.5 times the mean Euclidean distance, and take the mean angle as the reference-direction value for the direction search; execute step S7.
Step S6:
(1) If frame t-1 contains only one one-to-many touch point, associate it with the linked touch point nearest to it in Euclidean distance, set the index values of both associated points to 1<<11, update the action-scope range R to 1.5 times this Euclidean distance, and take the angle between the two points as the reference-direction value for the direction search. The remaining unassociated points within the action scope of this one-to-many touch point are newly added touch points; likewise set their index values to 1<<11. Execute step S14.
(2) If frame t-1 contains several one-to-many touch points, associate them according to the consistency of the association directions (this consistency must satisfy two points: first, the directions all lie in the same range; second, the associated line segments have no intersection point) while ensuring that the distance between each associated pair lies within the same action-scope range. Set the index values of both points of each associated pair to 1<<11, update the action-scope range R to 1.5 times the mean Euclidean distance of the associated pairs, and take the mean angle of the associated pairs as the reference-direction value for the direction search. The remaining unassociated points within the action scope of each one-to-many touch point are newly added touch points; likewise set their index values to 1<<11. Execute step S14.
Step S7: Judge whether frame t-1 still contains touch points that are unassociated but have two or more touch points of frame t within their action scope, i.e. judge whether frame t-1 still contains one-to-many touch points.
If so, then for each one-to-many touch point of frame t-1, search within its action scope along the reference direction for all its linked touch points in frame t, first checking the index value of the flag bit of each linked touch point found. If the value is 1<<11, that touch point has already been associated with another touch point of frame t-1, so pre-association with this linked touch point is abandoned. If the value is not 1<<11, the linked touch point is connected but not yet associated; pre-associate it, and deposit the distance between the two pre-associated points together with the position order of the pre-associated touch point in frame t into the corresponding distance container (each distance container corresponds to one pre-associated touch point of frame t-1). After the whole action scope has been searched, execute step S8.
If not, execute step S14.
Step S8: Initialize L=0 and take one value from the data in each distance container, according to the following rule:
From the first distance container (the one corresponding to the first pre-associated touch point of frame t-1), take the first datum: after the entries falling in frame t have been sorted along the reference direction, take the datum ranked first, and set L=L+1. From the second distance container also take the first-ranked datum, but its position order must be compared with that of the datum taken from the first container: if the position orders are the same, abandon the first-ranked datum and take the second-ranked one, again comparing its position order with that of the first container's datum, and so on, until a datum whose position order differs from that of the first container's datum is obtained. Similarly, the datum taken from the third distance container must have a position order different from those taken from the first two; each subsequent container must likewise differ in position order from all the data taken from the containers before it. This guarantees that the data taken from the distance containers all have different position orders. Continue until values have been taken from all the distance containers.
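The value-taking rule of step S8, one entry per distance container with all position orders distinct and L selecting the first container's entry, can be sketched as a small backtracking draw (the function and variable names are hypothetical):

```python
def draw_candidate(containers, start_index=0):
    """Pick one (distance, position_order) entry per container such that
    no two picks share a position order. `start_index` plays the role of L
    for the first container; later containers scan from their first entry."""
    picks = []

    def backtrack(i):
        if i == len(containers):
            return True
        begin = start_index if i == 0 else 0
        for dist, order in containers[i][begin:]:
            if all(order != o for _, o in picks):
                picks.append((dist, order))
                if backtrack(i + 1):
                    return True
                picks.pop()
            if i == 0:      # the first container commits to exactly one entry
                break
        return False

    return picks if backtrack(0) else None

# Each container lists (distance, position order) entries sorted along the
# reference direction, as deposited in step S7.
containers = [[(5.0, 0), (9.0, 1)], [(4.0, 0), (6.0, 1)]]
cand = draw_candidate(containers, start_index=0)
```

Incrementing `start_index` on each round reproduces the re-taking of step S11: the first container advances to its L-th entry while the others adjust to keep the position orders distinct.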
Step S9: Compute the distance sum D over the pre-associated touch-point pairs. Then draw one datum at random from each distance container (here data with identical position orders may be drawn) and compute its distance sum D'. If D ≤ D', execute step S10; otherwise, execute step S11.
Step S10: Compare the distance values of the data taken from the distance containers and judge whether they all lie in the same range. If so, execute step S12; if not, execute step S11.
Step S11: Take a value from each distance container again; after the values are taken, execute step S9. The re-taking rule is as follows:
From the first distance container, take the datum ranked in position L, and set L=L+1. From the second distance container, take values in turn starting from the first, but the position order of the value taken must differ from that of the first container's value. The third container must likewise differ in position order from the values of all the containers before it; and so on.
Step S12: Associate the touch points according to the position orders of the data taken from the distance containers, and set the index value of the flag bit of each associated touch point to 1<<11; execute step S13.
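Steps S9 and S10 reduce to two scalar tests on the pre-associated distances. A minimal sketch follows; the `spread_ratio` defining "the same range" is an assumed illustration, since the patent gives no numeric criterion:

```python
def accept_preassociation(pre_dists, alt_dists, spread_ratio=0.5):
    """Steps S9-S10 in miniature: accept the pre-association when its total
    distance does not exceed the alternative draw's total (relative minimum)
    and all of its pair distances sit in the same range."""
    if sum(pre_dists) > sum(alt_dists):
        return False                     # S9 fails: re-draw via S11
    mean = sum(pre_dists) / len(pre_dists)
    return max(pre_dists) - min(pre_dists) <= spread_ratio * mean   # S10

# Consistent short moves beat an inconsistent long/short alternative.
ok = accept_preassociation([6.0, 6.2], [14.0, 2.0])
rejected = accept_preassociation([14.0, 2.0], [6.0, 6.2])
```

When either test fails, the algorithm re-draws from the distance containers (step S11) and tries again.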
Step S13: Check the index values of the flag bits of all touch points in frame t-1. If they are all 1<<11, every point has been associated; execute step S14. If some touch point's flag index value is still not equal to 1<<11, execute step S7.
Step S14: Obtain the next frame; execute step S2.
The steps above describe in detail how the touch points between adjacent frames are tracked. In fact, when step S14 is executed the computer completes two tasks: first, it continues the loop and processes the next frame image; second, it runs the drawing routine and sends the result to the projector, so that the track of the moving finger is displayed on the touch screen in real time.
In summary, on one hand, the multi-touch tracking method of this embodiment works by computing the touch points between adjacent frames that satisfy the following two optimality conditions: condition 1, the total distance after associating the touch points between adjacent frames reaches a relative minimum; condition 2, the distances between the associated touch points of adjacent frames are mutually consistent. By finding and associating the touch points that satisfy both conditions, the embodiment generates correct motion tracks. On the other hand, to reduce the search complexity when there are many touch points, the embodiment also adopts a reference-direction-first search strategy, which lowers the computational complexity of the system.
In addition, as can be seen from the tracking algorithm of this embodiment, only the touch-point coordinates of two adjacent frames actually need to be kept in memory, with no other memory overhead, and the tracking algorithm is concise, so the real-time performance of the embodiment is very good. For occasions with limited storage resources and high real-time requirements, the algorithm can be embedded directly.

Claims (7)

1. A multi-touch tracking method based on machine vision, characterized by comprising the following steps:
Step 1: initializing the touch-point action-scope value range according to empirical statistics;
Step 2: obtaining the coordinates of each touch point of the current frame, and judging whether the current frame is the first frame; if so, executing Step 11; if not, executing Step 3;
Step 3: judging, according to the current action-scope value range, whether one-to-one touch points or one-to-many touch points exist in the previous frame; if so, executing Step 4; if not, executing Step 11;
Step 4: judging whether one-to-one touch points exist in the previous frame; if so, executing Step 5; if not, executing Step 6;
Step 5: associating the one-to-one touch points, updating the action-scope value range to 1 to 2 times the mean Euclidean distance of each pair of associated points, and taking the mean angle of each pair of associated points as the reference direction; then judging whether unassociated one-to-many touch points remain in the previous frame; if so, executing Step 7; if not, executing Step 11;
Step 6: associating the one-to-many touch points, updating the action-scope value range to 1 to 2 times the mean Euclidean distance of each pair of associated points, and taking the mean angle of each pair of associated points as the reference direction; then executing Step 11;
Step 7: for each one-to-many touch point in the previous frame, searching within its action scope, along the reference direction, for successive touch points not yet associated with it, performing pre-association, and executing Step 8;
Step 8: calculating the total distance of the pre-associated touch points and judging whether the total distance reaches a relative minimum; if so, executing Step 9; otherwise, performing pre-association again and executing Step 7;
Step 9: comparing whether the distances between each pair of pre-associated touch points all lie within the same range; if so, the pre-association is correct, and Step 10 is executed; otherwise, pre-association is performed again and Step 7 is executed;
Step 10: associating each pair of pre-associated points, and executing Step 11;
Step 11: obtaining the next frame image, and returning to Step 2.
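The unambiguous branch of the steps above (Steps 2–5, where a previous-frame point has exactly one current-frame candidate inside its action scope) can be sketched as a per-frame loop. This is an illustrative simplification only: it models just the one-to-one case, uses the upper factor 2 of the claimed 1–2× range when refreshing the action scope, and all names are hypothetical:

```python
import math

def track(frames, scope):
    """frames: list of frames, each a list of (x, y) touch points.
    Returns track id -> list of points, following only the one-to-one branch."""
    tracks = {}                       # track id -> list of associated points
    prev = {}                         # track id -> last point of that track
    for i, curr in enumerate(frames):
        if i == 0:                    # Step 2: first frame just seeds the tracks
            for tid, p in enumerate(curr):
                tracks[tid] = [p]
                prev[tid] = p
            continue
        step_lengths = []
        for tid, p in prev.items():   # Steps 3-5: one-to-one association
            in_scope = [q for q in curr if math.dist(p, q) <= scope]
            if len(in_scope) == 1:    # exactly one candidate: associate directly
                tracks[tid].append(in_scope[0])
                prev[tid] = in_scope[0]
                step_lengths.append(math.dist(p, in_scope[0]))
        if step_lengths:              # refresh scope to 2x the mean step length
            scope = 2 * sum(step_lengths) / len(step_lengths)
    return tracks
```

The one-to-many branch (Steps 6–10), where several candidates fall inside one action scope, is the part that requires the reference-direction search, pre-association, and the total-distance / consistency checks of Steps 8 and 9.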
2. The multi-touch tracking method based on machine vision according to claim 1, characterized in that in Step 2 each touch point coordinate takes the form of an ordered tuple (x, y, flag), wherein x denotes the horizontal-axis coordinate of the touch point, y denotes the vertical-axis coordinate of the touch point, and flag is a flag bit used to record the index value during association; flag is initialized to 0, and if a touch point has been associated or needs no association, its flag index value is set to 1&lt;&lt;11.
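The ordered-tuple representation of claim 2 can be illustrated with a few small helpers (hypothetical names; the claim itself only fixes the tuple layout (x, y, flag) and the sentinel value 1&lt;&lt;11):

```python
ASSOCIATED = 1 << 11   # flag index value marking a point as associated / needing no association

def make_point(x, y):
    """Ordered coordinate (x, y, flag), with the flag bit initialised to 0."""
    return [x, y, 0]

def mark_associated(point):
    """Set the flag index value to 1<<11 once the point is associated."""
    point[2] = ASSOCIATED

def is_free(point):
    """A point remains available for (pre-)association while its flag is not 1<<11."""
    return point[2] != ASSOCIATED
```

Using 1&lt;&lt;11 (2048) as the sentinel keeps the flag field usable for ordinary index values below that bound while still being a single-bit test away from 0.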
3. The multi-touch tracking method based on machine vision according to claim 2, characterized in that in Step 5:
if the number of one-to-one touch points is 1, the one-to-one touch point is associated with its successive touch point, the action-scope value range is updated to 1 to 2 times their Euclidean distance, the angle between the two points is taken as the reference-direction value of the search direction, and Step 7 is executed;
if the number of one-to-one touch points is greater than 1, each one-to-one touch point is associated with its corresponding successive touch point, the Euclidean distance and the angle between each pair of associated touch points are calculated, the action-scope value range is updated to 1 to 2 times the mean of these Euclidean distances, the mean of these angles is taken as the reference value of the search direction, and Step 7 is executed.
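The scope and reference-direction update of claim 3 can be sketched as follows (an illustrative sketch: the factor k is any value in the claimed interval [1, 2], here 1.5, and the plain arithmetic mean of angles is a simplification that ignores wrap-around at ±π):

```python
import math

def update_scope_and_direction(pairs, k=1.5):
    """pairs: list of ((x1, y1), (x2, y2)) associated touch-point pairs.
    Returns the new action-scope value range (k times the mean Euclidean
    distance, 1 <= k <= 2) and the reference direction (mean angle)."""
    dists = [math.dist(a, b) for a, b in pairs]
    angles = [math.atan2(b[1] - a[1], b[0] - a[0]) for a, b in pairs]
    scope = k * sum(dists) / len(dists)
    ref_direction = sum(angles) / len(angles)
    return scope, ref_direction
```

With a single pair this degenerates to the first branch of claim 3 (scope from that one distance, direction from that one angle); with several pairs it implements the second branch.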
4. The multi-touch tracking method based on machine vision according to claim 2, characterized in that in Step 6:
if the number of one-to-many touch points is 1, the touch point is associated with the successive touch point closest to it in Euclidean distance, the action-scope value range is updated to 1 to 2 times this Euclidean distance, and the angle of the two associated points is taken as the reference-direction value of the directional search; meanwhile, the flag index values of this one-to-many touch point and its successive touch point are set to 1&lt;&lt;11; Step 11 is executed;
if the number of one-to-many touch points is greater than 1, association is performed according to the consistency of the association directions, the action-scope value range is updated to 1 to 2 times the mean Euclidean distance of each pair of associated points, and the mean angle of each pair of associated points is taken as the reference-direction value of the directional search; meanwhile, the flag index values of each one-to-many touch point and its corresponding successive touch point are set to 1&lt;&lt;11; Step 11 is executed.
5. The multi-touch tracking method based on machine vision according to claim 2, characterized in that in Step 7, for each one-to-many touch point in the previous frame, touch points in the current frame are first searched along the reference direction within its action scope; if the flag index value of a found touch point is 1&lt;&lt;11, association with that touch point is abandoned; if the index value is not 1&lt;&lt;11, the one-to-many touch point and the found touch point are pre-associated, and the distance value between the two pre-associated points, together with the position order of the found touch point in the current frame, is stored in a corresponding distance container, the distance container corresponding to the one-to-many touch point in the previous frame.
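Building the distance container of claim 5 can be sketched as below. This is an illustrative assumption-laden sketch: the angular tolerance `angle_tol` is not specified by the claim, touch points are modelled as (x, y, flag) tuples per claim 2, and candidates are ranked by distance inside the container:

```python
import math

ASSOCIATED = 1 << 11   # flag value per claim 2: point already associated

def build_distance_container(src, curr_points, scope, ref_dir, angle_tol=0.5):
    """Search the current frame along the reference direction within the action
    scope of previous-frame one-to-many point `src`, skipping points whose flag
    is already 1<<11, and store (distance, position order) for each candidate."""
    container = []
    for order, (x, y, flag) in enumerate(curr_points):
        if flag == ASSOCIATED:            # flagged 1<<11: abandon this candidate
            continue
        d = math.dist(src, (x, y))
        angle = math.atan2(y - src[1], x - src[0])
        if d <= scope and abs(angle - ref_dir) <= angle_tol:
            container.append((d, order))
    container.sort()                       # rank candidates by distance
    return container
```

Restricting candidates to a cone around the reference direction is what makes the search direction-based: points behind or far off the expected motion direction are never pre-associated at all.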
6. The multi-touch tracking method based on machine vision according to claim 5, characterized in that in Step 7, for each one-to-many touch point in the previous frame, the successive touch points in the current frame are first searched along the reference direction and a distance container is built; the one-to-many touch points are then associated according to the values taken from the distance containers.
7. The multi-touch tracking method based on machine vision according to claim 6, characterized in that:
in Step 2, the ordered coordinates of the touch points are sorted in descending order of their y values, and the ordered coordinate and position order of each touch point are stored in a data container;
in Step 7, L is first initialized to 0, and values are taken from the data in each distance container according to the following rule: from the first distance container, the data item ranked first is taken, and L = L + 1; from the second distance container, the data item ranked first is likewise taken, but its position order must first be compared with that of the data taken from the first container; if the position orders are identical, the first-ranked data item is abandoned and the second-ranked data item is taken instead, again comparing its position order with that of the data taken from the first container, until a data item whose position order differs from that of the data taken from the first container is obtained; this continues until values have been taken from the distance containers corresponding to all the one-to-many touch points.
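The value-taking rule of claim 7 amounts to a greedy pass over the distance containers in which no current-frame position order may be claimed twice. A minimal sketch (the claim's counter L is replaced here by a set of already-taken position orders, which captures the same exclusion rule; containers are assumed pre-sorted by distance as in claim 5):

```python
def take_values(containers):
    """containers: one list of (distance, position_order) per one-to-many point,
    each sorted by distance. From each container take the best-ranked entry
    whose position order has not already been taken from an earlier container."""
    taken_orders = set()
    picks = []
    for container in containers:
        for dist, order in container:
            if order not in taken_orders:   # skip entries claimed earlier
                picks.append((dist, order))
                taken_orders.add(order)
                break
    return picks
```

In the test below, both containers prefer position order 2; the first container wins it, so the second falls back to its next-ranked entry, exactly the conflict-resolution behaviour the claim describes.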
CN 201010525158 2010-10-29 2010-10-29 Multi-touch tracking method based on machine vision Expired - Fee Related CN102073414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010525158 CN102073414B (en) 2010-10-29 2010-10-29 Multi-touch tracking method based on machine vision


Publications (2)

Publication Number Publication Date
CN102073414A true CN102073414A (en) 2011-05-25
CN102073414B CN102073414B (en) 2013-10-30

Family

ID=44031975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010525158 Expired - Fee Related CN102073414B (en) 2010-10-29 2010-10-29 Multi-touch tracking method based on machine vision

Country Status (1)

Country Link
CN (1) CN102073414B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102279667A (en) * 2011-08-25 2011-12-14 中兴通讯股份有限公司 Method and device for responding screen touch event and communication terminal
CN103268169A (en) * 2013-06-14 2013-08-28 深圳市爱点多媒体科技有限公司 Touch point tracking method and system of touch equipment
WO2013170721A1 (en) * 2012-05-14 2013-11-21 北京汇冠新技术股份有限公司 Multipoint touch trail tracking method
CN103593131A (en) * 2012-08-15 2014-02-19 北京汇冠新技术股份有限公司 Touch track tracking method
CN103970323A (en) * 2013-01-30 2014-08-06 北京汇冠新技术股份有限公司 Method and system for tracking of trajectory of touch screen
CN103970322A (en) * 2013-01-30 2014-08-06 北京汇冠新技术股份有限公司 Method and system for tracking handling of trajectory of touch screen
CN104777984A (en) * 2015-04-30 2015-07-15 青岛海信电器股份有限公司 Touch trajectory tracking method and device and touch screen device
CN105739793A (en) * 2016-02-01 2016-07-06 青岛海信电器股份有限公司 Track matching method and apparatus for touch point, and touch screen device
CN105892744A (en) * 2016-03-31 2016-08-24 青岛海信电器股份有限公司 Touch trajectory tracking method and device and display equipment
CN106125979A (en) * 2016-06-22 2016-11-16 青岛海信电器股份有限公司 Touch track acquisition methods and touch screen
CN108351726A (en) * 2015-11-06 2018-07-31 三星电子株式会社 Input processing method and equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763198A (en) * 2009-12-25 2010-06-30 中国船舶重工集团公司第七○九研究所 Back projection type multi-point touch screen device based on SoC and multi-point touch positioning method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763198A (en) * 2009-12-25 2010-06-30 中国船舶重工集团公司第七○九研究所 Back projection type multi-point touch screen device based on SoC and multi-point touch positioning method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QI Ting, WANG Duo: "Implementation of basic vision-based multi-touch techniques", Computer Technology and Development (《计算机技术与发展》) *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102279667B (en) * 2011-08-25 2016-10-05 南京中兴新软件有限责任公司 Method, device and communication terminal for responding to a screen touch event
CN102279667A (en) * 2011-08-25 2011-12-14 中兴通讯股份有限公司 Method and device for responding screen touch event and communication terminal
WO2013170721A1 (en) * 2012-05-14 2013-11-21 北京汇冠新技术股份有限公司 Multipoint touch trail tracking method
CN103593131B (en) * 2012-08-15 2017-03-08 北京汇冠新技术股份有限公司 Touch track tracking method
CN103593131A (en) * 2012-08-15 2014-02-19 北京汇冠新技术股份有限公司 Touch track tracking method
CN103970323A (en) * 2013-01-30 2014-08-06 北京汇冠新技术股份有限公司 Method and system for tracking of trajectory of touch screen
CN103970322A (en) * 2013-01-30 2014-08-06 北京汇冠新技术股份有限公司 Method and system for tracking handling of trajectory of touch screen
CN103970322B (en) * 2013-01-30 2017-09-01 北京科加触控技术有限公司 Method and system for touch-screen track tracking processing
CN103268169B (en) * 2013-06-14 2015-10-28 深圳市爱点多媒体科技有限公司 Touch point tracking method and system for touch control devices
CN103268169A (en) * 2013-06-14 2013-08-28 深圳市爱点多媒体科技有限公司 Touch point tracking method and system of touch equipment
CN104777984A (en) * 2015-04-30 2015-07-15 青岛海信电器股份有限公司 Touch trajectory tracking method and device and touch screen device
CN108351726A (en) * 2015-11-06 2018-07-31 三星电子株式会社 Input processing method and equipment
CN108351726B (en) * 2015-11-06 2021-07-13 三星电子株式会社 Input processing method and device
CN105739793A (en) * 2016-02-01 2016-07-06 青岛海信电器股份有限公司 Track matching method and apparatus for touch point, and touch screen device
CN105739793B (en) * 2016-02-01 2018-07-13 青岛海信电器股份有限公司 Track matching method and apparatus for touch points, and touch screen device
CN105892744A (en) * 2016-03-31 2016-08-24 青岛海信电器股份有限公司 Touch trajectory tracking method and device and display equipment
CN106125979A (en) * 2016-06-22 2016-11-16 青岛海信电器股份有限公司 Touch track acquisition methods and touch screen
CN106125979B (en) * 2016-06-22 2019-09-20 青岛海信电器股份有限公司 Touch track acquisition method and touch screen

Also Published As

Publication number Publication date
CN102073414B (en) 2013-10-30

Similar Documents

Publication Publication Date Title
CN102073414B (en) Multi-touch tracking method based on machine vision
JP6079832B2 (en) Human computer interaction system, hand-to-hand pointing point positioning method, and finger gesture determination method
US20120326995A1 (en) Virtual touch panel system and interactive mode auto-switching method
JP2017529582A (en) Touch classification
CN101408824A (en) Method for recognizing mouse gestures
US20140125584A1 (en) System and method for human computer interaction
US9779292B2 (en) System and method for interactive sketch recognition based on geometric constraints
CN105589553A (en) Gesture control method and system for intelligent equipment
WO2014127697A1 (en) Method and terminal for triggering application programs and application program functions
CN103902086A (en) Curve fitting based touch trajectory smoothing method and system
CN102622225A (en) Multipoint touch application program development method supporting user defined gestures
CN108628455B (en) Virtual sand painting drawing method based on touch screen gesture recognition
Yin et al. CamK: A camera-based keyboard for small mobile devices
Jia et al. Real‐time hand gestures system based on leap motion
US20140232672A1 (en) Method and terminal for triggering application programs and application program functions
Michel et al. Gesture recognition supporting the interaction of humans with socially assistive robots
CN102364419B (en) Camera type touch control method and system thereof
Lekova et al. Fingers and gesture recognition with kinect v2 sensor
CN102541417A (en) Multi-object tracking method and system in virtual touch screen system
Raj et al. Human computer interaction using virtual user computer interaction system
CN106020712A (en) Touch control gesture recognition method and device
CN102799344A (en) Virtual touch screen system and method
JP2020027647A (en) Robust gesture recognizer and system for projector-camera interactive displays, using deep neural networks and depth camera
CN103558948A (en) Man-machine interaction method applied to virtual optical keyboard
CN106547402A (en) Touch control method, touch frame and smart pen

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP03 Change of name, title or address

Address after: No. 233 Kezhu Road, High-Tech Industrial Development Zone, Guangzhou, Guangdong Province, 510670

Patentee after: VTRON GROUP Co.,Ltd.

Address before: No. 6 Color Road, High-Tech Industrial Development Zone, Guangzhou, Guangdong, China, 510663

Patentee before: VTRON TECHNOLOGIES Ltd.

CP03 Change of name, title or address
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20131030

CF01 Termination of patent right due to non-payment of annual fee