CN104007849B - Virtual navigation device and navigation method thereof - Google Patents
Virtual navigation device and navigation method thereof
- Publication number: CN104007849B (application CN201310060217.7A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
- Classification: Position Input By Displaying (AREA)
Abstract
The invention discloses a virtual navigation device, a navigation method, and a computer program product thereof. The virtual navigation device comprises a working surface, a touch detection module, and a processor. The touch detection module is electrically connected to the working surface and the processor, and detects a plurality of pieces of detection information within a time interval. According to the detection information, the processor determines that at least three touch objects are in contact with the working surface within the time interval. The processor then derives a piece of movement information for each touch object from the detection information and determines a position information signal from the movement information, so that a host can move a cursor on a screen according to the position information signal.
Description
Technical field
The present invention generally relates to a virtual navigation device, a navigation method, and a computer program product thereof; more particularly, it relates to a virtual navigation device, navigation method, and computer program product that require no hand-held hardware housing.
Background art
Computers have become an indispensable necessity in modern life. Most known computer peripherals use a navigation device (for example, a mouse) as one of the main input apparatuses. When operating a computer, a user usually needs the navigation device to move the cursor on the screen, or to click a desired option, application, or the like. The navigation device has therefore become an important bridge between the user and the computer. However, a navigation device occupies a certain volume and its shape is mostly irregular, so a user who needs to use a computer while out cannot easily carry a navigation device along.
In view of this, providing a navigation device that a user can carry lightly and easily is an objective that the industry urgently needs to achieve.
Summary of the invention
To solve the aforementioned problem, the present invention provides a virtual navigation device, a navigation method, and a computer program product thereof.
The virtual navigation device provided by the present invention comprises a working surface, a touch detection module, and a processor, wherein the touch detection module is electrically connected to the working surface and the processor. The touch detection module detects a plurality of pieces of detection information within a time interval. According to the detection information, the processor determines that at least three touch objects are in contact with the working surface within the time interval. The processor further derives a piece of movement information of each touch object from the detection information and determines a position information signal according to the movement information, so that a host can move a cursor on a screen according to the position information signal.
The navigation method provided by the present invention is adapted for a virtual navigation device comprising a working surface, a touch detection module, and a processor. The navigation method comprises the following steps: (a) detecting, by the touch detection module, a plurality of pieces of detection information within a time interval; (b) determining, by the processor according to the detection information, that at least three touch objects are in contact with the working surface within the time interval; (c) deriving, by the processor according to the detection information, a piece of movement information of each touch object; and (d) determining, by the processor according to the movement information, a position information signal so that a host can move a cursor on a screen according to the position information signal.
The computer program product provided by the present invention, after being loaded into a virtual navigation device, executes a plurality of codes comprised therein so that the virtual navigation device performs a navigation method. The codes comprise code A, code B, code C, and code D. When code A is executed, a touch detection module of the virtual navigation device detects a plurality of pieces of detection information within a time interval. When code B is executed, a processor of the virtual navigation device determines, according to the detection information, that at least three touch objects are in contact with the working surface within the time interval. When code C is executed, the processor derives a piece of movement information of each touch object according to the detection information. When code D is executed, the processor determines a position information signal according to the movement information, so that a host can move a cursor on a screen according to the position information signal.
As described above, the present invention achieves navigation by means of an apparatus having a working surface, a touch detection module, and a processor. The present invention detects a plurality of pieces of detection information through the touch detection module and then determines, according to the detection information, whether at least three touch objects are in contact with the working surface within a time interval. If so, the present invention further determines a position information signal so that a host can move a cursor on a screen according to the position information signal. Because the present invention achieves navigation with such an apparatus, which does not take the form of a conventional mechanical or optical mouse, the user can carry it easily.
The detailed technology and preferred embodiments of the present invention are described below with reference to the appended drawings, so that the above objectives, technical features, and advantages become more apparent.
Brief description of the drawings
FIG. 1A is a schematic view of the virtual navigation device of the first embodiment;
FIG. 1B is a schematic view of the detection information detected by the touch detection module;
FIG. 1C is a schematic view of the positions of the touch objects moving on the working surface;
FIG. 1D is a schematic view of the positions of the touch objects moving on the working surface;
FIG. 1E is a schematic view of the positions of the touch objects moving on the working surface;
FIG. 1F is a schematic view of the positions of the touch objects moving on the working surface;
FIG. 1G is a schematic view of the positions of the touch objects moving on the working surface;
FIG. 1H is a schematic view of the positions of the touch objects moving on the working surface;
FIG. 2A is a schematic view of the virtual navigation device of the second and third embodiments;
FIG. 2B is a schematic view of the detection information detected by the touch detection module;
FIG. 3A is a schematic view of the virtual navigation device of the fourth and fifth embodiments;
FIG. 3B and FIG. 3C are schematic views of the detection information detected by the touch detection module; and
FIG. 4A, FIG. 4B, and FIG. 4C are flowcharts of the method of the sixth embodiment.
Symbol description:
1 virtual navigation device
10 position information signal
11 processor
13 touch detection module
15 working surface
17 transceiver interface
12a, 12b, 12c, 12d detection information
T1 time interval
P1, P2, P3, P4, P5, P6 positions
102a, 102b polygons
104a, 104b, 104c motion tracks
20a, 20b operation signals
24a, 24b, 24c, 24d detection information
T2 time interval
3 virtual navigation device
33 touch detection module
35 working surface
33a, 33b sub-touch detection modules
35a, 35b sub-working surfaces
32a, 32b, 32c, 32d detection information
34a, 34b, 34c, 34d detection information
36a, 36b, 36c, 36d detection information
30a position information signal
30a, 30b operation signals
Embodiments
The virtual navigation device, navigation method, and computer program product provided by the present invention will be explained through embodiments below. However, the embodiments are not intended to limit the present invention to any particular environment, application, or manner described therein. The description of the embodiments therefore serves only to explain the present invention, not to limit it directly. It should be noted that, in the following embodiments and drawings, elements not directly related to the present invention are omitted from depiction.
The first embodiment of the present invention is a virtual navigation device 1, a schematic view of which is depicted in FIG. 1A. The virtual navigation device 1 comprises a processor 11, a touch detection module 13, a working surface 15, and a transceiver interface 17. The touch detection module 13 is electrically connected to the processor 11 and the working surface 15, and the processor 11 is electrically connected to the transceiver interface 17.
The processor 11 may be any of the various processors, central processing units, microprocessors, or other computing apparatuses known to those of ordinary skill in the art. The working surface 15 may be planar or non-planar (for example, a curved surface designed ergonomically for human fingers to rest on). The touch detection module 13 may be a capacitive, resistive, optical, piezoelectric, or other type of touch detection module corresponding to the working surface 15. These different types of touch detection modules and their operating principles are known to those of ordinary skill in the art and are not detailed here. Likewise, the transceiver interface 17 may be any of the various transceiver interfaces known to those of ordinary skill in the art.
In this embodiment, the touch detection module 13 detects a plurality of pieces of detection information 12a, 12b, 12c, 12d within a time interval T1, as shown in FIG. 1B. Those of ordinary skill in the art will readily appreciate that different types of touch detection modules 13 detect different types of detection information 12a, 12b, 12c, 12d. According to the detection information 12a, 12b, 12c, 12d, the processor 11 determines that at least three touch objects are in contact with the working surface 15 within the time interval T1. Afterwards, the processor 11 derives a piece of movement information of each touch object from the detection information 12a, 12b, 12c, 12d, and determines a position information signal 10 according to the movement information. If the transceiver interface 17 is connected to a host (not shown), the transceiver interface 17 transmits the position information signal 10 to the host so that the host moves a cursor on a screen according to the position information signal 10.
In other implementations, if the virtual navigation device 1 further comprises a screen, or the touch detection module 13 is a touch screen, the position information signal 10 determined by the processor 11 may instead be used to move a cursor on the screen of the virtual navigation device 1. In such implementations, since the virtual navigation device 1 is not connected to an external host, the transceiver interface 17 may be omitted.
Next, taking the case where the aforementioned at least three touch objects comprise a first touch object, a second touch object, and a third touch object, how this embodiment specifically determines the position information signal 10 is illustrated.
According to the detection information 12a, 12b, 12c, 12d, the processor 11 determines that, within the time interval T1, the first touch object moves from a position P1 on the working surface 15 to a position P2, the second touch object moves from a position P3 on the working surface 15 to a position P4, and the third touch object moves from a position P5 on the working surface 15 to a position P6, as shown in FIG. 1C. The processor 11 calculates a first distance between the positions P1 and P2, a second distance between the positions P3 and P4, and a third distance between the positions P5 and P6. Afterwards, the processor 11 calculates an average of the first, second, and third distances and uses this average as the position information signal 10. In brief, the processor 11 uses the average length of the motion tracks of the first, second, and third touch objects as the position information signal 10.
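The patent specifies no code, but this first scheme — averaging the straight-line distance each touch object travels — can be sketched as follows; the function name and the data layout are purely illustrative assumptions:

```python
import math

def average_displacement(tracks):
    """Average straight-line distance moved by the touch objects.

    `tracks` is a list of (start, end) coordinate pairs, one per touch
    object, e.g. [((x1, y1), (x2, y2)), ...].  The mean of the per-object
    distances plays the role of the position information signal 10.
    """
    distances = [math.dist(start, end) for start, end in tracks]
    return sum(distances) / len(distances)
```

For three objects moving 5, 0, and 10 units, the sketch yields 5.0, mirroring the averaging of the first, second, and third distances described above.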
In other implementations, the processor 11 may instead determine, according to the detection information 12a, 12b, 12c, 12d, that within the time interval T1 the first touch object moves from the position P1 on the working surface 15 to the position P2, the second touch object moves from the position P3 on the working surface 15 to the position P4, and the third touch object moves from the position P5 on the working surface 15 to the position P6, as shown in FIG. 1C. The processor 11 calculates a first average of the positions P1, P3, P5, calculates a second average of the positions P2, P4, P6, calculates the difference between the second average and the first average, and uses this difference as the position information signal 10. In brief, the processor 11 uses the displacement of the center of gravity of the first, second, and third touch objects as the position information signal 10.
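The center-of-gravity scheme just described can likewise be sketched in a few lines; again the function name and data layout are hypothetical, not taken from the patent:

```python
def centroid_displacement(starts, ends):
    """Displacement of the centroid (center of gravity) of the touch points:
    the average end position minus the average start position, as in the
    second scheme for determining the position information signal 10.

    `starts` and `ends` are lists of (x, y) tuples in the same object order.
    """
    n = len(starts)
    cx0 = sum(x for x, _ in starts) / n
    cy0 = sum(y for _, y in starts) / n
    cx1 = sum(x for x, _ in ends) / n
    cy1 = sum(y for _, y in ends) / n
    return (cx1 - cx0, cy1 - cy0)
```

If all three contact points shift by the same vector, the centroid shifts by exactly that vector, which is why this variant behaves like a conventional single-point cursor movement.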
In other implementations, the processor 11 may determine the position information signal 10 in yet another way. Specifically, the processor 11 first defines a polygon from the positions at which the first, second, and third touch objects contact the working surface 15. According to the change of the polygon's area within the time interval T1 (for example, growing, shrinking, or deforming toward different directions), the processor 11 then determines a moving direction and a movement amount comprised in the position information signal 10. Taking the three situations depicted in FIG. 1D, FIG. 1E, and FIG. 1F as examples, the processor 11 first defines a polygon 102a from the positions P1, P3, P5, then determines that the polygon 102a changes into a polygon 102b within the time interval T1 and the area change between the two, and determines the moving direction and the movement amount comprised in the position information signal 10 according to this change. Specifically, the moving directions of FIG. 1D, FIG. 1E, and FIG. 1F are "toward the upper right," "clockwise," and "outward," respectively.
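One way to realize the polygon-area comparison is the shoelace formula; the patent does not name a formula or thresholds, so the 1.2 ratio and the gesture labels below are invented for illustration only:

```python
def polygon_area(points):
    """Unsigned area of a polygon given its vertices, via the shoelace formula."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def zoom_gesture(before, after, ratio=1.2):
    """Classify a grow/shrink gesture from the area change of the polygon
    spanned by the touch points (hypothetical threshold ratio)."""
    a0, a1 = polygon_area(before), polygon_area(after)
    if a1 > a0 * ratio:
        return "outward"   # polygon grew, e.g. FIG. 1F: enlarge the picture
    if a1 * ratio < a0:
        return "inward"    # polygon shrank
    return "none"
```

A triangle whose vertices spread apart doubles its side lengths and quadruples its area, so the "outward" case of FIG. 1F is detected from area growth alone.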
Under the implementations described in the preceding paragraph, the moving direction comprised in the position information signal 10 can make the host control a visual variation effect or an auditory variation effect. The visual variation effect may comprise one of, or a combination of, rotating, enlarging, and shrinking a display area on the screen, wherein the display area displays one of, or a combination of, a figure, a window, and a cursor. The auditory variation effect may comprise one of, or a combination of, volume control, audio selection, and audio mixing. Taking FIG. 1E as an example, the moving direction determined by the processor 11 is "clockwise," so the host can increase its volume according to this moving direction and determine the amount of the increase according to the movement amount. Taking FIG. 1F as another example, the moving direction determined by the processor 11 is "outward," so the host can enlarge the picture presented on the screen according to this moving direction and determine the magnification according to the movement amount.
In other implementations, the processor 11 may determine the position information signal 10 in still another way. Specifically, the processor 11 determines the moving direction comprised in the position information signal 10 according to the motion track of each touch object on the working surface 15 within the time interval T1. The processor 11 further calculates an average of the motion tracks as the movement amount comprised in the position information signal 10. Taking the two situations depicted in FIG. 1G and FIG. 1H as examples, the processor 11 determines the moving direction comprised in the position information signal 10 according to the motion tracks 104a, 104b, 104c of the first, second, and third touch objects on the working surface 15 within the time interval T1. Specifically, the moving directions of FIG. 1G and FIG. 1H are "clockwise" and "outward," respectively. Then the processor 11 calculates the average of the motion tracks 104a, 104b, 104c as the movement amount comprised in the position information signal 10. Similarly, under such implementations, the moving direction comprised in the position information signal 10 can make the host control a visual variation effect or an auditory variation effect.
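As a sketch of this motion-track scheme: the patent does not say how the shared moving direction is computed, so the version below approximates it with the mean net-displacement vector of the tracks (an assumption), while the movement amount is the average track length as stated:

```python
import math

def track_length(track):
    """Length of a polyline motion track given as a list of (x, y) points."""
    return sum(math.dist(track[i], track[i + 1])
               for i in range(len(track) - 1))

def direction_and_amount(tracks):
    """Return a (direction, amount) pair: the mean net-displacement vector
    of the tracks (assumed stand-in for the moving direction) and the
    average track length (the movement amount of signal 10)."""
    n = len(tracks)
    dx = sum(t[-1][0] - t[0][0] for t in tracks) / n
    dy = sum(t[-1][1] - t[0][1] for t in tracks) / n
    amount = sum(track_length(t) for t in tracks) / n
    return (dx, dy), amount
```

A rotational gesture such as FIG. 1G would need an angular measure instead of a net-displacement vector; this sketch only covers translational tracks.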
The various ways of determining the position information signal 10 described above are illustrated with the case of three touch objects. Based on the above description, those of ordinary skill in the art can readily appreciate how the processor 11 determines the position information signal 10 when more than three touch objects contact the working surface 15 within the time interval T1, so the details are not repeated here.
Furthermore, the virtual navigation device 1 may first determine whether the touch objects on the working surface 15 are human fingers before proceeding with the subsequent operations (that is, determining the position information signal 10 and, according to it, controlling the cursor on the screen or the visual and/or auditory variation effects on the screen). Similarly, taking the case where the aforementioned at least three touch objects comprise a first, a second, and a third touch object, how to detect whether the touch objects are human fingers is illustrated. According to the detection information 12a, 12b, 12c, 12d, the processor 11 determines that, within the time interval T1, the first touch object moves from the position P1 on the working surface 15 to the position P2, the second touch object moves from the position P3 on the working surface 15 to the position P4, and the third touch object moves from the position P5 on the working surface 15 to the position P6, as shown in FIG. 1C. Then, according to the relative positions among the positions P1, P3, P5 and among the positions P2, P4, P6, the processor 11 determines whether the first, second, and third touch objects are each a human finger. If they are human fingers, the processor 11 further determines the position information signal 10 and, according to it, controls the cursor on the screen or the visual and/or auditory variation effects on the screen.
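The patent does not disclose the actual finger test, only that it uses the relative positions of the contact points. One conceivable heuristic, shown purely as an illustration with invented thresholds (in millimetres), is to require every pairwise distance between contacts to fall within a plausible fingertip-spacing range:

```python
import math
from itertools import combinations

def plausible_finger_layout(points, min_gap=8.0, max_gap=120.0):
    """Rough plausibility check (illustrative only, not the patented test):
    fingertips resting on a surface are neither coincident nor arbitrarily
    far apart, so every pairwise distance between contact points should lie
    in a bounded range.  `min_gap` and `max_gap` are hypothetical values.
    """
    return all(min_gap <= math.dist(p, q) <= max_gap
               for p, q in combinations(points, 2))
```

A real implementation could apply such a check to both the start positions (P1, P3, P5) and the end positions (P2, P4, P6), rejecting contacts that fail either snapshot.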
Through the configuration of the first embodiment, the user does not have to grip an apparatus with a hardware housing; the user only has to place touch objects (for example, fingers) on the working surface 15 of the virtual navigation device 1 and move them on the working surface 15 to control the cursor on the screen or the visual and/or auditory variation effects on the screen. In addition, the virtual navigation device 1 may be designed to proceed with the subsequent operations according to the position information signal 10 only when the touch objects on the working surface 15 are human fingers, thereby avoiding the influence of accidental contact with the working surface 15 by non-humans (for example, pets).
For the second embodiment of the present invention, please refer to FIG. 2A and FIG. 2B. In the second embodiment, the virtual navigation device 1 can perform all the operations and functions it performs in the first embodiment. However, in the second embodiment, the virtual navigation device 1 continues to operate in a time interval T2 following the time interval T1. Only the differences between the second embodiment and the first embodiment are detailed below.
As in the first embodiment, the touch detection module 13 detects a plurality of pieces of detection information 12a, 12b, 12c, 12d within the time interval T1, as shown in FIG. 2B. According to the detection information 12a, 12b, 12c, 12d, the processor 11 then determines that at least three touch objects (for example, a first, a second, and a third touch object) are in contact with the working surface 15 within the time interval T1. Afterwards, the processor 11 derives the movement information of each touch object from the detection information 12a, 12b, 12c, 12d and determines the position information signal 10 according to the movement information. At this point, the navigation function of the virtual navigation device 1 is in operation.
After the navigation function of the virtual navigation device 1 is in operation, the touch detection module 13 detects a plurality of pieces of detection information 24a, 24b, 24c, 24d in the time interval T2 following the time interval T1. Depending on the content of the detection information 24a, 24b, 24c, 24d, there are several different situations, illustrated below.
The first situation is illustrated first. According to the detection information 24a, 24b, 24c, 24d, the processor 11 determines that the first, second, third, and a fourth touch object are located on the working surface 15 within the time interval T2. After determining that the fourth touch object is located on the working surface 15 within the time interval T2, the processor 11 determines an operation signal 20a. The transceiver interface 17 then transmits the operation signal 20a to the host so that the host performs an operation according to the operation signal 20a. In brief, in the first situation, the navigation function of the virtual navigation device 1 is already in operation in the time interval T1; therefore, if another touch object (for example, the aforementioned fourth touch object) is added in the subsequent time interval T2, the virtual navigation device 1 generates the operation signal 20a so that the host performs an appropriate operation accordingly.
The second situation is illustrated next. According to the detection information 24a, 24b, 24c, 24d, the processor 11 determines that the first, second, third, and fourth touch objects are located on the working surface 15 within the time interval T2. After determining that the fourth touch object is located on the working surface 15 within the time interval T2, the processor 11 determines an operation signal 20a. In addition, after determining that the first, second, and third touch objects are located on the working surface 15 within the time interval T2, the processor 11 determines an operation signal 20b. Afterwards, the transceiver interface 17 transmits the operation signals 20a, 20b to the host so that the host performs a first operation and a second operation according to the operation signals 20a, 20b, respectively.
In brief, in the second situation, the navigation function of the virtual navigation device 1 is already in operation in the time interval T1; therefore, if another touch object (for example, the aforementioned fourth touch object) is added in the subsequent time interval T2, the virtual navigation device 1 performs different operations for the movements of the original touch objects and of the newly added touch object. For example, within the time interval T1 the user first moves three fingers of the right hand on the working surface 15; within the subsequent time interval T2, the three right-hand fingers keep moving on the working surface 15 while a finger of the left hand is added. According to the way the user's fingers move within the time interval T2, the user can make the host perform different operations simultaneously, such as increasing the volume of the host by the movement of the right hand and changing the position of an image on the screen by the movement of the left hand.
The third situation is illustrated next. According to the detection information 24a, 24b, 24c, 24d, the processor 11 determines that a fourth touch object is located on the working surface 15 within the time interval T2. After determining that the fourth touch object is located on the working surface 15, the processor 11 determines an operation signal 20a. Afterwards, the transceiver interface 17 transmits the operation signal 20a to the host so that the host performs an operation according to the operation signal 20a. In brief, in the third situation, the navigation function of the virtual navigation device 1 is already in operation in the time interval T1; therefore, even if another touch object (for example, the aforementioned fourth touch object) is added in the subsequent time interval T2 while the original three touch objects (for example, the aforementioned first, second, and third touch objects) leave the working surface 15, the virtual navigation device 1 still generates the operation signal 20a so that the host performs an appropriate operation accordingly.
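The three situations share one comparison: which touch objects persist from the interval that armed the navigation function and which are new. A minimal sketch of that case analysis follows; per-object identifiers are an assumed tracking layer not described in the patent, and the signal names merely echo 20a/20b from the text:

```python
def operation_signals(armed_ids, current_ids):
    """Sketch of the second embodiment's case analysis.

    `armed_ids`: touch objects present in interval T1 (navigation armed).
    `current_ids`: touch objects present in interval T2.
    A newly added object yields operation signal '20a' (first and third
    situations); if the original objects also remain and move, their
    movement can additionally yield '20b' (second situation).
    """
    added = current_ids - armed_ids
    kept = current_ids & armed_ids
    signals = []
    if added:
        signals.append("20a")
    if added and kept:
        signals.append("20b")
    return signals
```

Note that the patent's first and second situations see the same contacts; whether 20b is actually emitted for the kept group is a design choice of the device, so this sketch simply reports both as available.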
Through the configuration of the second embodiment, once the navigation function of the virtual navigation device 1 is in operation, the user only has to additionally move at least one other touch object (for example, the aforementioned fourth touch object) on the working surface 15 to perform more, different operations. Moreover, even when the original touch objects (for example, the aforementioned first, second, and third touch objects) leave the working surface 15, the virtual navigation device 1 can still generate the operation signal 20a according to the way the other touch objects move on the working surface 15, so that the host performs an appropriate operation accordingly.
For the third embodiment of the present invention, please still refer to FIG. 2A and FIG. 2B. The operations performed by the virtual navigation device 1 in the third embodiment are similar to those performed in the second embodiment, so only the differences between the two are detailed below.
In the second embodiment, once the navigation function of the virtual navigation device 1 is in operation, the user only has to additionally move at least one other touch object on the working surface 15 to perform more, different operations. In the third embodiment, however, once the navigation function of the virtual navigation device 1 is in operation, the user has to additionally move at least three other touch objects on the working surface 15 to perform extra operations. In other words, in the third embodiment, each newly added operation requires at least three touch objects moving on the working surface 15 before it can be performed.
Specifically, after the navigation function of the virtual navigation device 1 is in operation, the touch detection module 13 detects a plurality of pieces of detection information 24a, 24b, 24c, 24d in the time interval T2 following the time interval T1. Depending on the content of the detection information 24a, 24b, 24c, 24d, there are several different situations, illustrated below.
The first situation is illustrated first. According to the detection information 24a, 24b, 24c, 24d, the processor 11 determines that the first, second, third, a fourth, a fifth, and a sixth touch object are located on the working surface 15 within the time interval T2. After determining that the fourth, fifth, and sixth touch objects are located on the working surface 15 within the time interval T2, the processor 11 determines an operation signal 20a. The transceiver interface 17 then transmits the operation signal 20a to the host so that the host performs an operation according to the operation signal 20a.
The second situation is illustrated next. According to the detection information 24a, 24b, 24c, 24d, the processor 11 determines that the first, second, third, fourth, fifth, and sixth touch objects are located on the working surface 15 within the time interval T2. After determining that the fourth, fifth, and sixth touch objects are located on the working surface 15 within the time interval T2, the processor 11 determines an operation signal 20a. In addition, after determining that the first, second, and third touch objects are located on the working surface 15 within the time interval T2, the processor 11 determines an operation signal 20b. Afterwards, the transceiver interface 17 transmits the operation signals 20a, 20b to the host so that the host performs a first operation and a second operation according to the operation signals 20a, 20b, respectively.
The third situation is illustrated next. According to the detection information 24a, 24b, 24c, 24d, the processor 11 determines that the fourth, fifth, and sixth touch objects are located on the working surface 15 within the time interval T2. After determining that the fourth, fifth, and sixth touch objects are located on the working surface 15, the processor 11 determines an operation signal 20a. Afterwards, the transceiver interface 17 transmits the operation signal 20a to the host so that the host performs an operation according to the operation signal 20a.
In addition to the aforesaid operations, the third embodiment can also perform all the operations and functions of the first and second embodiments. Those of ordinary skill in the art can readily appreciate how the third embodiment performs these operations and functions on the basis of the first and second embodiments, so the details are not repeated here.
With the configuration of the third embodiment, once the navigation function of the virtual navigation device 1 has been engaged, a user who intends to perform further operations must additionally place at least three other touch objects (for example, the aforesaid fourth, fifth and sixth touch objects) on the working surface 15 and move them. Moreover, even after the original touch objects (for example, the aforesaid first, second and third touch objects) have left the working surface 15, the virtual navigation device 1 can still generate the operation signal 20a according to the way the other touch objects move on the working surface 15, so that the host performs the appropriate operation accordingly.
The fourth embodiment of the present invention is a virtual navigation device 3, a schematic view of which is depicted in Fig. 3A. The virtual navigation device 3 comprises a processor 11, a touch detection module 33, a working surface 35 and a transceiver interface 17. The touch detection module 33 is electrically connected to the processor 11 and the working surface 35, and the processor 11 is electrically connected to the transceiver interface 17. In this embodiment, the touch detection module 33 comprises touch detection sub-modules 33a, 33b, and the working surface 35 comprises sub-surfaces 35a, 35b, which do not overlap. The touch detection sub-module 33a corresponds to the sub-surface 35a, and the touch detection sub-module 33b corresponds to the sub-surface 35b. The operations that the processor 11 and the transceiver interface 17 can perform in the fourth embodiment are identical to those in the first to third embodiments, so they are not repeated here.
Fig. 3B depicts the detection signals detected by the touch detection sub-module 33a, while Fig. 3C depicts the detection signals detected by the touch detection sub-module 33b. Specifically, the touch detection sub-module 33a detects a plurality of pieces of detection information 32a, 32b, 32c, 32d within the time interval T1, as shown in Fig. 3B. According to the detection information 32a, 32b, 32c, 32d, the processor 11 judges that at least three touch objects (for example, the first touch object, the second touch object and the third touch object) are in contact with the sub-surface 35a within the time interval T1. Afterwards, the processor 11 judges the movement information of each touch object according to the detection information 32a, 32b, 32c, 32d and then determines a position information signal 30a according to these pieces of movement information. At this point, the navigation function of the virtual navigation device 3 has been engaged.
Similar to the second embodiment, the navigation function of the virtual navigation device 3 continues to operate within the time interval T2 that follows the time interval T1.
Within the time interval T2, the touch detection sub-module 33b detects a plurality of pieces of detection information 34a, 34b, 34c, 34d, as shown in Fig. 3C. According to the detection information 34a, 34b, 34c, 34d, the processor 11 judges that a fourth touch object is located on the sub-surface 35b within the time interval T2. After judging that the fourth touch object is located on the sub-surface 35b, the processor 11 determines an operation signal 30b. The transceiver interface 17 then transmits the operation signal 30b to the host, so that the host performs a first operation according to the operation signal 30b.
On the other hand, if the touch detection sub-module 33a detects a plurality of pieces of detection information 36a, 36b, 36c, 36d within the time interval T2, the processor 11 judges, according to the detection information 36a, 36b, 36c, 36d, that the first touch object, the second touch object and the third touch object are located on the sub-surface 35a within the time interval T2. After judging that the first, second and third touch objects are located on the sub-surface 35a within the time interval T2, the processor 11 determines an operation signal 30c. The transceiver interface 17 then transmits the operation signal 30c to the host, so that the host performs a second operation according to the operation signal 30c.
In addition to the aforesaid operations, the fourth embodiment can also perform all the operations and functions of the first and second embodiments; the only difference is that the touch detection module 33 of the fourth embodiment comprises the touch detection sub-modules 33a, 33b and the working surface 35 comprises the sub-surfaces 35a, 35b. Those of ordinary skill in the art can readily appreciate how the fourth embodiment performs these operations and functions on the basis of the first and second embodiments, so the details are not repeated here.
For the fifth embodiment of the present invention, please still refer to Fig. 3A, Fig. 3B and Fig. 3C. The operations performed by the virtual navigation device 3 in the fifth embodiment are similar to those of the fourth embodiment, so only the differences between the two are detailed below.

In the fourth embodiment, once the navigation function of the virtual navigation device 3 has been engaged, the user can trigger a further operation simply by additionally moving at least one other touch object on the other sub-surface. In the fifth embodiment, by contrast, once the navigation function of the virtual navigation device 3 has been engaged, the user must additionally move at least three other touch objects on the other sub-surface to trigger further operations. In other words, in the fifth embodiment, each additional operation requires at least three touch objects moving on a sub-surface before it can be performed.
Specifically, the touch detection sub-module 33a detects a plurality of pieces of detection information 32a, 32b, 32c, 32d within the time interval T1, as shown in Fig. 3B. According to the detection information 32a, 32b, 32c, 32d, the processor 11 judges that at least three touch objects (for example, the first touch object, the second touch object and the third touch object) are in contact with the sub-surface 35a within the time interval T1. Afterwards, the processor 11 judges the movement information of each touch object according to the detection information 32a, 32b, 32c, 32d and then determines the position information signal 30a according to these pieces of movement information. At this point, the navigation function of the virtual navigation device 3 has been engaged.
Within the time interval T2 that follows the time interval T1, the touch detection sub-module 33b detects a plurality of pieces of detection information 34a, 34b, 34c, 34d, as shown in Fig. 3C. According to the detection information 34a, 34b, 34c, 34d, the processor 11 judges that a fourth touch object, a fifth touch object and a sixth touch object are located on the sub-surface 35b within the time interval T2. After judging that the fourth, fifth and sixth touch objects are located on the sub-surface 35b, the processor 11 determines an operation signal 30b. The transceiver interface 17 then transmits the operation signal 30b to the host, so that the host performs a first operation according to the operation signal 30b.
On the other hand, if the touch detection sub-module 33a detects a plurality of pieces of detection information 36a, 36b, 36c, 36d within the time interval T2, the processor 11 judges, according to the detection information 36a, 36b, 36c, 36d, that the first touch object, the second touch object and the third touch object are located on the sub-surface 35a within the time interval T2. After judging that the first, second and third touch objects are located on the sub-surface 35a within the time interval T2, the processor 11 determines an operation signal 30c. The transceiver interface 17 then transmits the operation signal 30c to the host, so that the host performs a second operation according to the operation signal 30c.
In addition to the aforesaid operations, the fifth embodiment can also perform all the operations and functions of the fourth embodiment. Those of ordinary skill in the art can readily appreciate how the fifth embodiment performs these operations and functions on the basis of the fourth embodiment, so the details are not repeated here.
The sixth embodiment of the present invention is a navigation method, flowcharts of which are depicted in Fig. 4A and Fig. 4B. This navigation method is suitable for a virtual navigation device, such as the aforesaid virtual navigation devices 1, 3, that comprises a working surface, a touch detection module and a processor.
First, step S401 is performed: the touch detection module detects a plurality of pieces of first detection information within a first time interval. Then, step S403 is performed: according to the first detection information, the processor judges that at least three touch objects (for example, a first touch object, a second touch object and a third touch object) are in contact with the working surface within the first time interval. Afterwards, step S405 is performed: the processor judges a piece of movement information of each touch object according to the first detection information. Thereafter, step S407 is performed: the processor determines a position information signal according to these pieces of movement information.
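The steps above can be sketched in a few lines of code. This is a minimal illustration only, under assumed simplified data structures (a detection sample modeled as a dict from a hypothetical touch-object id to its contact point); the function and variable names are not taken from the patent, and the simple averaging in the last step is just one of the ways of determining the position information signal described below.

```python
def navigate(detections, min_objects=3):
    """detections: chronological list of {object_id: (x, y)} samples
    taken during the first time interval (step S401)."""
    # Step S403: keep only the objects present in every sample of the interval.
    active = set(detections[0])
    for sample in detections[1:]:
        active &= set(sample)
    if len(active) < min_objects:
        return None  # fewer than three touch objects: navigation not engaged
    # Step S405: movement information per touch object (start -> end position).
    moves = {oid: (detections[0][oid], detections[-1][oid]) for oid in active}
    # Step S407: one simple position information signal -- the average displacement.
    n = len(moves)
    dx = sum(end[0] - start[0] for start, end in moves.values()) / n
    dy = sum(end[1] - start[1] for start, end in moves.values()) / n
    return (dx, dy)

samples = [
    {"f1": (0, 0), "f2": (1, 0), "f3": (0, 1)},
    {"f1": (2, 1), "f2": (3, 1), "f3": (2, 2)},
]
print(navigate(samples))  # (2.0, 1.0)
```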
If the virtual navigation device further comprises a transceiver interface and is used to control a host, the navigation method may further perform step S409: the transceiver interface transmits the position information signal to the host. The host can thereby move a cursor on a screen according to the position information signal. In other implementations, if the virtual navigation device comprises a screen, or its touch detection module is a touch screen, the position information signal determined in step S407 can instead be used to move a cursor on the screen of the virtual navigation device itself. In such implementations, step S409 can be omitted because the virtual navigation device is not connected to an external host.
Next, taking the case where the aforesaid at least three touch objects comprise a first touch object, a second touch object and a third touch object as an example, the various ways of determining the position information signal are described in detail.
In some implementations, in step S405 the processor judges, according to the first detection information detected in step S401, that within the first time interval the first touch object moves from a first position on the working surface to a second position, the second touch object moves from a third position on the working surface to a fourth position, and the third touch object moves from a fifth position on the working surface to a sixth position. Step S407 can then be accomplished by the flow depicted in Fig. 4B.
Specifically, in step S407a, the processor calculates a first distance between the first position and the second position. Then, step S407b is performed: the processor calculates a second distance between the third position and the fourth position. Afterwards, step S407c is performed: the processor calculates a third distance between the fifth position and the sixth position. It should be noted that steps S407a, S407b and S407c may be performed in any order. Afterwards, step S407d is performed: the processor calculates an average value of the first distance, the second distance and the third distance. Then, in step S407e, the processor uses the average value of step S407d as the position information signal.
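The distance-averaging of steps S407a to S407e can be sketched as follows. The function name and the representation of each touch object's movement as a (start, end) pair of coordinates are illustrative assumptions, not part of the patent.

```python
import math

def positioning_signal(paths):
    """paths: one (start, end) point pair per touch object.

    Steps S407a-S407c: the Euclidean distance of each pair;
    steps S407d-S407e: the average of those distances serves
    as the position information signal.
    """
    distances = [math.dist(start, end) for start, end in paths]
    return sum(distances) / len(distances)

# Three touch objects moving on the working surface:
signal = positioning_signal([
    ((0.0, 0.0), (3.0, 4.0)),   # first object:  distance 5.0
    ((1.0, 1.0), (1.0, 4.0)),   # second object: distance 3.0
    ((2.0, 0.0), (2.0, 4.0)),   # third object:  distance 4.0
])
print(signal)  # 4.0
```

Note that a scalar like this conveys only a magnitude; the direction-carrying variants are described in the following paragraphs.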
In some implementations, in step S405 the processor likewise judges, according to the first detection information detected in step S401, that within the first time interval the first touch object moves from a first position on the working surface to a second position, the second touch object moves from a third position on the working surface to a fourth position, and the third touch object moves from a fifth position on the working surface to a sixth position. In this case, step S407 can be accomplished by the flow depicted in Fig. 4C.
Specifically, in step S407f, the processor calculates a first average value of the first position, the third position and the fifth position. Afterwards, in step S407g, the processor calculates a second average value of the second position, the fourth position and the sixth position. Then, in step S407h, the processor calculates a difference between the second average value and the first average value. Thereafter, in step S407i, the processor uses the difference as the position information signal.
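The flow of steps S407f to S407i amounts to subtracting the centroid of the start positions from the centroid of the end positions. The sketch below is illustrative only; the function names and the tuple representation of positions are assumptions.

```python
def centroid(points):
    """Average of a list of (x, y) positions."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(points), sum(ys) / len(points))

def positioning_vector(starts, ends):
    """Steps S407f-S407i: the difference between the second average
    (of the end positions) and the first average (of the start
    positions) serves as the position information signal."""
    c0 = centroid(starts)   # step S407f: first average value
    c1 = centroid(ends)     # step S407g: second average value
    return (c1[0] - c0[0], c1[1] - c0[1])  # steps S407h-S407i

starts = [(0, 0), (2, 0), (1, 3)]   # first, third, fifth positions
ends = [(3, 1), (5, 1), (4, 4)]     # second, fourth, sixth positions
print(positioning_vector(starts, ends))  # (3.0, 1.0)
```

Unlike the distance-averaging variant, this difference is a vector, so it carries both a direction and a magnitude.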
In some implementations, step S405 comprises a step (not illustrated) in which the processor defines a polygon from the plurality of positions at which the at least three touch objects contact the working surface, and another step (not illustrated) in which the processor judges a change of the area of the polygon within the first time interval. In step S407, the processor then determines, according to the change of the area, a moving direction and a movement amount comprised in the position information signal. The moving direction can be used to make the host control a visual variation effect and/or an auditory variation effect, while the movement amount can be used to make the host control the adjustment amplitude of the visual variation effect and/or the auditory variation effect. The visual variation effect comprises rotating, enlarging and/or shrinking a display area on the screen, wherein the display area displays a graphic, a form and/or a cursor. The auditory variation effect comprises volume control, audio selection and/or audio mixing.
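The polygon-area variant can be sketched with the standard shoelace formula. The mapping of a growing area to "in" and a shrinking area to "out", like the function names, is an illustrative assumption; the patent only specifies that the area change yields a moving direction and a movement amount.

```python
def polygon_area(pts):
    """Shoelace formula for the polygon spanned by the contact points."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def zoom_command(pts_start, pts_end):
    """Map the area change over the time interval to a direction
    ('in'/'out') and a movement amount (the magnitude of the change)."""
    a0, a1 = polygon_area(pts_start), polygon_area(pts_end)
    direction = "in" if a1 > a0 else "out"
    return direction, abs(a1 - a0)

start = [(0, 0), (2, 0), (1, 2)]   # triangle of the three contacts, area 2.0
end = [(0, 0), (4, 0), (2, 4)]     # fingers spread apart, area 8.0
print(zoom_command(start, end))    # ('in', 6.0)
```

The direction would select the effect (for example, enlarging versus shrinking the display area), and the amount its adjustment amplitude.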
In some implementations, in step S405 the processor judges a movement track of each touch object on the working surface within the first time interval. Step S407 then comprises a step (not illustrated) in which the processor determines, according to these movement tracks, the moving direction comprised in the position information signal. Step S407 further comprises a step (not illustrated) in which the processor calculates an average value of these movement tracks to serve as a movement amount comprised in the position information signal. Similarly, the moving direction can be used to make the host control a visual variation effect and/or an auditory variation effect, while the movement amount can be used to make the host control the adjustment amplitude of the visual variation effect and/or the auditory variation effect.
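One plausible reading of the movement-track variant is sketched below: the direction is taken from the average net displacement of the tracks, and the movement amount from the average track length. Both choices are assumptions made for illustration; the patent leaves the exact computations open.

```python
import math

def track_length(track):
    """Total path length of one movement track (a list of points)."""
    return sum(math.dist(a, b) for a, b in zip(track, track[1:]))

def direction_and_amount(tracks):
    """Direction: the average net displacement vector of the tracks;
    amount: the average path length of the tracks."""
    n = len(tracks)
    dx = sum(t[-1][0] - t[0][0] for t in tracks) / n
    dy = sum(t[-1][1] - t[0][1] for t in tracks) / n
    amount = sum(track_length(t) for t in tracks) / n
    return (dx, dy), amount

# Three touch objects dragged two units to the right:
tracks = [
    [(0, 0), (1, 0), (2, 0)],
    [(0, 1), (1, 1), (2, 1)],
    [(0, 2), (1, 2), (2, 2)],
]
print(direction_and_amount(tracks))  # ((2.0, 0.0), 2.0)
```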
Having described the various ways of determining the position information signal, the operations following step S409 are now described.
In step S411, the touch detection module detects a plurality of pieces of second detection information within a second time interval. Afterwards, in step S413, the processor judges, according to the second detection information, which touch objects are located on the working surface within the second time interval. Then, in step S415, the processor determines at least one operation signal according to the judgment result of step S413. Then, in step S417, the transceiver interface transmits the at least one operation signal to the host, so that the host performs an operation according to each of the at least one operation signal.
Several possible implementations of steps S413 and S415 are described next, although the present invention is not limited thereto.
In some implementations, in step S413 the processor judges, according to the second detection information, that the first touch object, the second touch object, the third touch object and a fourth touch object are located on the working surface within the second time interval. Then, in step S415, after judging that the fourth touch object is located on the working surface within the second time interval, the processor determines a first operation signal so that the host performs a first operation according to the first operation signal. In addition, in step S415, after judging that the first, second and third touch objects are located on the working surface within the second time interval, the processor may further determine a second operation signal so that the host performs a second operation according to the second operation signal.
In some implementations, in step S413 the processor judges, according to the second detection information, that a fourth touch object is located on the working surface within the second time interval. Then, in step S415, after judging that the fourth touch object is located on the working surface within the second time interval, the processor determines an operation signal so that the host performs an operation according to the operation signal.
In some implementations, in step S413 the processor judges, according to the second detection information, that the first touch object, the second touch object, the third touch object, a fourth touch object, a fifth touch object and a sixth touch object are located on the working surface within the second time interval. Then, in step S415, after judging that the fourth, fifth and sixth touch objects are located on the working surface within the second time interval, the processor determines a first operation signal so that the host performs a first operation according to the first operation signal. In addition, in step S415, after judging that the first, second and third touch objects are located on the working surface within the second time interval, the processor may further determine a second operation signal so that the host performs a second operation according to the second operation signal.
In some implementations, in step S413 the processor judges, according to the second detection information, that a fourth touch object, a fifth touch object and a sixth touch object are located on the working surface within the second time interval. Then, in step S415, after judging that the fourth, fifth and sixth touch objects are located on the working surface within the second time interval, the processor determines an operation signal so that the host performs an operation according to the operation signal.
In some implementations, the working surface of the virtual navigation device defines a first sub-surface and a second sub-surface, and the touch detection module comprises a first touch detection sub-module and a second touch detection sub-module. The first sub-surface and the second sub-surface do not overlap; the first touch detection sub-module corresponds to the first sub-surface, and the second touch detection sub-module corresponds to the second sub-surface. In such implementations, step S401 can be performed by the first touch detection sub-module or the second touch detection sub-module, and once the navigation function of the virtual navigation device has been engaged, step S411 can be performed by the first touch detection sub-module and/or the second touch detection sub-module. Several different cases are described below; it should be understood, however, that the claimed scope of the present invention is not limited to these cases.
In some cases, step S401 is performed by the first touch detection sub-module, and in step S403 the processor judges, according to the first detection information, that at least three touch objects (for example, the first touch object, the second touch object and the third touch object) are in contact with the first sub-surface within the first time interval. Subsequently, in step S411, the second touch detection sub-module detects a plurality of pieces of second detection information within the second time interval. In step S413, the processor judges, according to the second detection information, that a fourth touch object is located on the second sub-surface within the second time interval. In step S415, after judging that the fourth touch object is located on the second sub-surface, the processor determines a first operation signal so that the host performs a first operation according to the first operation signal.
In the case described in the preceding paragraph, three corresponding steps (not illustrated) can additionally be performed while steps S411 to S415 are performed. In the first step, the first touch detection sub-module detects a plurality of pieces of third detection information within the second time interval. In the second step, the processor judges, according to the third detection information, that the first touch object, the second touch object and the third touch object are located on the first sub-surface within the second time interval. In the third step, after judging that the first, second and third touch objects are located on the first sub-surface within the second time interval, the processor determines a second operation signal so that the host performs a second operation according to the second operation signal.
In addition, in some cases, step S401 is performed by the first touch detection sub-module, and in step S403 the processor judges, according to the first detection information, that at least three touch objects (for example, the first touch object, the second touch object and the third touch object) are in contact with the first sub-surface within the first time interval. Subsequently, in step S411, the second touch detection sub-module detects a plurality of pieces of second detection information within the second time interval. In step S413, the processor judges, according to the second detection information, that a fourth touch object, a fifth touch object and a sixth touch object are located on the second sub-surface within the second time interval. In step S415, after judging that the fourth, fifth and sixth touch objects are located on the second sub-surface, the processor determines a first operation signal so that the host performs a first operation according to the first operation signal.
In the case described in the preceding paragraph, three corresponding steps (not illustrated) can additionally be performed while steps S411 to S415 are performed. In the first step, the first touch detection sub-module detects a plurality of pieces of third detection information within the second time interval. In the second step, the processor judges, according to the third detection information, that the first touch object, the second touch object and the third touch object are located on the first sub-surface within the second time interval. In the third step, after judging that the first, second and third touch objects are located on the first sub-surface within the second time interval, the processor determines a second operation signal so that the host performs a second operation according to the second operation signal.
Furthermore, in some implementations, in addition to the aforesaid steps S401 to S417, a further step (not illustrated) can be performed to judge whether the touch objects on the working surface are human fingers. In such implementations, in step S405 the processor judges, according to the first detection information, that within the first time interval the first touch object moves from the first position on the working surface to the second position, the second touch object moves from the third position on the working surface to the fourth position, and the third touch object moves from the fifth position on the working surface to the sixth position. Afterwards, the navigation method can perform a step (not illustrated) in which the processor judges that the first touch object, the second touch object and the third touch object are each a human finger according to the relative positions among the first position, the third position and the fifth position and the relative positions among the second position, the fourth position and the sixth position.
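The patent does not spell out how the relative positions are evaluated, so the following is one hedged heuristic reading: three contacts are treated as fingers of one hand if their pairwise spacings lie in a plausible range and stay roughly constant between the start and end positions. The spacing range, tolerance, and all names are assumptions for illustration.

```python
import math
from itertools import combinations

FINGER_SPACING = (10.0, 120.0)   # assumed plausible spacing range, in mm

def pairwise(points):
    """Pairwise distances among three contact points, in a fixed order."""
    return [math.dist(a, b) for a, b in combinations(points, 2)]

def looks_like_fingers(starts, ends, tolerance=15.0):
    """Heuristic finger test from the relative positions among the
    first/third/fifth positions and among the second/fourth/sixth
    positions: plausible spacings that change little over the gesture."""
    d0, d1 = pairwise(starts), pairwise(ends)
    in_range = all(FINGER_SPACING[0] <= d <= FINGER_SPACING[1] for d in d0)
    rigid = all(abs(a - b) <= tolerance for a, b in zip(d0, d1))
    return in_range and rigid

starts = [(0, 0), (25, 5), (50, 0)]     # first, third, fifth positions
ends = [(10, 10), (35, 15), (60, 10)]   # same shape, translated
print(looks_like_fingers(starts, ends))  # True
```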
In addition to the aforesaid steps, the sixth embodiment can also perform all the operations and functions of the first to fifth embodiments. Those of ordinary skill in the art can readily appreciate how the sixth embodiment performs these operations and functions on the basis of the first to fifth embodiments, so the details are not repeated here.
Furthermore, the navigation method described in the sixth embodiment can be realized by a computer program product. When a virtual navigation device loads this computer program product and executes the plurality of instructions comprised therein, the navigation method described in the sixth embodiment is accomplished. The computer program product may be a file transmittable over a network, or may be stored in a computer-readable recording medium, for example a read-only memory (ROM), a flash memory, a floppy disk, a hard disk, an optical disc, a USB flash drive, a magnetic tape, a database accessible over a network, or any other storage medium with the same function that is well known to those skilled in the art.
As described above, the present invention achieves the purpose of navigation by using a device that has a working surface, a touch detection module and a processor. The present invention detects a plurality of pieces of detection information through the touch detection module and then judges accordingly whether at least three touch objects are in contact with the working surface within a time interval. If at least three touch objects are in contact with the working surface within the time interval, the present invention can further determine a position information signal so that a host moves a cursor on a screen according to the position information signal. Because the present invention achieves the purpose of navigation with a device having a working surface, a touch detection module and a processor, and such a device does not have the form factor of a conventional mouse or optical mouse, the user can carry it easily.
The above embodiments are intended only to enumerate implementations of the present invention and to explain its technical features, not to limit the scope of protection of the present invention. Any modification or equivalent arrangement that a person skilled in the art can readily accomplish belongs to the scope advocated by the present invention, and the scope of the present invention shall be defined by the appended claims.
Claims (39)
1. A virtual navigation device, characterized in that the virtual navigation device comprises:
a working surface;
a touch detection module, electrically connected to the working surface, for detecting a plurality of pieces of first detection information within a first time interval; and
a processor, electrically connected to the touch detection module, for judging, according to the plurality of pieces of first detection information, that at least three touch objects are in contact with the working surface within the first time interval, wherein the at least three touch objects comprise a first touch object, a second touch object and a third touch object;
wherein the processor performs the following operation to judge the respective movement information of the first touch object, the second touch object and the third touch object:
judging, according to the plurality of pieces of first detection information, that within the first time interval the first touch object moves from a first position on the working surface to a second position, the second touch object moves from a third position on the working surface to a fourth position, and the third touch object moves from a fifth position on the working surface to a sixth position;
wherein the processor further judges that the first touch object, the second touch object and the third touch object are each a human finger according to the relative positions among the first position, the third position and the fifth position and the relative positions among the second position, the fourth position and the sixth position;
wherein the processor further determines a position information signal according to the movement information of the first touch object, the movement information of the second touch object and the movement information of the third touch object, so that a host moves a cursor on a screen according to the position information signal.
2. The virtual navigation device as claimed in claim 1, characterized in that the virtual navigation device further comprises:
a transceiver interface, adapted to be connected to the host, for transmitting the position information signal to the host.
3. The virtual navigation device as claimed in claim 1, characterized in that the processor calculates a first distance between the first position and the second position, calculates a second distance between the third position and the fourth position, calculates a third distance between the fifth position and the sixth position, calculates an average value of the first distance, the second distance and the third distance, and uses the average value as the position information signal.
4. The virtual navigation device of claim 1, characterised in that the processor calculates a first average value of the first position, the third position, and the fifth position, calculates a second average value of the second position, the fourth position, and the sixth position, calculates a difference between the second average value and the first average value, and uses the difference as the location information signal.
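Claim 4 amounts to tracking the displacement of the centroid of the three fingertips. A minimal sketch, assuming 2-D coordinates (the function names are illustrative, not from the patent):

```python
def centroid(points):
    """Arithmetic mean of a list of (x, y) points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def centroid_displacement(starts, ends):
    """Claim 4 sketch: difference between the ending centroid
    (second average value) and the starting centroid (first average value)."""
    c0, c1 = centroid(starts), centroid(ends)
    return (c1[0] - c0[0], c1[1] - c0[1])

starts = [(0, 0), (4, 0), (2, 3)]
ends = [(1, 2), (5, 2), (3, 5)]
print(centroid_displacement(starts, ends))  # (1.0, 2.0)
```

Unlike the scalar distance average of claim 3, the centroid difference preserves direction, so it can drive a cursor delta directly.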
5. The virtual navigation device of claim 1, characterised in that the processor further defines a polygon according to the first position, the third position, and the fifth position, and the processor determines, according to a change of an area of the polygon within the first time interval, a moving direction and a moving distance included in the location information signal.
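The polygon-area change of claim 5 can be computed with the shoelace formula; a sketch under the assumption that the three fingertip positions form a simple (non-self-intersecting) polygon:

```python
def polygon_area(points):
    """Shoelace formula for the area of a simple polygon given as (x, y) vertices."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

# Triangle spanned by the first, third, and fifth positions, then by the
# second, fourth, and sixth positions: a growing area could map to "zoom in".
before = polygon_area([(0, 0), (2, 0), (0, 2)])  # 2.0
after = polygon_area([(0, 0), (4, 0), (0, 4)])   # 8.0
print(after - before)  # 6.0
```

The sign of the area change (spread versus pinch) is what would distinguish the two directions of the effect.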
6. The virtual navigation device of claim 1, characterised in that the processor determines the moving direction included in the location information signal according to the respective motion tracks of the first touch-control object, the second touch-control object, and the third touch-control object on the working face within the first time interval, and the processor calculates an average value of the motion track of the first touch-control object, the motion track of the second touch-control object, and the motion track of the third touch-control object as a moving distance included in the location information signal.
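One plausible reading of claim 6 is to average the per-finger displacement vectors and report their common heading and magnitude. A sketch under that assumption (the name `average_motion` is illustrative):

```python
import math

def average_motion(tracks):
    """Claim 6 sketch: average the per-finger displacement vectors and
    return a moving direction (radians) and a moving distance."""
    vxs = [e[0] - s[0] for s, e in tracks]
    vys = [e[1] - s[1] for s, e in tracks]
    vx, vy = sum(vxs) / len(tracks), sum(vys) / len(tracks)
    return math.atan2(vy, vx), math.hypot(vx, vy)

# Three fingers all sliding 3 units to the right.
tracks = [((0, 0), (3, 0)), ((5, 1), (8, 1)), ((2, 4), (5, 4))]
direction, distance = average_motion(tracks)
print(direction, distance)  # 0.0 3.0
```

Averaging before taking the angle cancels small per-finger jitter instead of letting one noisy track swing the direction.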
7. The virtual navigation device of claim 5 or 6, characterised in that the moving direction is used to make the host control one of a visual variation effect and an auditory variation effect, or a combination thereof.
8. The virtual navigation device of claim 7, characterised in that the visual variation effect comprises one of rotation, enlargement, and reduction of a display area on the screen, or a combination thereof, wherein the display area displays one of a graphic, a window, and a cursor, or a combination thereof.
9. The virtual navigation device of claim 7, characterised in that the auditory variation effect comprises one of a volume control, an audio selection, and an audio mix, or a combination thereof.
10. The virtual navigation device of claim 1, characterised in that the working face is a non-planar surface.
11. The virtual navigation device of claim 1, characterised in that the working face defines a first sub-working face and a second sub-working face that do not overlap, the touch control detection module comprises a first sub touch control detection module corresponding to the first sub-working face and a second sub touch control detection module corresponding to the second sub-working face, the first touch-control object, the second touch-control object, and the third touch-control object contact the first sub-working face within the first time interval, the second sub touch control detection module further detects a plurality of pieces of second detection information within a second time interval, the second time interval is after the first time interval, the processor further judges, according to the pieces of second detection information, that a fourth touch-control object is located on the second sub-working face within the second time interval, and the processor further determines a first operation signal after judging that the fourth touch-control object is located on the second sub-working face, so that the host performs a first operation according to the first operation signal.
12. The virtual navigation device of claim 11, characterised in that the first sub touch control detection module further detects a plurality of pieces of third detection information within the second time interval, the processor further judges, according to the pieces of third detection information, that the first touch-control object, the second touch-control object, and the third touch-control object are located on the first sub-working face within the second time interval, and the processor further determines a second operation signal after judging that the first touch-control object, the second touch-control object, and the third touch-control object are located on the first sub-working face within the second time interval, so that the host performs a second operation according to the second operation signal.
13. The virtual navigation device of claim 1, characterised in that the touch control detection module further detects a plurality of pieces of second detection information within a second time interval, the second time interval is after the first time interval, the processor further judges, according to the pieces of second detection information, that the first touch-control object, the second touch-control object, the third touch-control object, and a fourth touch-control object are located on the working face within the second time interval, and the processor further determines a first operation signal after judging that the fourth touch-control object is located on the working face, so that the host performs a first operation according to the first operation signal.
14. The virtual navigation device of claim 13, characterised in that the processor further determines a second operation signal after judging that the first touch-control object, the second touch-control object, and the third touch-control object are located on the working face within the second time interval, so that the host performs a second operation according to the second operation signal.
15. The virtual navigation device of claim 1, characterised in that the touch control detection module further detects a plurality of pieces of second detection information within a second time interval, the second time interval is after the first time interval, the processor further judges, according to the pieces of second detection information, that a fourth touch-control object is located on the working face within the second time interval, and the processor further determines an operation signal after judging that the fourth touch-control object is located on the working face, so that the host performs an operation according to the operation signal.
16. The virtual navigation device of claim 1, characterised in that the working face defines a first sub-working face and a second sub-working face that do not overlap, the touch control detection module comprises a first sub touch control detection module corresponding to the first sub-working face and a second sub touch control detection module corresponding to the second sub-working face, the first touch-control object, the second touch-control object, and the third touch-control object contact the first sub-working face within the first time interval, the second sub touch control detection module further detects a plurality of pieces of second detection information within a second time interval, the second time interval is after the first time interval, the processor further judges, according to the pieces of second detection information, that a fourth touch-control object, a fifth touch-control object, and a sixth touch-control object are located on the second sub-working face within the second time interval, and the processor further determines a first operation signal after judging that the fourth touch-control object, the fifth touch-control object, and the sixth touch-control object are located on the second sub-working face, so that the host performs a first operation according to the first operation signal.
17. The virtual navigation device of claim 16, characterised in that the first sub touch control detection module further detects a plurality of pieces of third detection information within the second time interval, the processor further judges, according to the pieces of third detection information, that the first touch-control object, the second touch-control object, and the third touch-control object are located on the first sub-working face within the second time interval, and the processor further determines a second operation signal after judging that the first touch-control object, the second touch-control object, and the third touch-control object are located on the first sub-working face within the second time interval, so that the host performs a second operation according to the second operation signal.
18. The virtual navigation device of claim 1, characterised in that the touch control detection module further detects a plurality of pieces of second detection information within a second time interval, the second time interval is after the first time interval, the processor further judges, according to the pieces of second detection information, that the first touch-control object, the second touch-control object, the third touch-control object, a fourth touch-control object, a fifth touch-control object, and a sixth touch-control object are located on the working face within the second time interval, and the processor further determines a first operation signal after judging that the fourth touch-control object, the fifth touch-control object, and the sixth touch-control object are located on the working face, so that the host performs a first operation according to the first operation signal.
19. The virtual navigation device of claim 18, characterised in that the processor further determines a second operation signal after judging that the first touch-control object, the second touch-control object, and the third touch-control object are located on the working face within the second time interval, so that the host performs a second operation according to the second operation signal.
20. The virtual navigation device of claim 1, characterised in that the touch control detection module further detects a plurality of pieces of second detection information within a second time interval, the second time interval is after the first time interval, the processor further judges, according to the pieces of second detection information, that a fourth touch-control object, a fifth touch-control object, and a sixth touch-control object are located on the working face within the second time interval, and the processor further determines an operation signal after judging that the fourth touch-control object, the fifth touch-control object, and the sixth touch-control object are located on the working face, so that the host performs an operation according to the operation signal.
21. A navigation method, adapted for a virtual navigation device, the virtual navigation device comprising a working face, a touch control detection module, and a processor, characterised in that the navigation method comprises the following steps:
(a) detecting, by the touch control detection module, a plurality of pieces of first detection information within a first time interval;
(b) judging, by the processor according to the pieces of first detection information, that at least three touch-control objects contact the working face within the first time interval, the at least three touch-control objects comprising a first touch-control object, a second touch-control object, and a third touch-control object;
(c) judging, by the processor according to the pieces of first detection information, movement information of each of the first touch-control object, the second touch-control object, and the third touch-control object, comprising the following step:
judging, according to the pieces of first detection information, that within the first time interval the first touch-control object moves from a first position on the working face to a second position, the second touch-control object moves from a third position on the working face to a fourth position, and the third touch-control object moves from a fifth position on the working face to a sixth position;
(d) judging, according to a relative position among the first position, the third position, and the fifth position and a relative position among the second position, the fourth position, and the sixth position, that the first touch-control object, the second touch-control object, and the third touch-control object are each a human finger; and
(e) determining, by the processor, a location information signal according to the movement information of the first touch-control object, the movement information of the second touch-control object, and the movement information of the third touch-control object, so that a host moves a cursor on a screen according to the location information signal.
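Steps (a) through (e) can be read as a small pipeline from touch reports to a cursor delta. The sketch below wires the pieces together under the assumption that each detection report has already been reduced to per-object start/end coordinates; the class and method names are illustrative, not from the patent, and the human-finger check of step (d) is omitted for brevity:

```python
class ThreeFingerNavigator:
    """Illustrative sketch of steps (a)-(e): given per-object start/end
    positions sampled over one time interval, emit a cursor delta."""

    MIN_OBJECTS = 3

    def location_signal(self, moves):
        # (b) require at least three touch-control objects in the interval.
        if len(moves) < self.MIN_OBJECTS:
            return None
        # (c)/(e) reduce the per-object movement information to one delta by
        # averaging the displacement vectors (one of the reduction options
        # the dependent claims enumerate).
        dx = sum(e[0] - s[0] for s, e in moves) / len(moves)
        dy = sum(e[1] - s[1] for s, e in moves) / len(moves)
        return (dx, dy)

nav = ThreeFingerNavigator()
moves = [((0, 0), (2, 1)), ((4, 0), (6, 1)), ((2, 3), (4, 4))]
print(nav.location_signal(moves))      # (2.0, 1.0)
print(nav.location_signal(moves[:2]))  # None
```

Returning `None` when fewer than three objects are present mirrors the claim's gating condition: no location information signal is produced unless the three-finger gesture is detected.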
22. The navigation method of claim 21, characterised in that the virtual navigation device further comprises a transceiver interface, and the navigation method further comprises the following step:
transmitting, by the transceiver interface, the location information signal to the host.
23. The navigation method of claim 21, characterised in that step (e) comprises the following steps:
calculating, by the processor, a first distance between the first position and the second position;
calculating, by the processor, a second distance between the third position and the fourth position;
calculating, by the processor, a third distance between the fifth position and the sixth position;
calculating, by the processor, an average value of the first distance, the second distance, and the third distance; and
using, by the processor, the average value as the location information signal.
24. The navigation method of claim 21, characterised in that step (e) comprises the following steps:
calculating, by the processor, a first average value of the first position, the third position, and the fifth position;
calculating, by the processor, a second average value of the second position, the fourth position, and the sixth position;
calculating, by the processor, a difference between the second average value and the first average value; and
using, by the processor, the difference as the location information signal.
25. The navigation method of claim 21, characterised in that step (c) comprises the following steps:
defining, by the processor, a polygon according to the first position, the third position, and the fifth position; and
judging, by the processor, a change of an area of the polygon within the first time interval;
wherein step (e) determines, by the processor according to the change of the area, a moving direction and a moving distance included in the location information signal.
26. The navigation method of claim 21, characterised in that step (c) judges, by the processor, the respective motion tracks of the first touch-control object, the second touch-control object, and the third touch-control object on the working face within the first time interval, and step (e) comprises the following steps:
determining, by the processor, the moving direction included in the location information signal according to the motion track of the first touch-control object, the motion track of the second touch-control object, and the motion track of the third touch-control object; and
calculating, by the processor, an average value of the motion track of the first touch-control object, the motion track of the second touch-control object, and the motion track of the third touch-control object as a moving distance included in the location information signal.
27. The navigation method of claim 25 or 26, characterised in that the moving direction is used to make the host control one of a visual variation effect and an auditory variation effect, or a combination thereof.
28. The navigation method of claim 27, characterised in that the visual variation effect comprises one of rotation, enlargement, and reduction of a display area on the screen, or a combination thereof, wherein the display area displays one of a graphic, a window, and a cursor, or a combination thereof.
29. The navigation method of claim 27, characterised in that the auditory variation effect comprises one of a volume control, an audio selection, and an audio mix, or a combination thereof.
30. The navigation method of claim 21, characterised in that the working face defines a first sub-working face and a second sub-working face that do not overlap, the touch control detection module comprises a first sub touch control detection module corresponding to the first sub-working face and a second sub touch control detection module corresponding to the second sub-working face, and the first touch-control object, the second touch-control object, and the third touch-control object contact the first sub-working face within the first time interval, the navigation method further comprising the following steps:
detecting, by the second sub touch control detection module, a plurality of pieces of second detection information within a second time interval;
judging, by the processor according to the pieces of second detection information, that a fourth touch-control object is located on the second sub-working face within the second time interval; and
determining, by the processor, a first operation signal after judging that the fourth touch-control object is located on the second sub-working face, so that the host performs a first operation according to the first operation signal.
31. The navigation method of claim 30, characterised in that the navigation method further comprises the following steps:
detecting, by the first sub touch control detection module, a plurality of pieces of third detection information within the second time interval;
judging, by the processor according to the pieces of third detection information, that the first touch-control object, the second touch-control object, and the third touch-control object are located on the first sub-working face within the second time interval; and
determining, by the processor, a second operation signal after judging that the first touch-control object, the second touch-control object, and the third touch-control object are located on the first sub-working face within the second time interval, so that the host performs a second operation according to the second operation signal.
32. The navigation method of claim 21, characterised in that the navigation method further comprises the following steps:
detecting, by the touch control detection module, a plurality of pieces of second detection information within a second time interval;
judging, by the processor according to the pieces of second detection information, that the first touch-control object, the second touch-control object, the third touch-control object, and a fourth touch-control object are located on the working face within the second time interval; and
determining, by the processor, a first operation signal, so that the host performs a first operation according to the first operation signal.
33. The navigation method of claim 32, characterised in that the navigation method further comprises the following step:
determining, by the processor, a second operation signal after judging that the first touch-control object, the second touch-control object, and the third touch-control object are located on the working face within the second time interval, so that the host performs a second operation according to the second operation signal.
34. The navigation method of claim 21, characterised in that the navigation method further comprises the following steps:
detecting, by the touch control detection module, a plurality of pieces of second detection information within a second time interval;
judging, by the processor according to the pieces of second detection information, that a fourth touch-control object is located on the working face within the second time interval; and
determining, by the processor, an operation signal, so that the host performs an operation according to the operation signal.
35. The navigation method of claim 21, characterised in that the working face defines a first sub-working face and a second sub-working face that do not overlap, the touch control detection module comprises a first sub touch control detection module corresponding to the first sub-working face and a second sub touch control detection module corresponding to the second sub-working face, and the first touch-control object, the second touch-control object, and the third touch-control object contact the first sub-working face within the first time interval, the navigation method further comprising the following steps:
detecting, by the second sub touch control detection module, a plurality of pieces of second detection information within a second time interval;
judging, by the processor according to the pieces of second detection information, that a fourth touch-control object, a fifth touch-control object, and a sixth touch-control object are located on the second sub-working face within the second time interval; and
determining, by the processor, a first operation signal after judging that the fourth touch-control object, the fifth touch-control object, and the sixth touch-control object are located on the second sub-working face, so that the host performs a first operation according to the first operation signal.
36. The navigation method of claim 35, characterised in that the navigation method further comprises the following steps:
detecting, by the first sub touch control detection module, a plurality of pieces of third detection information within the second time interval;
judging, by the processor according to the pieces of third detection information, that the first touch-control object, the second touch-control object, and the third touch-control object are located on the first sub-working face within the second time interval; and
determining, by the processor, a second operation signal after judging that the first touch-control object, the second touch-control object, and the third touch-control object are located on the first sub-working face within the second time interval, so that the host performs a second operation according to the second operation signal.
37. The navigation method of claim 21, characterised in that the navigation method further comprises the following steps:
detecting, by the touch control detection module, a plurality of pieces of second detection information within a second time interval;
judging, by the processor according to the pieces of second detection information, that the first touch-control object, the second touch-control object, the third touch-control object, a fourth touch-control object, a fifth touch-control object, and a sixth touch-control object are located on the working face within the second time interval; and
determining, by the processor, a first operation signal, so that the host performs a first operation according to the first operation signal.
38. The navigation method of claim 37, characterised in that the navigation method further comprises the following step:
determining, by the processor, a second operation signal after judging that the first touch-control object, the second touch-control object, and the third touch-control object are located on the working face within the second time interval, so that the host performs a second operation according to the second operation signal.
39. The navigation method of claim 21, characterised in that the navigation method further comprises the following steps:
detecting, by the touch control detection module, a plurality of pieces of second detection information within a second time interval;
judging, by the processor according to the pieces of second detection information, that a fourth touch-control object, a fifth touch-control object, and a sixth touch-control object are located on the working face within the second time interval; and
determining, by the processor, an operation signal, so that the host performs an operation according to the operation signal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310060217.7A CN104007849B (en) | 2013-02-26 | 2013-02-26 | Virtual navigation device and its air navigation aid |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104007849A CN104007849A (en) | 2014-08-27 |
CN104007849B true CN104007849B (en) | 2017-09-22 |
Family
ID=51368541
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310060217.7A Active CN104007849B (en) | 2013-02-26 | 2013-02-26 | Virtual navigation device and its air navigation aid |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104007849B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101308417A (en) * | 2007-05-15 | 2008-11-19 | 宏达国际电子股份有限公司 | Electronic device and its software user interface operation method |
CN101349956A (en) * | 2008-08-11 | 2009-01-21 | 深圳华为通信技术有限公司 | Method and apparatus for executing pattern touch order |
CN101661363A (en) * | 2008-08-28 | 2010-03-03 | 比亚迪股份有限公司 | Application method for multipoint touch sensing system |
CN101950212A (en) * | 2009-07-10 | 2011-01-19 | 群康科技(深圳)有限公司 | Multipoint identification method for touch screen |
CN202120234U (en) * | 2011-03-31 | 2012-01-18 | 比亚迪股份有限公司 | Multipoint translation gesture recognition device for touch device |
TW201205421A (en) * | 2010-07-30 | 2012-02-01 | Kye Systems Corp | A operation method and a system of the multi-touch |
CN102467261A (en) * | 2010-10-28 | 2012-05-23 | 致伸科技股份有限公司 | Method of combining at least two touch control signals into a computer system, and computer mouse |
TW201235903A (en) * | 2011-02-24 | 2012-09-01 | Avermedia Tech Inc | Gesture manipulation method and multimedia display apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN104007849A (en) | 2014-08-27 |
CN109582084A (en) | computer system and input method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||