CN108780365B - Apparatus, system and method for inputting commands or characters using a touch screen - Google Patents

Apparatus, system and method for inputting commands or characters using a touch screen

Info

Publication number
CN108780365B
Authority
CN
China
Prior art keywords
finger
touch
sensitive surface
contact
command
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201680061416.1A
Other languages
Chinese (zh)
Other versions
CN108780365A
Inventor
B. E. Yaron
N. Yaron
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inpris Innovative Products Ltd
Original Assignee
Inpris Innovative Products Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inpris Innovative Products Ltd filed Critical Inpris Innovative Products Ltd
Publication of CN108780365A
Application granted
Publication of CN108780365B
Legal status: Active
Anticipated expiration

Classifications

    • G06F3/04883 — Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures, for inputting data by handwriting, e.g. gesture or text
    • G06F3/038 — Control and interface arrangements for pointing devices, e.g. drivers or device-embedded control circuitry
    • G06F3/04847 — Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • B60K35/00 — Instruments specially adapted for vehicles; arrangement of instruments in or on vehicles
    • B60K35/10 — Input arrangements, i.e. from user to vehicle, associated with vehicle functions or specially adapted therefor
    • B60K2360/143 — Touch sensitive instrument input devices
    • B60K2360/146 — Instrument input by gesture
    • G06F2203/04804 — Transparency, e.g. transparent or translucent windows
    • G06F2203/04808 — Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously, e.g. using several fingers or a combination of fingers and pen

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method, apparatus, and system for inputting commands using a touch-sensitive surface are disclosed. Instead of providing predefined positions at which different commands must be entered, the apparatus identifies the positions of three or more of the user's fingers or other objects. After the finger positions are determined, an input mode allows one or more commands to be entered based on an association between each command and the motion of a particular finger, including the type or direction of movement. The association may include a first command associated with a sliding motion of only a first finger in a first direction and a second, different command associated with a sliding motion of only a second finger in a second direction. The first and second directions may be substantially the same (e.g., within about 30°, within about 20°, within about 15°, within about 10°, or within about 5°) or different (e.g., at an angle greater than 30°, greater than 40°, or greater than 50°). Preferably, the command input mode is triggered by a triggering event. Preferably, while the finger positions are identified and a command is input, at least one of the fingers remains on the touch-sensitive surface.

Description

Apparatus, system and method for inputting commands or characters using a touch screen
Priority claim
This application claims priority from U.S. provisional patent application 62/207,564, filed August 20, 2015, and U.S. provisional patent application 62/266,916, filed December 14, 2015. U.S. patent applications 14/976,005 and 13/091,158 and U.S. provisional patent applications 62/207,564 and 62/266,916 are hereby incorporated by reference in their entirety.
Technical Field
The present teachings relate to apparatuses, systems, and methods for using a touch-sensitive surface to input one or more commands to control an electronic device. Preferably, a positioning mode is employed to position a control location (e.g., a finger contact area) based on contact of the touch-sensitive surface with three or more fingers.
Background
Using a touch screen display to control a device typically makes operation of the device more intuitive and/or simpler. However, in many scenarios a device must be controlled under conditions in which viewing the touch screen is unsafe, impossible, or otherwise inefficient. In these scenarios, the user operates substantially blindly. One approach is to use structural cues (tactile cues) to identify the various control positions. By way of example, input on a keyboard is often facilitated by surface markings on one or more character keys, which aid the positioning of the fingers and reduce typing errors, particularly during touch typing (i.e., typing without viewing the results of the input). However, the screen surface of a typical touch screen has no structural cues. Instead, typical touch screen displays rely on the user's vision to identify and locate control positions. Touch typing is therefore difficult on touch screen displays. In fact, many people who want to send messages while the phone is hidden under a desk or in a pocket prefer a phone with distinct physical keys over one with a touch screen display. When touching a key, the operator continuously receives feedback about the position of each key (i.e., each control position), for example by touching the space between keys or by touching a key having a different shape from an adjacent key.
Although a touch screen display device may be used as a control device, there is a need for an improved method and system to allow the use of a touch screen device as a control device in situations where blind operation is required.
Touch screen devices typically provide visual cues, rather than structural cues, to identify the location of control positions on the surface.
A typical control location is defined by an application. Although an operator may move the control location, for example, by dragging the window, such movement typically requires the operator to first visually identify the initial position of the control location.
There are some applications that accept control input without requiring the user to identify a control location. For example, when viewing a photograph, a user may move two fingers anywhere on the touchscreen to zoom in, zoom out, rotate, or reposition the image. Here, the number of possible controls is limited, because the same operation occurs regardless of which two fingers are used, and because no control positions are recognized.
There is a need for methods, devices, and systems that allow control locations to be defined when a user contacts a touch screen surface, rather than requiring the user to identify the location of existing control locations (e.g., by visual or tactile cues).
Disclosure of Invention
Instead of requiring the user to identify the control location (e.g., by visual cues), the control method according to the present teachings allows the control location to be defined by 3, 4, or 5 fingers of a hand while the user is contacting the touch screen surface.
Here, instead of having the user rely on visual or tactile cues to identify the control locations, the device locates or identifies (i.e., defines) the control locations based on the contact locations of the user's fingers.
After contacting the touch-sensitive surface with 3, 4, or 5 fingers of the hand, some or all of the fingers may be removed and the control position may be maintained until the user makes input contact with the surface using 1, 2, 3, 4, or 5 fingers for controlling the device.
Disclosed is a method of inputting a command, including the steps of: i) a processor connected to the touch-sensitive surface senses the simultaneous localization of three or more objects (e.g., fingers) above and near (e.g., within 10 mm of) or on the touch-sensitive surface at three or more different sensing locations (e.g., coincident with the locations of three or more fingers of the user), including a localization centered on a first finger initial sensing point (e.g., by the first finger), a localization centered on a second finger initial sensing point (e.g., by the second finger), and a localization centered on a third finger initial sensing point (e.g., by the third finger); ii) the processor assigns finger position regions for two or more (e.g., each) of the three or more objects, wherein each finger position region is a distinct region of the touch-sensitive surface and each finger position region includes one of the plurality of initial finger sensing points (e.g., wherein each finger position region corresponds to one finger); iii) the processor enters a command input mode after the step of assigning the finger position regions (e.g., after a predetermined event such as removal of at least one finger, or after a predetermined time interval), wherein the command input mode includes an association having at least a first command associated with movement of only one of the objects beginning in the first finger position region and a second, different command associated with movement of only one of the objects beginning in the second finger position region (e.g., the same movement as, or a different movement from, that used with the first command); and iv) the processor recognizes a gesture on the touch-sensitive surface, including sensing motion of only one of the objects beginning at the first finger position region, and recognizes the associated first command based on the gesture.
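Steps (i)-(iv) above amount to: sense three or more objects, assign one distinct region per object, associate per-region motions with commands, and match a gesture to its command. A minimal Python sketch follows; the circular regions, command names, and coordinates are illustrative assumptions, not taken from the claims:

```python
# A minimal sketch of steps (i)-(iv), assuming a hypothetical driver that
# reports touch points and slide directions. The circular regions, command
# names, and coordinates are illustrative, not taken from the claims.

from dataclasses import dataclass

@dataclass
class Region:
    cx: float  # center of the finger position region (initial sensing point)
    cy: float
    r: float   # assumed radius of the region

    def contains(self, x: float, y: float) -> bool:
        return (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.r ** 2

def assign_regions(points, radius=40.0):
    """Step (ii): one distinct region per sensed object, centered on its
    initial sensing point."""
    return [Region(x, y, radius) for (x, y) in points]

# Step (iii): association between (region index, slide direction) and a command.
ASSOCIATIONS = {
    (0, "right"): "FIRST_COMMAND",   # first finger, slide right
    (1, "right"): "SECOND_COMMAND",  # second finger, same motion, different command
}

def recognize(regions, start, direction):
    """Step (iv): find the region a gesture began in; look up its command."""
    for i, region in enumerate(regions):
        if region.contains(*start):
            return ASSOCIATIONS.get((i, direction))
    return None  # gesture began outside every finger position region

regions = assign_regions([(100, 300), (180, 260), (260, 250)])  # three fingers
print(recognize(regions, start=(105, 295), direction="right"))  # FIRST_COMMAND
```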
Also disclosed is a method of inputting a command, comprising the steps of: identifying simultaneous contacts at three or more locations on the touch-sensitive surface (e.g., consistent with contact by three or more fingers of the user), including a contact centered on a first finger initial contact point (e.g., by a first finger), a contact centered on a second finger initial contact point (e.g., by a second finger), and a contact at a third finger initial contact point (e.g., by a third finger); assigning a control location (e.g., a finger contact area) for each of the three or more fingers, wherein each finger contact area is a different area of the touch-sensitive surface and each finger contact area includes an initial contact point (e.g., wherein each finger contact area corresponds to one finger); and identifying removal of the contacts at the three or more locations (e.g., removal of three or more fingers from the touch-sensitive surface). After assigning the control locations and sensing removal of the contacts, the process may include one or more steps of inputting a control command through contact with one or more control locations (i.e., an input contact). For example, the process may include identifying simultaneous input contacts at one or more finger contact areas (e.g., simultaneous contact of one or more finger contact areas by the respective fingers), each input contact being at an input contact point; while the input contact is continuously maintained, recognizing a gesture on the touch-sensitive surface, including movement of the input contact in one or more directions on the touch-sensitive surface beginning at the input contact point (e.g., the user makes a gesture on the touch-sensitive surface with one or more fingers in contact with the surface, each gesture originating from a respective finger contact area); and recognizing the command to be executed based on the contacted control location and the gesture originating from that control location.
Further disclosed is a method of inputting commands using a processor connected to a touch-sensitive surface, comprising the steps of:
(i) the processor identifies simultaneous contacts at three or more contact locations on the touch-sensitive surface (e.g., consistent with contact by three or more fingers of the user), including a contact centered on a first finger initial contact point (e.g., by the first finger), a contact centered on a second finger initial contact point (e.g., by the second finger), and a contact centered on a third finger initial contact point (e.g., by the third finger); (ii) the processor assigns a finger contact area to each of the three or more fingers, wherein each finger contact area is a different area of the touch-sensitive surface and each finger contact area includes an initial contact point (e.g., wherein each finger contact area corresponds to one finger); (iii) the processor identifies removal of contact with the touch-sensitive surface at one or more (e.g., all) contact locations (e.g., removal of one, two, three, or all fingers from the touch-sensitive surface); (iv) after the step of identifying removal of contact, the processor identifies simultaneous input contacts at one or more finger contact areas (e.g., simultaneous contact of one or more finger contact areas by the respective fingers), each input contact being at an input contact point, and/or the processor identifies one or more of the contacts of step (i) that remain on a finger contact area and identifies the remaining contacts as finger input contacts; (v) while the input contact is continuously maintained, the processor recognizes a gesture on the touch-sensitive surface that includes movement of the input contact in one or more directions on the touch-sensitive surface beginning at the input contact point (e.g., the user makes a gesture on the touch-sensitive surface with one or more fingers in contact with the surface, each gesture originating from a respective finger contact area); and (vi) the processor identifies a command to be executed based on the contacted finger contact area and the gesture originating from that finger contact area.
Further disclosed is a method of inputting a control command, comprising the steps of: simultaneously contacting the touch-sensitive surface with three or more fingers of the user, including a contact of a first finger centered on a first finger initial contact point, a contact of a second finger centered on a second finger initial contact point, and a contact of a third finger at a third finger initial contact point; assigning a finger contact area to each of the three or more fingers, wherein each finger contact area is a different area of the touch-sensitive surface and each finger contact area includes an initial contact point, wherein each finger contact area corresponds to one finger; removing the three or more fingers from the touch-sensitive surface; and contacting one of the finger contact areas with the respective finger and sliding the finger along one or more finger movement directions. For example, sliding a finger may control a device, a component of a device, a communication, a display screen, or any combination thereof. As another example, the direction of movement of the finger may cause a cursor to move in one or more corresponding cursor movement directions, as in the sketch below.
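As an illustration of the cursor example, the finger's slide vector might map to a cursor movement through a simple proportional gain; the mapping and gain value below are assumptions, since the text leaves the mapping open:

```python
# Illustrative mapping from a finger slide to cursor movement via a simple
# proportional gain; the gain value and function name are assumptions.

def cursor_delta(slide_start, slide_end, gain=1.5):
    """Return the cursor movement corresponding to a finger slide vector."""
    dx = slide_end[0] - slide_start[0]
    dy = slide_end[1] - slide_start[1]
    return (gain * dx, gain * dy)  # cursor moves in the finger's direction

print(cursor_delta((120, 300), (160, 290)))  # (60.0, -15.0)
```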
Also disclosed is a method of inputting a command, comprising the steps of: simultaneously contacting the touch-sensitive surface with three or more fingers of the user, including a first finger contact centered on a first finger initial contact point, a second finger contact centered on a second finger initial contact point, and a third finger contact at a third finger initial contact point; assigning a finger contact area to each of three or more fingers, wherein each finger contact area is a different area of the touch-sensitive surface and each finger contact area comprises an initial point of contact, wherein each finger contact area corresponds to a finger; and removing three or more fingers from the touch-sensitive surface. After establishing the finger contact area based on the contact location of the user, one or more gestures may be input using the finger contact area. The input of the gesture may include simultaneously contacting one or more finger contact areas with respective fingers; making gestures on the touch-sensitive surface, wherein one or more fingers are in contact with the touch-sensitive surface, wherein each gesture originates from a respective finger contact area; and identifying a control command to be executed based on the contacted finger contact area and the gesture originating from the finger contact area.
The control location (e.g., finger position area or finger contact area) may be static or dynamic. For example, the static finger contact area may remain fixed in the command input mode and may change only after ending the command input mode and initiating a new positioning mode.
Preferably, the control position is a finger position area. The finger position regions are typically spaced in arcs, such as would be expected from the spacing of the fingertips. While the finger position area may be identified by sensing an object near the touch-sensitive surface (e.g., a finger slightly above the surface, typically within 10mm of the surface), the finger position area is preferably a finger contact area, where the object contacts the surface when the object is near the touch-sensitive surface.
Preferably, the control position (e.g., finger position area or finger contact area) may be dynamic. For example, after one or more contacts during the command input mode, the finger position area may be repositioned based on the finger position or the actual position of the finger contact (e.g., within the control position) during the command input mode.
The first finger position area (e.g., finger contact area) may be characterized by an initial contact area and an initial center point (e.g., a geometric center of the initial contact area). During the command input mode, the touch surface may be contacted in an initial contact area, but the center of the contact may be at a contact point that is offset from the initial center point. Then, the center point of the first finger contact area may be moved at least partially toward (e.g., fully to) the contact point such that the first finger contact area is characterized by a new center point that is different from the initial center point. In addition to moving the center point of the first finger contact area, an offset between the center point and the center of contact in one finger contact area may also be used to reposition one or more different finger contact areas. This repositioning may be used to compensate for gradual movement of the hand's position on the touch-sensitive surface. Thus, rather than the user adjusting the hand to a fixed position of the device, the device adjusts to the user's hand position.
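A minimal sketch of the drift compensation just described, tracking each finger contact area by its center point; the fraction `alpha` is an assumed tuning parameter selecting partial (alpha < 1) or full (alpha = 1) movement toward the contact point, and applying the observed offset to the other areas is the optional behavior mentioned above:

```python
# A sketch of the repositioning described above. Each finger contact area is
# tracked by its center point; `alpha` (an assumed tuning parameter) selects
# partial (alpha < 1) or full (alpha = 1) movement toward the contact point.

def recenter(areas, touched_idx, contact, alpha=1.0):
    """Shift the touched area toward an off-center contact; optionally apply
    the same offset to the other areas to compensate for hand drift."""
    dx = contact[0] - areas[touched_idx]["cx"]
    dy = contact[1] - areas[touched_idx]["cy"]
    for area in areas:            # the offset repositions every area
        area["cx"] += alpha * dx
        area["cy"] += alpha * dy

areas = [{"cx": 100, "cy": 300}, {"cx": 180, "cy": 260}]
recenter(areas, 0, (112, 296), alpha=0.5)  # contact 12 px right, 4 px up of center
print(areas)  # both centers shifted by (6.0, -2.0)
```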
Another aspect of the teachings herein relates to a system for inputting control commands for controlling a device, comprising: an input device comprising a touch-sensitive surface; a processor; and a memory storing instructions that, when executed by the processor, cause the processor to: identify simultaneous contacts at three or more locations on the touch-sensitive surface (e.g., consistent with contact by three or more fingers of the user), including a contact centered on a first finger initial contact point (e.g., by a first finger), a contact centered on a second finger initial contact point (e.g., by a second finger), and a contact at a third finger initial contact point (e.g., by a third finger); assign a control location (e.g., a finger contact area) for each of the three or more fingers, wherein each control location is a different area of the touch-sensitive surface and each control location includes an initial contact point (e.g., wherein each finger contact area corresponds to one finger); and identify removal of the contacts at the three or more locations (e.g., removal of three or more fingers from the touch-sensitive surface) before or after assigning the control locations. When executed after the control locations are assigned, the stored instructions may also cause the processor to: identify simultaneous input contacts at one or more control locations (e.g., simultaneous contact of one or more finger contact areas by the respective fingers), each input contact being at an input contact point; while the input contact is continuously maintained, recognize a gesture on the touch-sensitive surface, including movement of the input contact in one or more directions on the touch-sensitive surface beginning at the input contact point (e.g., the user makes a gesture on the touch-sensitive surface with one or more fingers in contact with the surface, each gesture originating from a respective finger contact area); and recognize the command to be executed based on the contacted control location and the gesture originating from that control location.
In another aspect, the teachings herein relate to a machine-readable storage medium containing instructions that, when executed, cause a processor of an electronic device to identify an input control command by: identifying simultaneous contacts at three or more locations on the touch-sensitive surface (e.g., consistent with contact by three or more fingers of the user), including a contact centered on a first finger initial contact point (e.g., by a first finger), a contact centered on a second finger initial contact point (e.g., by a second finger), and a contact at a third finger initial contact point (e.g., by a third finger); assigning a control location (e.g., a finger contact area) for each of the three or more finger initial contact points, wherein each control location is a different area of the touch-sensitive surface and each control location includes one initial contact point (e.g., wherein each finger contact area corresponds to one finger); identifying removal of the contacts at the three or more locations (e.g., removal of three or more fingers from the touch-sensitive surface); identifying simultaneous input contacts at one or more finger contact areas (e.g., simultaneous contact of one or more finger contact areas by the respective fingers), each input contact being at an input contact point; while the input contact is continuously maintained, recognizing a gesture on the touch-sensitive surface that includes movement of the input contact in one or more directions on the touch-sensitive surface beginning at the input contact point (e.g., the user makes a gesture on the touch-sensitive surface with one or more fingers in contact with the surface, each gesture originating from a respective finger contact area); and identifying the command to be executed based on the contacted control location and the gesture originating from that control location.
In yet another aspect, the teachings herein relate to methods, systems, and devices for unlocking one or more electronic device controls by a processor, including: receiving an indication of simultaneous contact at a plurality of locations on the touch-sensitive surface; determining that the plurality of locations have a spacing consistent with contact with the touch-sensitive surface by one or both hands of the user; assigning a plurality of finger contact areas based on the plurality of locations of simultaneous contact; receiving an indication of a series of consecutive contacts, each contact with one or more of the finger contact areas; comparing the series of consecutive contacts to a predetermined series (i.e., a password series); and unlocking one or more controls when the series of contacts matches the predetermined series.
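A minimal sketch of the unlock comparison, assuming the finger contact areas are already assigned and each contact in the series is reported as the index of the contacted area; the passcode values are illustrative:

```python
# A minimal sketch of the unlock comparison, assuming the finger contact
# areas are already assigned and each contact in the series is reported as
# the index of the contacted area. The passcode values are illustrative.

PASSWORD_SERIES = [0, 2, 1, 1, 3]  # assumed predetermined series of area indices

def try_unlock(contact_series):
    """Unlock one or more controls only on an exact series match."""
    return contact_series == PASSWORD_SERIES

print(try_unlock([0, 2, 1, 1, 3]))  # True  -> unlock the controls
print(try_unlock([0, 1, 2, 1, 3]))  # False -> remain locked
```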
Drawings
Fig. 1 is a diagram showing an illustrative mode for controlling a session according to the teachings herein.
Fig. 2 is a diagram showing an illustrative mode for controlling a session according to the teachings herein.
FIG. 3 is a diagram illustrating simultaneous contacts on a touch-sensitive surface at three or more locations (e.g., corresponding to touches by three or more fingers of a hand).
FIG. 4 is a diagram illustrating a touch-sensitive surface displaying a removed contact (e.g., removing a finger from the surface) and illustrating the location of a previous simultaneous contact.
FIG. 5 is a diagram illustrating simultaneous contact of the touch-sensitive surface with all 5 fingers of the hand.
FIG. 6 is a diagram illustrating command control locations (e.g., finger contact areas) assigned to a touch-sensitive surface based on the location of simultaneous contacts.
FIG. 7 is a schematic diagram of a touch-sensitive surface having a primary finger contact area and a secondary finger contact area.
FIG. 8 is a schematic diagram of a touch-sensitive surface having a primary finger contact area and secondary finger contact areas; for example, the primary finger contact area and the two secondary finger contact areas may be controlled by the same finger.
Fig. 9 is a diagram showing the characteristics of the process of assigning the initial finger contact area.
FIG. 10 is a schematic diagram illustrating contact of a touch sensitive surface in one of the finger contact areas.
FIG. 11A is a schematic diagram illustrating a sliding motion of a contact with a touch-sensitive surface in a generally rightward direction.
FIG. 11B is a schematic diagram illustrating a sliding motion of a contact with a touch-sensitive surface in a generally leftward direction.
Fig. 11C is a schematic diagram illustrating a sliding motion of a contact with the touch-sensitive surface in a generally upward direction (e.g., within about 45 ° or within about 30 ° of the upward direction).
FIG. 12 is a schematic diagram showing features of a process of inputting a command through a gesture including contacting a finger contact area.
FIG. 13A is a schematic diagram illustrating contact with a finger contact area that is offset from a center point of the finger contact area.
FIG. 13B is a schematic diagram showing a previous (e.g., initial) finger contact area and a new finger contact area; the finger contact area may be repositioned based on a direction and/or distance between a previous (e.g., initial) center of the finger contact area and the contact location identified on the surface.
FIG. 14 is a schematic diagram showing features of a process of repositioning a finger contact area.
FIG. 15 is a schematic flow chart diagram for inputting one or more control commands, it being understood that some steps may be eliminated and/or the order of some steps may be changed, with the process possibly including other steps as well.
FIG. 16 is a schematic diagram showing the touch-sensitive surface facing away from the user.
FIG. 17 is a schematic diagram showing a touch sensitive surface in a vehicle facing away from a user.
FIG. 18 is a schematic diagram showing a user-facing touch-sensitive surface in a motor vehicle; the use of a "blind" control function for the touch-sensitive surface may reduce the level of distraction during its operation compared to devices that require visual observation of the control location.
Fig. 19A and 19B are diagrams illustrating a device mounted to a steering wheel or steering column with the touch-sensitive surface facing away from the driver.
FIGS. 20A, 20B, 20C, and 20D are diagrams of an exemplary game controller having one or more touch-sensitive surfaces (e.g., on a concave surface of the game controller).
Detailed Description
The explanations and illustrations presented herein are intended to acquaint others skilled in the art with the invention, its principles, and its practical application. Accordingly, the specific embodiments of the present disclosure as set forth are not intended to be exhaustive or limiting. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. The disclosures of all articles and references, including patent applications and publications, are hereby incorporated by reference for all purposes. Other combinations are also possible, as will be gleaned from the following claims, which are also incorporated into this written specification by reference.
Methods, systems, apparatuses, and devices for controlling a session (control session) in accordance with the teachings herein are generally based on a processor that recognizes contacts with a touch-sensitive surface, leading to the selection of one or more control commands.
Referring to FIG. 1, a control session 2 typically includes a mode for initially positioning control locations 4 (e.g., finger contact areas) on a touch-sensitive surface 14. The initial positioning mode preferably positions the control locations based on the locations of simultaneous contacts at three or more spaced locations on the touch-sensitive surface (e.g., by three or more fingers contacting the surface). After assigning the control locations, the control session 2 includes a mode for inputting control commands 6, which is used to select one or more control commands based on contact with a control location on the touch-sensitive surface 14. The control session may include a mode for ending the control session. The control session may also include a mode for repositioning the control locations; such a repositioning mode may be employed to compensate for movement of the user's hand relative to the touch-sensitive surface. As shown in FIG. 2, the mode for inputting control commands 6 may include selection of a control command 7 and a mode for repositioning the control locations 8. Ending the control session 9 may occur after the mode for inputting control commands 6.
The processor is preferably in electronic communication with the touch-sensitive surface for the entire duration of the control session, for identifying the location of objects above or in contact with the touch-sensitive surface, for identifying movement of objects or contacts on the surface, and for identifying removal of objects from the touch-sensitive surface (e.g., removal of one or more contacts from the surface).
By employing a mode that initially locates the control locations, the processor locates or assigns the control locations based on the user's contact locations, thereby enabling "blind" interaction with the touch-sensitive surface.
As will be understood from the description herein, methods, apparatuses, systems, and devices according to the present teachings rely on contact with one or more touch-sensitive surfaces and, more particularly, on a processor that receives indications of contact with the touch-sensitive surface.
Methods, devices, and systems in accordance with the teachings herein may employ one or more of the features described in U.S. patent application No. 13/091,158, filed April 21, 2011, and U.S. provisional patent application No. 62/142,170, entitled "System, Apparatus and Method for Vehicle Command and Control," filed April 2, 2015, both of which are incorporated herein by reference in their entirety.
Touch sensitive surface
Methods and systems in accordance with the teachings herein may use a touch-sensitive surface as a component of an input device for inputting commands. As used herein, a touch-sensitive surface is capable of identifying the locations of multiple simultaneous contacts on the surface. Each contact preferably applies sufficient force to the surface for the touch-sensitive surface to register the contact. The touch-sensitive surface may be a flat surface, may be a curved surface, or may have flat regions and curved regions. Preferably, the touch-sensitive surface is substantially smooth and/or has a substantially uniform structure. For example, the touch-sensitive surface may be sufficiently smooth and/or sufficiently uniform in structure that a user cannot identify a contact location on the surface from surface topography or other tactile cues.
The touch-sensitive surface may be the surface of a pure input component or device (i.e., a component or device that does not display images), such as a touch pad, or may be the surface of a combined input/display component or device, such as a touch screen display. The device comprising the touch-sensitive surface, and/or a processor connected to the device, is preferably able to identify each of a plurality of contacts with the surface, the maintenance of each contact, any movement of each contact, and the termination of each contact (i.e., its separation from the surface).
FIG. 3 shows an exemplary touch-sensitive surface. Referring to FIG. 3, the touch-sensitive surface 14 may have a first direction 82 (e.g., an upward direction) and an opposite second direction 84 (e.g., a downward direction). The touch-sensitive surface may have a third direction 86 (e.g., a rightward direction) and an opposite fourth direction 88 (e.g., a leftward direction). The first and second directions 82, 84 may be substantially orthogonal to the third and fourth directions 86, 88. When directions are described as up, down, right, and left, it should be understood that the description refers to the first, second, third, and fourth directions and may have alternative meanings depending on the actual orientation of the touch-sensitive surface. For example, when the touch-sensitive surface is oriented in a horizontal plane, the up and down directions may actually refer to forward and backward, respectively.
Initial positioning mode of control position (e.g. finger contact area)
According to the teachings herein, initial positioning of a control location (e.g., a finger contact area) typically requires simultaneous contact with the touch-sensitive surface at three or more points or areas. The three or more contacts preferably coincide with the contact of three or more fingers of one hand or of both hands. The number of simultaneous contacts required for the initial positioning of the control locations may be 3 or more, 4 or more, 5 or more, 6 or more, 7 or more, 8 or more, 9 or more, or 10. Preferably, the required number of simultaneous contacts on the touch-sensitive surface is a predetermined target number or a predetermined minimum number. For example, if the predetermined target number is four, the positioning of the control locations will not be completed until exactly four simultaneous contacts are identified on the touch-sensitive surface. Conversely, if the predetermined minimum number is four, the positioning of the control locations will not be completed until at least four (e.g., four, five, six, eight, or more) simultaneous contacts are identified on the touch-sensitive surface. A sketch of both completion rules follows.
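```python
# A sketch of the two completion rules described above; the function name
# and the default count of four are illustrative.

def positioning_complete(num_contacts, n=4, rule="target"):
    if rule == "target":
        return num_contacts == n  # exactly n simultaneous contacts required
    return num_contacts >= n      # "minimum": n or more contacts complete it

print(positioning_complete(5, rule="target"))   # False
print(positioning_complete(5, rule="minimum"))  # True
```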
FIG. 3 illustrates simultaneous contact of the touch-sensitive surface 14 at three or more spaced-apart locations 16 on the touch-sensitive surface of the touch-sensitive device 12 (e.g., a pure input device).
The simultaneous contacts of the touch sensitive surface are preferably spaced apart on the surface in such a way that successive finger positions can be assigned. For example, the location of the contact may relate to a natural arch (or arches) formed by the fingertips of one hand. Referring to fig. 4, the location of the contact 16 may be along an arch 17, e.g., corresponding to the natural arch of three or more adjacent fingertips.
In the initial positioning mode, the process typically includes the step of the user removing some, or preferably all, of the fingers from the touch-sensitive surface. As such, the processor, after identifying simultaneous contact at three or more spaced-apart locations, will identify removal at the three or more spaced-apart locations (i.e., that no contact is present). As shown in FIG. 4, the processor preferably identifies the removal of all contacts with the touch-sensitive surface. It will be appreciated that, after initial positioning of the control locations, one or more fingers may remain in contact with the touch-sensitive surface (provided that at least one finger, and preferably at least two fingers, have been removed from the surface). Such a remaining finger may be used to directly input a control command, as discussed herein, without removing the finger and reestablishing contact with the surface.
Referring to fig. 5, the initial positioning of the control location may include simultaneous contact with four or more fingers (e.g., with all fingers of one hand). As shown in fig. 5, the touch sensitive surface may be a touch screen display 15. In this way, the location of the initial contact, the area including the location of the initial contact, or the resulting control location 18 may be displayed on the touch sensitive screen display.
FIG. 6 shows the control locations after an initial positioning mode. Each control location 18 is spaced apart from the other control locations. Each control location may be characterized by a geometric center 22. The location or area of the contact 16 is preferably located within the control location 18. The geometric center 22 is preferably within the location or area of the contact 16. A control location may be an area of any shape. For example, the shape of a control location may be polygonal, circular, semicircular, rectangular, elliptical, oval, or square. Referring to FIG. 6, each location on the touch-sensitive surface is associated with at most one control location 18, as in the lookup sketched below.
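```python
# A sketch of mapping a touch point to its control location. Because the
# control locations are disjoint, at most one can contain the point; circular
# regions are assumed here, though the text permits other shapes.

def locate(control_locations, point):
    for i, (cx, cy, r) in enumerate(control_locations):
        if (point[0] - cx) ** 2 + (point[1] - cy) ** 2 <= r * r:
            return i   # disjoint regions: the first hit is the only hit
    return None        # some surface areas match no control location

control_locations = [(100, 300, 40), (180, 260, 40), (260, 250, 40)]
print(locate(control_locations, (110, 305)))  # 0
print(locate(control_locations, (10, 10)))    # None
```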
The number of control locations (e.g., finger contact areas) may be greater than the number of fingers employed when initially establishing the positions of the finger contact areas. For example, there may be a primary finger contact area for the first finger and one or more secondary finger contact areas for the first finger. When the finger contact areas are defined, the primary finger contact area may include the contact point of the finger. Each secondary finger contact area for the first finger, if any, is sufficiently displaced from the primary finger contact area for the first finger that the primary and secondary finger contact areas do not overlap. It should be appreciated that the locations of the various finger contact areas for the first finger may correspond to different amounts of curvature of the first finger. For example, the first finger may be relatively curved when contacting the primary finger contact area and relatively less curved (e.g., more extended) when contacting a secondary finger contact area. This is analogous to typing "d" (relatively curved) and "e" (relatively extended) on a QWERTY keyboard. As another example, the first finger may be relatively less curved when contacting the primary finger contact area and relatively more curved when contacting a secondary finger contact area. This is analogous to typing "d" (relatively less curved) and "c" (relatively more curved) on a QWERTY keyboard. Examples of touch screen surfaces including a primary finger contact area 72 and secondary finger contact areas 74 are shown in FIGS. 7 and 8. In FIG. 7, there are 5 primary finger contact areas (one for each finger of a hand) and 3 secondary finger contact areas (one for each of three fingers). In FIG. 8, for each of three fingers, there is one primary finger contact area 72 and two secondary finger contact areas 74.
Each finger contact area is preferably a different area of the touch-sensitive surface. In this way, any location on the touch-sensitive surface corresponds to at most one finger contact area at a given time. It should be understood that some areas of the touch-sensitive surface will not correspond to any finger contact area. According to the teachings herein, the finger contact area associated with a given location may change over time. For example, a location p on the touch-sensitive surface may be associated with one finger contact area after a first stage of locating the finger contact areas and with a different finger contact area after a later stage of locating the finger contact areas. As another example, after one or more input contacts in a finger contact area are offset from the center of that area, a location p on the touch-sensitive surface initially associated with the area may no longer be associated with it (instead, p may be associated with no finger contact area or with a different finger contact area).
Features of a process for locating initial control locations are shown in FIG. 9. The process may be described in terms of actions of the user of the touch-sensitive device, actions of the device or the processor, or both. The actions of the user may include: a step in which the user simultaneously contacts the touch-sensitive surface with three or more fingers; and, after contacting the touch-sensitive surface, a step in which the user removes the three or more fingers from the touch-sensitive surface. Actions of the device or processor may include the step of identifying simultaneous contacts on the touch-sensitive surface at three or more spaced-apart contact points or contact areas (e.g., consistent with finger contact by a user's hand), followed by the steps of identifying removal of the contacts from the touch-sensitive surface and locating the control locations (e.g., finger contact areas) based on the locations of the contacts. It should be appreciated that the positioning of the control locations may occur at any time after the device or processor recognizes the simultaneous contacts. For example, the positioning of the control locations may occur before the contacts are removed. As shown in FIG. 9, the positioning of the control locations may also occur after identifying the removal of the contacts. In this way, the positioning of the control locations may be based on the initial positions of the contacts or, where the contacts move during the positioning mode, on a later (e.g., final) position of the contacts.
Mode of inputting control command
After the initial control positions are established on the touch-sensitive surface, the touch-sensitive surface may be used in a mode for inputting control commands.
The touch-sensitive surface may be used to control one or more devices and/or to control a number of features or functions of a device. As such, the processor may need to identify which device, feature, or function a control command is directed to. The selection of the control command may be based on the control location touched during the mode for inputting control commands. To increase the number of different commands that can be entered, the processor may identify and use one or any combination of the following features of contact with the surface: the number of control locations contacted, whether a contact slides, the direction of movement of a contact, the duration of a contact, the number of sliding motions of a contact, and the removal of a contact. The problem of selecting a control command from a large number of different commands with relatively few fingers is thus solved by the user employing different types of gestures and by the processor recognizing those gestures, as sketched below.
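```python
# A sketch of selecting among many commands using several contact features
# at once, as enumerated above; the feature set and commands are illustrative.

COMMANDS = {
    # (contacted areas, slide direction, slide count): command
    ((0,), "up", 1): "SELECT_RADIO",
    ((0,), "up", 2): "SELECT_PHONE",
    ((0, 1), "up", 1): "VOLUME_UP",
}

def lookup(areas, direction, slide_count):
    return COMMANDS.get((tuple(sorted(areas)), direction, slide_count))

print(lookup([0], "up", 2))     # SELECT_PHONE
print(lookup([1, 0], "up", 1))  # VOLUME_UP
```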
The process may include the user inputting a control command via one or more gestures on the touch-sensitive surface, and the processor recognizing the gesture and selecting the control command based on a predetermined association between gestures and control commands.
Typically, one or more fingers remain in contact with the touch-sensitive surface during input of a command.
The gesture may be any gesture that can be recognized by the processor. Preferably, the gesture is initiated with contact of one or more control locations. The user may initiate an input gesture by contacting the touch-sensitive surface with one of the fingers beginning in the finger contact area and, while maintaining contact, performing one or more sliding motions with the finger, including sliding the finger in at least a first direction, and then removing the one finger from the touch-sensitive surface. For example, a single finger may contact the finger contact area and then slide in one or any combination of the following directions: up, down, right, and left, and then remove the finger from the touch-sensitive surface. It should be appreciated that after moving in a first direction, the finger may be moved in an opposite direction before removing the finger from the touch-sensitive surface.
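For a single-finger swipe like the one just described, the processor must classify the slide's direction. A minimal sketch, assuming a four-way up/down/left/right split and a y-axis that increases upward (screen coordinate conventions vary):

```python
import math

# A sketch of classifying a single-finger slide as up/down/left/right from
# its start and end points. The four-way split and a y-axis that increases
# upward are assumptions (screen coordinate conventions vary).

def slide_direction(start, end):
    dx, dy = end[0] - start[0], end[1] - start[1]
    angle = math.degrees(math.atan2(dy, dx))  # 0 deg = right, 90 deg = up
    if -45 <= angle < 45:
        return "right"
    if 45 <= angle < 135:
        return "up"
    if angle >= 135 or angle < -135:
        return "left"
    return "down"

print(slide_direction((0, 0), (30, 5)))    # right
print(slide_direction((0, 0), (-4, -28)))  # down
```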
As another example, the user may contact the touch-sensitive surface with one finger beginning in a finger contact area, then make one or more contacts with a second finger on the touch-sensitive surface, and then remove the first finger from the touch-sensitive surface. The contact with the second finger may be a sliding contact, a tapping contact, or a contact held at a single position.
In such a gesture, the first finger may remain at a constant position on the touch-sensitive surface while the second finger is in dynamic contact with the touch-sensitive surface. The input of the command preferably ends with the removal of the first finger from the touch-sensitive surface.
As yet another example, the user may contact the touch-sensitive surface with two fingers, each in a different finger contact area, and move at least one of the fingers in a first direction. Preferably, both fingers move in the same direction or toward each other.
It should be understood that the control of the device may include multiple gesture inputs. For example, the process may require the input of a first control command to select the device to be controlled and the later input of a control command to control the function of the device.
The distance moved and/or the duration of contact may be used to determine the level or degree of control of the device. For example, in controlling the speed of a device, movement in one direction may be employed to continue increasing the speed until the end of the gesture is recognized (e.g., by removal of the contact or by another input in accordance with the teachings herein). The rate of increase of the speed may be related to the movement distance of the gesture. As another example, the volume of a device may be controlled by a gesture starting at a control location. Movement of the contact a first movement distance in a first direction may cause the volume to increase at a rate related to the first movement distance. The increase in volume may continue until the contact is removed or the gesture is otherwise recognized as complete. A reduction in volume may similarly be achieved by moving the contact a second movement distance in a second direction (e.g., the opposite direction) different from the first direction.
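As an illustration of the volume example above, a polling loop can keep adjusting the volume at a rate tied to the finger's offset from its start point while the contact is held; the polling model and rate constant are assumptions:

```python
# A sketch of the volume example above: while the contact is held, the
# volume keeps changing at a rate tied to how far the finger has moved from
# its start point. The polling model and rate constant are assumptions.

def volume_after_hold(y_samples, start_y, volume=50.0, rate_per_px=0.02):
    """`y_samples` are positions observed while the contact is held;
    iteration ends when the contact is removed."""
    for y in y_samples:
        offset = start_y - y                    # above the start -> positive
        volume += rate_per_px * offset          # larger offset -> faster change
        volume = max(0.0, min(100.0, volume))   # clamp to the valid range
    return volume

# Finger slides 30 px up, then holds there for a few polling ticks:
print(volume_after_hold([290, 280, 270, 270, 270], start_y=300))  # 52.4
```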
Referring to FIG. 10, the input of the gesture may include a contact 40 on the touch-sensitive surface at a point or area 16 within the control location 18 after the control location has been established. As shown in fig. 10, the point or area of contact 16 may be offset from the center of the control location 22. The point or area of contact may include a center of control location (not shown).
The gesture may include movement of one or more contacts. The movement may be a sliding movement in one or more sliding directions. For example, the gesture may include movement of the contact 16 across the touch-sensitive surface in a generally rightward direction (FIG. 11A), a generally leftward direction (FIG. 11B), a generally upward direction (FIG. 11C), or a generally downward direction. It should be understood that the sliding motion may include motion in multiple directions and/or non-linear motion. Inputting a command using a gesture may include multiple contacts, each at a different control location. The contacts may move in the same sliding direction, may move in different directions, or may include both stationary and moving contacts. Preferably, multiple contacts during an input command have simultaneous sliding movements (the same or different). For example, the gesture may include a sliding motion of two contacts toward each other, a sliding motion of two contacts away from each other, or sliding contacts in substantially the same direction. The sliding motion may be a generally small motion (e.g., within the control location) or a generally large motion (e.g., extending outside the control location, or greater than the distance from the center to the edge of the control location).
The association between the command and the gesture may include a first command associated with a sliding motion of a first finger in only a first direction and a second, different command associated with a sliding motion of a second finger in only a second direction.
The first and second directions may be the same (e.g., within about 30 °, within about 20 °, within about 15 °, within about 10 °, or within about 5 °) or different (e.g., at an angle greater than 30 °, at an angle greater than 40 °, or at an angle greater than 50 °).
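The same-vs-different test between two slide directions can be expressed as an angular tolerance, e.g., "same" within about 30° and "different" beyond it. A small sketch of that comparison:

```python
import math

# A sketch of the same-vs-different direction test using an angular
# tolerance (e.g., "same" within about 30 deg); vectors are (dx, dy) slides.

def angle_between(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    norms = math.hypot(*u) * math.hypot(*v)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

def same_direction(u, v, tol_deg=30.0):
    return angle_between(u, v) <= tol_deg

print(same_direction((1, 0), (0.9, 0.3)))  # True, about 18 deg apart
print(same_direction((1, 0), (0, 1)))      # False, 90 deg apart
```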
Inputting control commands using gestures may include one or more features as shown in FIG. 12. The user may contact one or more control locations using different fingers and then make a gesture on the touch-sensitive surface while remaining in contact with the surface. The gesture is preferably associated with a predetermined control command. The process preferably includes the step of removing a finger from the touch-sensitive surface, or another action, to complete the gesture input. After the input of one gesture is completed, another gesture may be input. During the input of control commands using gestures, the processor or device typically recognizes contact with one or more control locations (e.g., after a period during which the control locations have not been contacted). The processor or device then identifies the type of gesture formed by the contact. The processor or device may recognize the gesture as a continuous contact with the surface. After recognizing the gesture, the processor or device may identify the predetermined control command associated with the input gesture. Preferably, the control command is then executed, sent, or otherwise acted upon. It should be understood that a control command may be associated with a single gesture or with a series of gestures. According to the teachings herein, a processor or device may recognize a gesture immediately upon contact, after one or more sliding motions and removal of the contact from the surface, or after the input gesture is otherwise completed.
The input for each gesture includes contact with one or more control locations (e.g., finger contact areas). The input of the gesture may be accomplished by removing all fingers from the touch-sensitive surface.
The input of the gesture may be completed (e.g., timed out by the processor) after contact with the control location exceeds a predetermined time limit. For example, the input of the gesture may be completed after a predetermined time limit of about 0.5 seconds or more, about 1 second or more, about 1.5 seconds or more, about 2 seconds or more, about 3 seconds or more, or about 4 seconds or more of continuous contact with the touch screen surface. The predetermined time limit, if any, is typically about 100 seconds or less, about 30 seconds or less, about 15 seconds or less, about 10 seconds or less, or about 6 seconds or less.
The input of the gesture may be completed (e.g., timed out by the processor) after a predetermined limit of the number of directional changes of the sliding motion is reached (i.e., a predetermined directional change limit). Here, multiple sliding directions typically occur while maintaining contact with the touch-sensitive surface. As used herein, the change in direction may be a change of about 15 ° or more, about 45 ° or more, about 90 ° or more, about 135 ° or more, or about 180 °. For example, the gesture may be completed after the processor recognizes an initial sliding motion and then makes a first change in the sliding direction (i.e., the predetermined direction change limit is 1). As another example, the gesture may be completed after the processor recognizes a sliding motion in an initial direction, followed by a sliding motion in a second direction, followed by a sliding motion in a third direction (i.e., the predetermined direction change limit is two). The predetermined limit may be one or more, two or more, three or more, or four or more. Typically, the predetermined direction change limit will be 10 or less, or 5 or less.
The input of the gesture may be completed (e.g., timed out by the processor) when the contact moves in a predetermined direction. The predetermined direction may be any direction. For example, the predetermined direction may be a generally upward direction, a generally downward direction, a generally rightward direction, or a generally leftward direction.
The input of the gesture may be completed (e.g., timed out by the processor) when the contact returns to the finger contact area (or a location in the finger contact area) after the sliding motion away from the finger contact area.
The input of the gesture may be completed (e.g., timed out by the processor) when the contact moves in the predetermined shape. For example, the input of the gesture may be accomplished when the processor recognizes a sliding motion in a shape such as an arc, a semicircle, a circle, a triangle, a rectangle, a square, a star, a letter, a number, or any combination thereof.
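For illustration, the time-limit and direction-change completion criteria described above might be tracked as in the following Python sketch; the 2-second limit, the direction-change limit of 1, and the 45° turn threshold are assumed values chosen from within the ranges given above, not values required by the teachings herein.

    class GestureCompletion:
        """Tracks one contact and reports when the gesture input is complete."""

        TIME_LIMIT_S = 2.0           # assumed predetermined time limit
        DIRECTION_CHANGE_LIMIT = 1   # assumed predetermined direction-change limit
        TURN_THRESHOLD_DEG = 45.0    # assumed minimum turn counted as a change

        def __init__(self, start_time):
            self.start_time = start_time
            self.last_heading = None
            self.direction_changes = 0

        def update(self, heading_deg, now):
            """Feed the current sliding heading (degrees); return True once complete."""
            if now - self.start_time >= self.TIME_LIMIT_S:
                return True          # completed by exceeding the time limit
            if self.last_heading is not None:
                # smallest angular difference between successive headings (0-180)
                turn = abs((heading_deg - self.last_heading + 180.0) % 360.0 - 180.0)
                if turn >= self.TURN_THRESHOLD_DEG:
                    self.direction_changes += 1
            self.last_heading = heading_deg
            return self.direction_changes >= self.DIRECTION_CHANGE_LIMIT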
After the gesture is completed (e.g., after the input of the processor-determined gesture is completed), the next gesture may be input by contacting one or more control locations.
During the mode of inputting the control command, the contact with the control position may be offset from the center of the control position. It should be understood that such deviations may be systematic and/or indicative of changes in the user's perception of the control location. To compensate for such variations, the mode for inputting the control command may include a mode for repositioning the control position. Alternatively, instead of having a mode for repositioning the control position, the control position may remain fixed.
Mode for repositioning the control position
During the mode of inputting control commands, the touch-sensitive surface may be contacted at a point or area of contact 16 within the control location 18, but offset from the center 22 of the control location 18, as shown in FIG. 13A.
The processor or device may reposition the control location based at least in part on the offset distance and offset direction between the center of the contact 16 and the center 22 of the control location 18.
It will be appreciated that repositioning may occur whenever a contact is offset from the center of the control position, or only in certain circumstances. For example, the offset distance may be required to reach a threshold value before the control position is repositioned. As another example, the offset may be required to occur with sufficient frequency before the control position is repositioned.
The new control position may have the same size as the previous control position or a different size. The new control position may have the same shape as the previous control position or a different shape. The new control position preferably has the same size or the same shape as the previous control position. More preferably, the new control position has the same size and the same shape as the previous control position.
FIG. 13B shows the repositioning of the control position, showing both the location of the new position and the location of the previous position. The new control position 36 may have a center 38 that is displaced from the center 22 of the previous control position 18. The direction of displacement 32 is preferably substantially the same as the direction of the offset. The distance of displacement between the previous center 22 and the new center 38 is preferably about equal to or less than the offset distance. For example, the distance between the previous center 22 and the new center 38 may be a percentage of the offset distance (preferably about 100% or less).
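A minimal Python sketch of this repositioning computation follows; the 50% displacement gain and the minimum-offset threshold are illustrative assumptions (the text above requires only that the displacement be about 100% of the offset or less).

    def reposition_center(center, contact, gain=0.5, threshold=4.0):
        """Shift a control location's center toward an offset contact.

        center and contact are (x, y) points; gain is the assumed fraction of
        the offset applied; threshold is an assumed minimum offset distance
        before any repositioning occurs.
        """
        dx = contact[0] - center[0]
        dy = contact[1] - center[1]
        if (dx * dx + dy * dy) ** 0.5 < threshold:
            return center            # offset too small; keep the position fixed
        return (center[0] + gain * dx, center[1] + gain * dy)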
It should be understood that the offset distance and offset direction of one contact location may be used to reposition one or more other contact locations.
The repositioning of the finger contact area may include one or any combination of the steps shown in FIG. 14. The repositioning of a control position typically occurs after the control position has been contacted. The repositioning of the control position may occur before a sliding motion of the contact, after a sliding motion of the contact, or after the contact is removed from the touch-sensitive surface.
This process may allow for the repeated or sequential input of different control commands. For example, after the input of one control command is recognized as complete, the process may be used to input a subsequent control command, as shown in FIG. 15. It should be understood that the step of removing the contact may be replaced by a step of recognizing the end of the input of a control command, such as described herein. It will also be appreciated that such a step of recognizing the end of the input of a control command (e.g., recognizing the removal of a finger contact from the touch-sensitive surface) may occur after the step of identifying the command based on the gesture.
Orientation of the touch-sensitive surface
The touch-sensitive surface may be oriented in a direction and/or position that prevents a user from viewing the touch-sensitive surface. For example, a user may hold a device that includes a touch-sensitive surface such that the touch-sensitive surface faces away from the user. Referring to FIG. 16, a user 62 may hold the device 12 such that the touch-sensitive surface 14 is directed away from the user's eyes 64. For example, the screen direction 52, which is perpendicular to and away from the touch-sensitive surface 14, may be partially or fully in a forward direction. In this way, the screen display may be oriented away from the user's eyes 64. Referring to FIG. 17, the user 62 may be the driver of a vehicle 66, and the touch-sensitive surface 14 may be mounted to the vehicle 66. Here, the touch-sensitive surface is mounted such that it faces away from the eyes of the driver.
The touch-sensitive surface may be oriented within a field of view of the user. However, the control locations may be set based on contact with the touch-sensitive surface using three or more fingers to enhance operation and/or control of the device. For example, as shown in FIG. 18, the touch-sensitive surface may be located within the field of view of the vehicle driver.
The touch-sensitive surface may be mounted to, attached to, or integrated with a device positioned to be in contact with a user. For example, the touch-sensitive surface may be located in the vehicle close to the driver, and preferably at an ergonomic position. By way of example, the touch-sensitive surface may be mounted to a steering wheel and/or steering column, as shown in FIGS. 19A and 19B.
Devices and systems in accordance with the teachings herein may include one or more viewing panels (e.g., display panels of a non-touch-sensitive display screen) for viewing available command functions. The display panel may be located on the same device as the touch-sensitive surface, but at a different location. For example, the display panel and the touch-sensitive surface may be located on opposite sides of the device. The display panel may be located on a different device than the touch-sensitive surface. For example, the touch-sensitive panel may be mounted on a steering wheel, while the display panel may be a panel attached to or integrated with the dashboard of the vehicle. Referring to FIG. 19B, the display panel 68 may be a display screen of a mobile phone or a display screen of a vehicle display device. The viewing panel display may be dimmed or turned off after a predetermined period of inactivity on the touch-sensitive surface.
A device or system incorporating a touch-sensitive surface may include control means for turning the device on or off or for resetting the device. The control component may be a switch or other component capable of performing and/or communicating an on, off, or reset function.
The apparatus or system may include one or more features for disabling the touch-sensitive surface. Such a disabling feature may be particularly useful in vehicles when operation of the apparatus and system is desired to be inhibited based on vehicle operating conditions. For example, the touch-sensitive surface may be disabled when the vehicle is turning and/or currently changing direction, traveling above a certain speed, and the like.
A touch-sensitive device according to the teachings herein may include a grip feature for spreading the fingers to different locations on the touch-sensitive surface.
Control commands according to the teachings herein may be used to control air conditioning, radio, windows, lights, locks, cruise control, applications on a mobile phone, navigation control, the position of a cursor, mechanical devices, electronic devices, operation of a land vehicle, operation of a water vehicle, operation of an aircraft, remote operation of a vehicle or other device, communication devices, or any combination thereof. As used herein, control commands include providing an API or code to enable two computer systems (e.g., a smartphone and an automotive computer system) to interact.
Entry of a password
Devices, methods, systems, and apparatus according to the teachings herein may be used to enter a password for unlocking the device. Here, the password may include a series of gestures, each gesture employing one or more contacts with the finger contact areas. To unlock the device, the touch-sensitive surface is first contacted simultaneously with three or more fingers so that the finger contact areas are assigned based on the locations of the contacts. The user may then enter the password by contacting the finger contact areas to input a series of gestures. As an example, the password may be a series of 2 or more gestures, 4 or more gestures, 6 or more gestures, or 8 or more gestures. Each gesture may be the same as or different from the previous gesture. Each gesture may employ the same or a different finger contact area as the previous gesture. Each gesture may employ the same number of fingers as the previous gesture or a different number of fingers. For example, the password may include a gesture requiring a sliding movement of one, two, or more fingers in the same direction; the password may include a sliding movement of two fingers toward each other; the password may include simultaneously tapping one, two, or more finger contact areas; or any combination thereof. For purposes of illustration, the user may enter a password by first simultaneously contacting the touch-sensitive surface with first, second, third, and fourth fingers to assign a finger contact area for each of the four fingers, followed by the input of a first gesture sliding the first finger to the right, followed by the input of a second gesture sliding the third and fourth fingers together, followed by the input of a third gesture sliding the first, second, and third fingers in an upward direction, followed by the input of a fourth gesture tapping the surface with the fourth finger. It should be appreciated that a large number of possible gestures will result in a more secure password and/or reduce the number of inputs required to obtain a secure password. It will also be appreciated that the password may be entered at different locations on the touch screen surface, thereby reducing the likelihood that evidence of the touches making up the password will remain on the screen after repeated entry of the password over time.
A preferred password includes two or more consecutive gestures in different directions. A preferred password includes two or more consecutive gestures using different finger contact areas or different combinations of finger contact areas.
When the password is entered, the processor may identify simultaneous contact of the touch-sensitive surface at three or more locations, consistent with surface contact by three or more fingers, and then assign finger contact areas based on the locations of the contacts. The number of simultaneous contacts used to assign the finger contact areas may be a predetermined number, such as 3, 4, 5, 6, 7, 8, 9, or 10. After the processor identifies that the multiple simultaneous contacts have been removed from the surface, the processor may monitor the surface for contacts and gestures consistent with the first gesture of a predetermined password.
The system may include a mode to set or reset the password. Setting or resetting the password may include simultaneously contacting the touch-sensitive surface with three or more fingers to assign the finger contact areas, followed by sequential input of the gestures of the password. The system may require the password to be re-entered for confirmation. The system may store the password. The password may be encrypted by the system. The system may delete a previous password.
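Purely for illustration, a gesture password may be modeled as an ordered sequence of (finger set, motion) pairs and compared entry by entry, as in the following Python sketch. The encoding and the stored sequence (which mirrors the four-gesture example above) are assumptions, not a prescribed format.

    # Assumed encoding: each gesture is (tuple of finger indices, motion tag).
    STORED_PASSWORD = [
        ((0,), "right"),         # slide the first finger to the right
        ((2, 3), "together"),    # slide the third and fourth fingers together
        ((0, 1, 2), "up"),       # slide the first three fingers upward
        ((3,), "tap"),           # tap with the fourth finger
    ]

    def verify_password(entered):
        """Compare an entered gesture sequence against the stored password."""
        if len(entered) != len(STORED_PASSWORD):
            return False
        return all(e == p for e, p in zip(entered, STORED_PASSWORD))

In practice, the stored sequence would be hashed or encrypted rather than held in plain form, consistent with the encryption noted above.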
The touch sensitive surface may be attached to a vehicle. A touch screen surface in a vehicle may be proximate to a driver's seat and/or a driver of the vehicle. The touch screen surface used by the passenger may be proximate to the seat of one or more passengers and/or the location of one or more passengers of the vehicle. For example, a touch screen device in a vehicle may be connected to a steering wheel, a steering wheel column, a dashboard, a seat back, a pillar, a door, or any combination thereof.
The touch screen surface may be communicatively coupled to one or more devices to be controlled. For example, the touch screen surface may be communicatively coupled to a control processor of the vehicle.
The touch-sensitive surface may be oriented face down (out of the field of view of the driver or other user). The touch-sensitive surface may be part of, or connected to, a device adapted to track the motion of a plurality of individually identified fingers.
The touch-sensitive surface may be positioned such that the vehicle driver may contact the surface using one or more fingers without having to remove his hands from the steering wheel.
The touch-sensitive surface may be associated with device software (e.g., an application) that identifies and/or tracks multiple contacts of the surface. The device software may run on a computer system. The device software may be adapted to recognize finger-generated contacts, finger-generated gestures, or both. The device software may be adapted to recognize multiple (preferably three or more, four or more, or five or more) simultaneous finger-generated contacts, multiple finger-generated gestures, or both.
Data input device/game controller
Methods, systems, and apparatus according to the teachings herein may be used with data input devices, such as handheld data input devices. For example, the data input device may be a device for controlling the operation of an application, processor, connected device, or remote device. By way of illustration, the data input device may be a device for controlling (e.g., remotely controlling) a video game, a machine, a vehicle, a flying device, a robotic device, or any combination thereof. As an example, the data input device may be a game controller. A handheld device such as a game controller may have one or any combination of the features shown in FIGS. 20A and 20B. Referring to FIGS. 20A and 20B, the handheld device 110 may have a front surface 112 and an opposing rear surface 116. The front surface may include one or more touch-sensitive surfaces 14. Referring to FIG. 20B, the front surface 112 of the device 110 may include a touch-sensitive surface for the fingers of a user's right hand and a second touch-sensitive surface for the fingers of a user's left hand. It should be understood that the front surface may instead have a single touch-sensitive surface large enough for receiving simultaneous contact from the fingers of both hands. The touch-sensitive surface 14 of the front surface 112 of the device is preferably large enough for contact with three or more spaced-apart fingers of a hand. The device preferably has touch-sensitive surfaces of sufficient number and size for contact with 4 or more fingers, 6 or more fingers, or 8 or more fingers. The rear surface may include one or more thumb controls, such as a button, knob, dial, joystick, or roller ball, that can be operated by a thumb while the fingers rest on the front surface. Referring to FIG. 20A, the device 110 may include one or more (e.g., two or more) thumb controls for the left hand, one or more (e.g., two or more) thumb controls 118 for the right hand, or both. The device may have a side surface 114 connecting the front and rear surfaces. The side surface 114 is preferably adapted to receive the palm of a hand. The side surface 114 may be rounded or otherwise curved. The device 110 may include gripping features for assisting in finger placement and/or making the device easier to grip. An example of a gripping feature 120 is shown in FIG. 20C. The front surfaces 112, 112', 112'' may be generally concave, generally flat, or generally convex. Preferably, the front surface 112'' is generally convex and the rear surface is generally concave, as shown in FIG. 20D, so that the placement of the side surface 114 between the thumb and fingers is more natural.
It should be understood that during any of the aforementioned modes (e.g., a mode for setting the initial control positions, a mode for repositioning the control positions, or a mode for inputting control commands), one or more contacts with the touch-sensitive surface (e.g., by contact with one or more fingers) may be replaced by sensing the positions of multiple fingers. For example, the position of a finger may be identified by light (e.g., laser) or other forms of radiation, electric fields, magnetic fields, blackness levels (e.g., shadows), or any combination thereof. It will be appreciated that a glove or other device may be placed over one or more fingers to enable the location of the fingers to be identified and/or to enhance the above-described observation of the finger location. Similarly, gestures on the surface of the device may be sensed in one of the ways described above with or without actual contact with the surface. Preferably, any such sensing occurs when a finger is at least proximate to the touch-sensitive surface (e.g., about 30 mm or less, about 10 mm or less, about 3 mm or less, or about 1 mm or less from the surface). Sensing may also be accomplished when the finger is off the surface. For example, the end of a mode or the end of a command input may be recognized when a finger is sensed to no longer be near the touch-sensitive surface (e.g., more than 1 mm, 3 mm, 10 mm, or 30 mm from the surface, or when the distance from the surface increases by at least about 1 mm, about 3 mm, about 5 mm, about 10 mm, or about 20 mm).
One or more of the above modes may be completed (e.g., timed out by the processor) after a predetermined time limit is exceeded, measured from one or any combination of: i) one or more contacts with the touch-sensitive surface; ii) the sensing of one or more objects (e.g., fingers) above the touch-sensitive surface; iii) the removal of one or more contacts from the touch-sensitive surface; or iv) the sensing of motion of one or more objects (e.g., fingers) away from the touch-sensitive surface. Preferably, the predetermined time limit, if any, is about 0.5 seconds or more, about 1 second or more, about 1.5 seconds or more, about 2 seconds or more, about 3 seconds or more, or about 4 seconds or more. Preferably, the predetermined time limit, if any, is about 100 seconds or less, about 30 seconds or less, about 15 seconds or less, about 10 seconds or less, or about 6 seconds or less. The predetermined time limit may be fixed or may be adjustable (e.g., based on experience or historical values, or set by a user).
The transition from the mode for setting the initial control positions to the mode for inputting control commands may be triggered by a triggering event. As described herein, the triggering event may be the expiration of a predetermined time limit. The triggering event may be the sensing of the removal of one or more objects (e.g., fingers) from the touch-sensitive surface. A triggering event may require the processor to sense that exactly one object (e.g., one finger) remains on or near the touch-sensitive surface while the other objects move away from the touch-sensitive surface.
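As an illustrative sketch, the two triggering events described above (a time limit, or exactly one object remaining) might be checked as follows; the set-based encoding of contacts and the 2-second default are assumptions.

    def transition_triggered(initial_objects, objects_still_near,
                             start_time, now, time_limit_s=2.0):
        """Return True when the mode transition should occur: either the
        assumed time limit has elapsed, or exactly one initially sensed
        object remains on or near the surface and all others have left."""
        if now - start_time >= time_limit_s:
            return True
        remaining = set(objects_still_near) & set(initial_objects)
        return len(remaining) == 1 and len(set(objects_still_near)) == 1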
As an example, if it is stated that the number of parts or a value of a process variable (such as temperature, pressure, time, and the like) is, for example, from 1 to 90, preferably from 20 to 80, more preferably from 30 to 70, it is intended that intermediate ranges such as 15 to 85, 22 to 68, 43 to 51, and 30 to 32 are expressly recited in this specification. For values less than 1, one unit is considered to be 0.0001, 0.001, 0.01, or 0.1, as appropriate. These are merely examples of what is specifically intended, and all possible combinations of numerical values between the lowest value and the highest value recited are to be considered expressly stated in this application.
Unless otherwise indicated, all ranges include endpoints and all numbers between endpoints. The use of "about" or "approximately" in relation to a range applies to both ends of the range. Thus, "about 20 to about 30" is intended to cover "about 20 to about 30," including at least the endpoints specified.
The disclosures of all articles and references, including patent applications and publications, are incorporated herein by reference for all purposes. The term "consisting essentially of" as used to describe a combination is intended to include the identified elements, components, constituents, or steps, together with such other elements, components, or steps that do not materially affect the basic and novel characteristics of the combination. Where combinations of elements, components, constituents, or steps are described herein using the term "comprising" or "comprises," embodiments consisting essentially of those elements, components, constituents, or steps are also contemplated. By the use of the term "may" herein, it is intended that any described attribute that "may" be included is optional.
A plurality of elements, components, groups or steps may be provided as a single integrated element, component, group or step. Alternatively, a single integrated element, component or step may be divided into separate plural elements, components or steps. The disclosure of "a" or "an" to describe an element, component or step is not intended to exclude other elements, components or steps.
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many embodiments and many applications other than the examples provided will be apparent to those of skill in the art upon reading the above description. The scope of the invention should, therefore, be determined not with reference to the above description, but instead with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. The disclosures of all articles and references, including patent applications and publications, are incorporated herein by reference for all purposes. The omission in the claims of any aspect of the subject matter disclosed herein is not a disclaimer of such subject matter, nor should it be regarded as an indication that the inventors did not consider such subject matter to be part of the disclosed inventive subject matter.
2 control session
4 initial positioning mode of control position (e.g. initial finger contact position)
Mode of 6 input control command
7 control commands
8 mode of repositioning control positions
9 end control session
10 simultaneous contact with a touch-sensitive surface
12 touch sensitive device
14 touch sensitive surface
16 contact points/areas of a touch sensitive surface
17 arch (e.g., natural arch of spaced-apart fingertips)
18 finger contact area
20 removing part or all of a finger from a touch-sensitive surface
22 center of finger contact area
24 finger
Assignment of 30 finger contact areas
32 direction of displacement of the finger contact area
34 gesture direction or sliding direction when contacting a surface
36 new finger contact area
38 new center of finger contact area
40 Command input contact (contact touch sensitive surface after establishing finger contact area)
52 screen direction of the touch-sensitive surface (e.g., direction perpendicular to and away from the surface)
62 user of a touch sensitive surface
64 user's eyes
66 vehicle
68 display panel
72 primary finger contact area
74 Secondary finger contact area
82 in a first direction (e.g., an upward direction) on the touch-sensitive surface
84 in a second direction (e.g., a downward direction) on the touch-sensitive surface
86 third direction (e.g., to the right) on the touch-sensitive surface
88 in a fourth direction (e.g., leftward) on the touch-sensitive surface
90 set initial control position (e.g., initial finger contact position)
92 mode for inputting control command
94 mode of repositioning control positions
110 game controller
112 forward surface (e.g., surface away from user)
114 side surface
116 rear surface (e.g., user facing surface)
118 thumb control
120 gripping feature

Claims (21)

1. A method of inputting a command, comprising the steps of:
i. a processor connected to a touch-sensitive surface senses the simultaneous positioning of three or four objects above and proximate to the touch-sensitive surface at different sensing locations, including a location centered on a first finger initial sensing point, a location centered on a second finger initial sensing point, and a location centered on a third finger initial sensing point;
ii. assigning, by the processor, finger position areas for two or more of the three or four objects, wherein each finger position area is a distinct area of the touch-sensitive surface, each finger position area comprises one of the plurality of initial finger sensing points, and each finger position area is assigned based only on input to the touch-sensitive surface at a different said sensing location;
iii. the processor entering a command input mode after the step of assigning the finger position areas, wherein the command input mode comprises an association having at least a first command associated with movement of only one of the objects beginning in the first finger position area and a second command, different from the first command, associated with movement of only one of the objects beginning in the second finger position area; and
iv. the processor recognizing a gesture on the touch-sensitive surface, including sensing motion of only one of the objects beginning in the first finger position area, and recognizing the associated first command based on the gesture;
wherein the finger position areas are assigned only after the positions of the three or four objects are sensed in step i; the touch-sensitive surface is one that faces the user in a vehicle; and one of the objects remains on the touch-sensitive surface, in its assigned finger position area, from the assignment of the finger position areas through the input of the gesture for the first command.
2. The method of claim 1, wherein the method comprises executing control commands for controlling a device.
3. The method of claim 2, wherein the touch-sensitive surface is a surface of a pure input device, the touch-sensitive surface not being a display surface.
4. The method of claim 3, wherein the touch-sensitive surface is attached to a steering wheel.
5. The method of claim 4, wherein a second touch-sensitive surface is attached to the steering wheel.
6. A method of inputting a command, comprising the steps of:
i. a processor connected to the touch-sensitive surface senses the simultaneous positioning of three or more objects above and proximate to the touch-sensitive surface at three or more different sensing locations, including a location centered on a first finger initial sensing point, a location centered on a second finger initial sensing point, and a location centered on a third finger initial sensing point;
ii. assigning, by the processor, finger position areas for two or more of the three or more objects, wherein each finger position area is a distinct area of the touch-sensitive surface, each finger position area comprises one of the plurality of initial finger sensing points, and each finger position area is assigned based only on input to the touch-sensitive surface at a different said sensing location;
iii. the processor entering a command input mode after the step of assigning the finger position areas, wherein the command input mode comprises an association having at least a first command associated with movement of only one of the objects beginning in the first finger position area and a second command, different from the first command, associated with movement of only one of the objects beginning in the second finger position area; and
iv. the processor recognizing a gesture on the touch-sensitive surface, including sensing motion of only one of the objects beginning in the first finger position area, and recognizing the associated first command based on the gesture;
wherein during the command input mode the processor identifies an input location of an object within the first finger position area, the input location being centered on a first finger offset sensing point different from the first finger initial sensing point, and the method includes the step of repositioning the first finger position area.
7. The method of claim 6, wherein the method includes the processor recognizing a gesture on the touch-sensitive surface, including movement of only one of the objects beginning at the second finger position area, and recognizing an associated second command based on the gesture.
8. The method of claim 6, wherein the command input mode is initiated after a predetermined time interval following the processor sensing the simultaneous positioning of the three or more objects.
9. The method of claim 6, wherein the command input mode is initiated after the processor recognizes removal of the one or more objects.
10. The method of claim 9, wherein the command input mode is initiated after the processor identifies removal of all but one of the objects.
11. The method of claim 10, wherein the processor senses continuous contact between one of the objects and the touch-sensitive surface from sensing a location of the object to sensing a gesture made with the object.
12. The method of claim 11, wherein a third command, different from the first and second commands, is associated with a different motion of only one of the objects beginning in the first finger position area.
13. The method of claim 6, wherein the touch-sensitive surface is a surface of a pure input device, the touch-sensitive surface not being a display surface.
14. The method of claim 6, wherein the touch-sensitive surface is oriented such that the surface faces away from the user.
15. The method of claim 6, wherein the method comprises executing control commands for controlling a device.
16. The method of claim 6, wherein the first finger position region has a geometric center, and the step of repositioning the first finger position region includes moving the geometric center of the first finger position region toward the first finger offset sensing point.
17. The method of claim 6, wherein the method comprises one or more of the following features:
i) the three or more objects are fingers;
ii) a step of sensing simultaneous localization when the object is 10mm or less from the touch-sensitive surface;
iii) the three or more different sensing locations coincide with the locations of three or more fingers of the hand;
iv) the processor assigning a finger position area to each object;
v) each finger position area corresponds to only one finger;
vi) initiating a command input mode after the processor assigns a transition event for the finger position area; or
vii) the transition event is a sensing by the processor of all but one of the objects being removed from the touch-sensitive surface, or a sensing up to a predetermined time interval after the positioning of the object.
18. The method of claim 6, wherein the device controlled by the input command is selected from the group consisting of: a radio, a telecommunication device, a heating and/or air conditioning system, an electric motor, an internet connection, an internet application, a video game, and a light.
19. The method of claim 6, wherein each location on the touch-sensitive surface is within at most one of the finger position areas.
20. A method of inputting a control command, comprising the steps of:
i. identifying simultaneous contacts on the touch-sensitive surface at three or more different contact locations, including a contact centered on the first finger initial contact point, a contact centered on the second finger initial contact point, and a contact centered on the third finger initial contact point;
ii. assigning a finger contact area to each of the three or more fingers, wherein each finger contact area is a different area of the touch-sensitive surface, each finger contact area comprises one initial contact point, and each finger contact area is assigned based only on input to the touch-sensitive surface at a different said contact location;
iii. identifying a removal of contact with the touch-sensitive surface at one or more contact locations; and
iv. identifying an input contact in one of the finger contact areas, followed by a sliding movement of the input contact in one or more contact movement directions, and moving a cursor on the display screen in a corresponding one or more cursor movement directions;
wherein the method comprises:
processor identification
i) A single contact on the touch-sensitive surface in a single finger contact area,
ii) movement of a single contact in one or more directions for selecting an application to be controlled, and
iii) removing the single contact from the touch-sensitive surface;
the processor selecting an application to control based on the finger contact area and the motion;
the processor identifies:
i) a different single contact on the touch-sensitive surface, wherein the single contact is at a different one of the finger contact areas,
ii) movement of different individual contacts in one or more directions to select control of an application, and
iii) removing a different single contact from the touch-sensitive surface; and the processor selects control of the application based on the location of the different individual contacts and the motion of the different individual contacts.
21. A system for inputting a control command for controlling a device, comprising:
an input device comprising a touch-sensitive surface;
a processor connected to the touch-sensitive surface;
a memory storing instructions that, when executed by a processor, cause the processor to perform:
i. sensing simultaneous positioning of three or more objects above and near or on the touch-sensitive surface at three or more different sensing locations, including sensing centered at a first finger initial sensing point, sensing centered at a second finger initial sensing point, and sensing centered at a third finger initial sensing point;
ii. assigning finger position areas to two or more of the three or more objects, wherein each finger position area is a distinct area of the touch-sensitive surface, each finger position area comprises one of the plurality of initial sensing points, and each finger position area is assigned based only on input to the touch-sensitive surface at a different said sensing location;
iii. entering a command input mode after a predetermined event, wherein the command input mode comprises an association of at least a first command and a second, different input command, wherein the first command is associated with movement of only one of the objects beginning in the first finger position area and the second, different input command is associated with the same movement of a different one of the objects beginning in the second finger position area; and
iv. identifying a gesture on the touch-sensitive surface, including sensing motion of only one of the objects beginning in the first finger position area, and identifying the relevant input command based on the finger position and the motion;
wherein during the command input mode, the processor identifies an input location of an object within the first finger position area, the input location being centered on a first finger offset sensing point different from the first finger initial sensing point, and the processor repositions the first finger position area.
CN201680061416.1A 2015-08-20 2016-08-22 Apparatus, system and method for inputting commands or characters using a touch screen Active CN108780365B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201562207564P 2015-08-20 2015-08-20
US62/207,564 2015-08-20
US201562266916P 2015-12-14 2015-12-14
US62/266,916 2015-12-14
PCT/IB2016/001256 WO2017029555A2 (en) 2015-08-20 2016-08-22 Device, system, and methods for entering commands or characters using a touch screen

Publications (2)

Publication Number Publication Date
CN108780365A CN108780365A (en) 2018-11-09
CN108780365B true CN108780365B (en) 2020-07-14

Family

ID=58050803

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680061416.1A Active CN108780365B (en) 2015-08-20 2016-08-22 Apparatus, system and method for inputting commands or characters using a touch screen

Country Status (3)

Country Link
EP (1) EP3338172A4 (en)
CN (1) CN108780365B (en)
WO (1) WO2017029555A2 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10146428B2 (en) 2011-04-21 2018-12-04 Inpris Innovative Products From Israel Ltd Device, system, and methods for entering commands or characters using a touch screen
US10120567B2 (en) 2015-04-02 2018-11-06 Inpris Innovative Products From Israel Ltd System, apparatus and method for vehicle command and control
US11449167B2 (en) 2017-06-26 2022-09-20 Inpris Innovative Products Fromisrael, Ltd Systems using dual touch and sound control, and methods thereof
DE102018100196A1 (en) * 2018-01-05 2019-07-11 Bcs Automotive Interface Solutions Gmbh Method for operating a human-machine interface and human-machine interface
US10915184B1 (en) * 2020-01-10 2021-02-09 Pixart Imaging Inc. Object navigation device and object navigation method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7030861B1 (en) * 2001-02-10 2006-04-18 Wayne Carl Westerman System and method for packing multi-touch gestures onto a hand
WO2010016065A1 (en) * 2008-08-08 2010-02-11 Moonsun Io Ltd. Method and device of stroke based user input
CN102947783A (en) * 2010-03-26 2013-02-27 欧特克公司 Multi-touch marking menus and directional chording gestures

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110292268A1 (en) * 2010-05-26 2011-12-01 T-Mobile Usa, Inc. Multi-region touchpad device
US9235340B2 (en) * 2011-02-18 2016-01-12 Microsoft Technology Licensing, Llc Modal touch input
US9261972B2 (en) * 2011-04-21 2016-02-16 Inpris Innovative Products Ltd Ergonomic motion detection for receiving character input to electronic devices
US8970519B2 (en) * 2012-02-01 2015-03-03 Logitech Europe S.A. System and method for spurious signal detection and compensation on an input device
US20130194235A1 (en) * 2012-02-01 2013-08-01 Logitec Europe S.A. Multi-sensor input device
US9075462B2 (en) * 2012-12-10 2015-07-07 Sap Se Finger-specific input on touchscreen devices

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7030861B1 (en) * 2001-02-10 2006-04-18 Wayne Carl Westerman System and method for packing multi-touch gestures onto a hand
WO2010016065A1 (en) * 2008-08-08 2010-02-11 Moonsun Io Ltd. Method and device of stroke based user input
CN102947783A (en) * 2010-03-26 2013-02-27 欧特克公司 Multi-touch marking menus and directional chording gestures

Also Published As

Publication number Publication date
CN108780365A (en) 2018-11-09
EP3338172A4 (en) 2019-07-03
EP3338172A2 (en) 2018-06-27
WO2017029555A2 (en) 2017-02-23
WO2017029555A3 (en) 2017-04-27

Similar Documents

Publication Publication Date Title
US10146428B2 (en) Device, system, and methods for entering commands or characters using a touch screen
CN108780365B (en) Apparatus, system and method for inputting commands or characters using a touch screen
EP2541385B1 (en) Information processing apparatus, information processing method, program and remote control system
US10120567B2 (en) System, apparatus and method for vehicle command and control
US9111076B2 (en) Mobile terminal and control method thereof
US10076839B2 (en) Robot operation apparatus, robot system, and robot operation program
EP2509335B1 (en) Remote operation device, remote operation system, remote operation method, and program
JP4351599B2 (en) Input device
DK2834050T3 (en) A method of operating an industrial robot.
US20170262057A1 (en) Method for operating a display, display device for a motor vehicle, and motor vehicle having a display device
EP2933130A2 (en) Vehicle control apparatus and method thereof
US20120203544A1 (en) Correcting typing mistakes based on probabilities of intended contact for non-contacted keys
US9387590B2 (en) Method for operating an industrial robot
EP2469386A1 (en) Information processing device, information processing method and program
US9703375B2 (en) Operating device that can be operated without keys
US10146432B2 (en) Method for operating an operator control device of a motor vehicle in different operator control modes, operator control device and motor vehicle
US20180150136A1 (en) Motor vehicle operator control device with touchscreen operation
JP2019169128A (en) Method for operating man-machine interface and man-machine interface
CN110045815A (en) For running the method and man-machine interface of man-machine interface
WO2013071198A2 (en) Finger-mapped character entry systems
CN110780732A (en) Input system based on space positioning and finger clicking
CN105283829B (en) Method for operating a touch-sensitive operating system and touch-sensitive operating system
JP2015184841A (en) gesture input device
JP2008009596A (en) Input device
KR101500412B1 (en) Gesture recognize apparatus for vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant