CN110488974B - Method and wearable device for providing virtual input interface

Info

Publication number
CN110488974B
CN110488974B (application CN201910757959.2A)
Authority
CN
China
Prior art keywords
wearable device
input
virtual
input interface
display
Prior art date
Legal status
Active
Application number
CN201910757959.2A
Other languages
Chinese (zh)
Other versions
CN110488974A
Inventor
尹仁国
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Priority claimed from KR1020140179354A (KR102360176B1)
Application filed by Samsung Electronics Co Ltd
Publication of CN110488974A
Application granted
Publication of CN110488974B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method and wearable device for providing a virtual input interface are disclosed. The wearable device includes: an image sensor configured to sense a gesture image of a user setting a user input region; and a display configured to provide a virtual input interface corresponding to the set user input region.

Description

Method and wearable device for providing virtual input interface
The present application is a divisional application of the patent application filed with the Chinese National Intellectual Property Office on March 17, 2015, having application number 201580001071.6 and entitled "Method and wearable device for providing virtual input interface".
Technical Field
One or more exemplary embodiments relate to a method and a wearable apparatus for providing a virtual input interface.
Background
The real world is a space composed of three-dimensional (3D) coordinates. People recognize 3D space by combining visual information acquired with both eyes. However, photographs or moving images captured by general digital devices are expressed in 2D coordinates and thus do not include information about the space. To provide a sense of space, 3D cameras and display products that capture and display 3D images by using two cameras have been introduced.
Disclosure of Invention
Technical problem
Meanwhile, the current input methods of smart glasses are limited. The user basically controls the smart glasses by using voice commands. However, when text input is required, it is difficult for the user to control the smart glasses by using voice commands alone. Accordingly, there is a need for wearable systems that provide various input interaction methods.
Solution to the problem
Methods and apparatuses consistent with exemplary embodiments include a method and wearable device for setting an input region in the air or on a real object based on a user action and providing a virtual input interface in the set input region.
Advantageous effects of the invention
A user of the glasses-type wearable device can easily provide input to control the wearable device by using a virtual input interface displayed in an input area set in the air or on a real object.
Drawings
These and/or other aspects will become apparent and more readily appreciated from the following description of the exemplary embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1a to 1e are diagrams describing a system for providing a virtual input interface by a wearable device according to an exemplary embodiment;
fig. 2 is a flowchart illustrating a method of providing a virtual input interface by a wearable device, according to an example embodiment;
fig. 3a to 5b are diagrams describing a method of setting an input region according to an exemplary embodiment;
FIG. 6 is a flowchart illustrating a method of providing a virtual input interface according to depth values of an input area according to an exemplary embodiment;
FIGS. 7 through 9 are diagrams describing types and sizes of virtual input interfaces changed according to depth values of an input area according to an exemplary embodiment;
FIGS. 10a and 10b are diagrams describing types of virtual input interfaces adaptively changed according to a change in depth values of actual objects provided with input regions according to exemplary embodiments;
FIGS. 10c and 10d are diagrams describing a type of a virtual input interface changed based on a user input according to an exemplary embodiment;
FIG. 11 is a flowchart illustrating a method of providing a virtual input interface determined based on a size of an input area or a setting action of the input area, according to an exemplary embodiment;
fig. 12a to 13b are diagrams describing types of virtual input interfaces that change according to the size of an input area;
FIGS. 14a to 15b are diagrams describing a type of a virtual input interface changed according to a gesture for setting an input region;
fig. 16a and 16b are diagrams illustrating providing a virtual input interface determined based on an object provided with an input region according to an exemplary embodiment;
fig. 17a to 17c are diagrams describing a virtual input interface provided by a wearable device according to an exemplary embodiment, the virtual input interface being determined based on a type of an actual object provided with an input area;
fig. 18a and 18b are diagrams illustrating a virtual input interface determined based on an input tool used to set an input region, according to an exemplary embodiment;
fig. 19 is a flowchart illustrating a method of providing a virtual input interface determined based on an application being executed by a wearable device, according to an example embodiment;
FIGS. 20a and 20b are diagrams illustrating providing a virtual input interface determined based on the type of application being executed according to an exemplary embodiment;
FIG. 21 is a diagram depicting a virtual input interface determined based on the type of content being executed in accordance with an illustrative embodiment;
fig. 22a to 23b are diagrams describing providing the same virtual input interface as a previous virtual input interface when a wearable device recognizes the actual object on which the previous virtual input interface was provided, according to an exemplary embodiment;
FIG. 24 is a flowchart illustrating a method of providing a virtual input interface in an input area set in the air, in accordance with an illustrative embodiment;
FIGS. 25a and 25b are diagrams for describing a method of determining whether an input is generated through a virtual input interface when an input area is set in the air;
FIG. 26 is a flowchart illustrating a method of providing a virtual input interface in an input area set in the air or on a real object, according to an exemplary embodiment;
fig. 27a and 27b are diagrams describing a method of determining whether an input is generated through a virtual input interface when an input area is set on an actual object;
FIGS. 28a and 28b are diagrams for describing a method of acquiring a first depth value of an input area and a second depth value of an input tool according to an exemplary embodiment;
FIG. 29 is a flowchart illustrating a method of providing feedback regarding whether input was generated through a virtual input interface in accordance with an illustrative embodiment;
fig. 30 and 31 are diagrams describing outputting a notification signal corresponding to whether an input is generated by the wearable device according to an exemplary embodiment;
FIG. 32 is a diagram illustrating outputting a notification signal corresponding to whether an input is generated through a virtual input interface in accordance with an illustrative embodiment;
fig. 33 and 34 are block diagrams of wearable devices according to example embodiments.
Best mode for carrying out the invention
Methods and apparatuses consistent with exemplary embodiments include a method and wearable device for setting an input region in the air or on a real object based on a user action and providing a virtual input interface in the set input region.
Additional aspects will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the exemplary embodiments shown.
According to one or more exemplary embodiments, a wearable device includes: an image sensor configured to sense a gesture image in which a user sets a user input region; a display configured to provide a virtual input interface corresponding to a user input region set by using the sensed gesture image.
The sensed gesture image may correspond to a figure drawn by the user, and the virtual input interface may be displayed to correspond to the sensed figure.
The virtual input interface may be displayed to correspond to a size of the user input area.
The virtual input interface may be determined based on a type of application being executed by the glasses-type wearable device.
The display may include a transparent display, wherein the transparent display is configured to provide a virtual input interface on an area of the transparent display corresponding to a user input area viewed through the transparent display.
The image sensor may be configured to capture a first image of the user input area and the display may be configured to display a second image of the virtual input interface over the user input area of the first image.
The glasses-type wearable device may further include: a depth sensor configured to sense a first depth value corresponding to a distance from the glasses-type wearable device to the user input area and a second depth value corresponding to a distance from the glasses-type wearable device to the input tool; a controller configured to determine whether an input is generated through the virtual input interface based on the first depth value and the second depth value.
The size of the displayed virtual input interface may be determined based on the first depth value.
The controller may be configured to determine that an input is generated through the virtual input interface when a difference between the first depth value and the second depth value is less than a threshold.
The controller may be configured to determine that the input is generated through the virtual input interface when the second depth value is greater than the first depth value.
According to one or more exemplary embodiments, there is provided a method of providing a virtual input interface by a wearable device, the method comprising: acquiring a gesture image of a user for setting a user input area; providing a virtual input interface corresponding to the user input area such that the virtual input interface corresponds to a size of the user input area.
The step of acquiring the gesture image may include: acquiring the gesture image by recognizing a figure drawn by the user; and setting an area corresponding to the figure drawn by the user as the user input area.
The virtual input interface may be determined based on a size of the user input area.
The method may further comprise: the virtual input interface is determined based on a type of an object provided with the user input area.
The method may further comprise: the virtual input interface is determined based on a type of application being executed by the wearable device.
The virtual input interface may be disposed on the transparent display such that the virtual input interface corresponds to a user input area viewed through the transparent display.
The step of providing the virtual input interface may include: capturing a first image of the user input area by using an image sensor; generating a second image of the virtual input interface; and displaying the second image over the user input area of the first image.
The method may further comprise: acquiring a first depth value corresponding to a distance from the glasses-type wearable device to the user input area and a second depth value corresponding to a distance from the glasses-type wearable device to the input tool; based on the first depth value and the second depth value, it is determined whether an input is generated through the virtual input interface.
The size of the displayed virtual input interface may be determined based on the user input area.
The step of determining whether an input is generated may include: determining that a difference between the first depth value and the second depth value is less than a threshold.
The step of determining whether an input is generated may include: it is determined that the second depth value is greater than the first depth value.
According to one or more exemplary embodiments, a wearable input device includes: a sensor configured to sense a plurality of gestures and a real world image; a display configured to display a graphical user interface; a controller configured to determine an input region of the real-world image, control the display to display the graphical user interface on an area corresponding to the determined input region, and determine an input based on an input gesture of the plurality of gestures.
The wearable input device may include a communicator, wherein the communicator is configured to receive a touch signal from an external device. The controller may also be configured to determine an input based on the touch signal.
The determination may also be made based on an input region definition gesture of the plurality of gestures.
The sensor may be further configured to determine a distance between the eyewear-type wearable input device and the input region.
The controller may be further configured to continuously update the display area of the graphical user interface based on the real world image.
Detailed Description
Terms used in the present specification will be described briefly and one or more exemplary embodiments will be described in detail.
All terms used herein, including descriptive or technical terms, should be understood to have meanings apparent to those of ordinary skill in the art. However, these terms may have different meanings according to the intention of one of ordinary skill in the art, precedent decisions, or emerging new technologies. In addition, the applicant can arbitrarily select some terms, and in this case, the meanings of the selected terms will be described in detail in the detailed description of the present invention. Therefore, the terms used herein must be defined based on the meanings of the terms as well as the descriptions throughout the specification.
In addition, when a component "comprises" or "includes" an element, the component may also include, but not exclude, other elements unless there is a particular description to the contrary. In the following description, terms such as "unit" and "module" indicate a unit for processing at least one function or operation, wherein the unit and the module may be implemented as hardware or software or by combining hardware and software.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. When a statement such as "at least one of …" follows a list of elements, the statement modifies the entire list of elements rather than modifying individual elements in the list.
One or more exemplary embodiments will now be described more fully with reference to the accompanying drawings. The one or more exemplary embodiments may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein; rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of one or more exemplary embodiments to those skilled in the art. In the following description, well-known functions or constructions are not described in detail because they would obscure the one or more exemplary embodiments with unnecessary detail. Throughout the specification, like reference numerals in the drawings represent like or similar elements.
Fig. 1a to 1e are diagrams describing a system for providing a virtual input interface by a wearable device 100 according to an exemplary embodiment.
The wearable device 100 according to an example embodiment may include a head-mounted display (HMD) that can be mounted on the head. For example, the HMD may be glasses, a helmet, or a hat, but is not limited thereto. The wearable device 100 according to an exemplary embodiment may also be a watch, a band, a ring, a necklace, a bracelet, a shoe, an earring, a headband, a garment, a glove, or a thimble.
The wearable device 100 according to an exemplary embodiment may be one device or a combination of multiple devices. For example, the wearable device 100 may be glasses or a combination of at least two devices, such as glasses and a ring, glasses and a watch, or glasses and a thimble.
The wearable device 100 according to an example embodiment may provide at least one virtual input interface. For example, the wearable device 100 according to an example embodiment may display a virtual input interface on the optical display 121 such that the virtual input interface matches the real world as viewed through the optical display 121.
Now, the structure of the optical display 121 will be described in detail with reference to fig. 1 b.
Referring to FIG. 1b, the optical display 121 may include a display device 210 and a light guide unit 200a. The light guide unit 200a may include a light guide device 220a and a variable lens 240a. In addition, the display device 210 may output the first light 201 forming an image to the light guide device 220a. The display device 210 may have the shape of a quadrangular plate, and may display an image in units of pixels according to data input from a controller. For example, the display device 210 may be a light-emitting diode (LED), an organic LED (OLED), a liquid crystal display (LCD), or liquid crystal on silicon (LCOS).
The light guide device 220a may include first to fifth surfaces 221 to 225a. The light guide device 220a may guide the first light 201 input from the display device 210 to the variable lens 240a via internal reflection or total internal reflection.
The first surface 221 corresponds to a portion of a rear surface of the light guide device 220a facing the display device 210, and may transmit the first light 201 input from the display device 210 toward the second surface 222. The second surface 222 corresponds to a first side surface of the light guide 220a between the first surface 221 and the third surface 223, and may reflect the first light 201, which has penetrated the first surface 221, toward the third surface 223 or the fourth surface 224.
The third surface 223 corresponds to the front surface of the light guide 220a, the fourth surface 224 corresponds to the remaining portion of the rear surface of the light guide 220a, and both the third surface 223 and the fourth surface 224 reflect or totally reflect the first light 201 such that the first light 201 reaches the fifth surface 225 a. Here, the total reflection means that the first light 201 incident from the inside of the light guide 220a to the interface (i.e., the third surface 223 or the fourth surface 224) of the light guide 220a and the external air layer is totally reflected without penetrating the interface.
The fifth surface 225a corresponds to a second side surface of the light guide device 220a between the third surface 223 and the fourth surface 224, and may transmit the first light 201 toward the variable lens 240a and reflect the first light 201 incident from the variable lens 240a toward the user's eye. The fifth surface 225a may transmit the second light 202 forming a front view of the wearable device 100 toward the user's eye.
The light guide device 220a may include: a body portion 232a disposed between the third surface 223 and the fourth surface 224 and having a uniform thickness; a first inclined portion 231 disposed between the first surface 221 and the second surface 222 and having a thickness gradually decreasing away from the body portion 232a; and a second inclined portion 233a disposed between the third surface 223 and the fourth surface 224 and having a thickness gradually decreasing away from the body portion 232a. The second inclined portion 233a may have a fifth surface 225a, wherein the fifth surface 225a is an inclined surface facing the variable lens 240a and the user's eye.
The variable lens 240a may include: a light-transmitting surface 241 through which the first light 201 is transmitted; a refractive surface 242 that refracts the first light 201; and a reflective surface 243a that reflects the first light 201. The shape or curvature of the refractive surface 242 may be changed under the control of the controller. The variable lens 240a may adjust a virtual object distance from the user's eye to a virtual object by adjusting the angle at which the first light 201 is incident on the user's eye (i.e., the incident angle) according to the change in the shape or curvature of the refractive surface 242.
Fig. 1c and 1d are diagrams describing adjusting a distance of a virtual input interface by using a variable lens 240a according to an exemplary embodiment.
The variable lens 240a may adjust the distance from the user's eye 30 to the virtual input interface 41 recognized by the user by adjusting the incident angle of the first light 43 incident on the eye 30 according to the control of the controller.
Referring to fig. 1c, the thickness of the eyepiece 31 is reduced to focus the eye 30 on the actual object 34 at a long distance. The second light 35 originating from the actual object 34 moves parallel to the optical axis 33 of the eye 30, is incident on the eyepiece 31 through the fifth surface 225a of the light guide device 220a, and is converged on the retina 32 by being refracted at the eyepiece 31. In other words, the eyepiece 31 forms an image of the actual object 34 on the retina 32.
The variable lens 240a may transmit the first light 43 to the fifth surface 225a. The first light 43 reflected at the fifth surface 225a moves parallel to the optical axis 33 of the eye 30 to be incident on the eyepiece 31, and the eyepiece 31 can refract the first light 43 to condense the first light 43 onto the retina 32. In other words, the eyepiece 31 may form an image of the virtual input interface 41 on the retina 32. For example, when the real object 34 (or the image of the real object 34) is in focus, the real object 34 (or the image of the real object 34) and the virtual input interface 41 (or the image of the virtual input interface 41) may have the same first object distance OD1 and the same image distance ID.
Referring to fig. 1d, the thickness of the eyepiece 31 is increased to focus the eye 30 onto the actual object 36 at a short distance. The second light 37 originating from the actual object 36 moves along the optical axis 33 of the eye 30 while being condensed (or diffused), is incident on the eyepiece 31 through the fifth surface 225a of the light guide device 220a, and is condensed onto the retina 32 by being refracted by the eyepiece 31. In other words, the eyepiece 31 forms an image of the actual object 36 on the retina 32. The variable lens 240a may transmit the first light 44 to the fifth surface 225a. The first light 44 reflected from the fifth surface 225a is incident on the eyepiece 31 by moving along the optical axis 33 of the eye 30 while being condensed (or diffused), and the eyepiece 31 can refract the first light 44 to condense the first light 44 onto the retina 32. In other words, the eyepiece 31 may form an image of the virtual input interface 42 on the retina 32. For example, when the real object 36 (or the image of the real object 36) is in focus, the real object 36 (or the image of the real object 36) and the virtual input interface 42 (or the image of the virtual input interface 42) may have the same second object distance OD2 and the same image distance ID.
Meanwhile, as will be described later in detail with reference to fig. 2, the wearable device 100 according to an exemplary embodiment may recognize an action of an input tool for setting an input region and provide a virtual input interface determined based on a property of the input region.
Referring to fig. 1e, the virtual input interface 50 according to an exemplary embodiment may be a Graphical User Interface (GUI) for receiving user input using the first wearable device 100. Alternatively, the virtual input interface 50 may be implemented in any of various forms, for example, the virtual input interface 50 may be a keyboard (such as a QWERTY keyboard or a portable terminal keyboard), a notepad, a game controller, a calculator, a piano keyboard, a drum, or a dial, but is not limited thereto.
The wearable device 100 according to an exemplary embodiment may provide the virtual input interface 50 on an input area set by a user. Wearable device 100 may display virtual input interface 50 on optical display 121 such that virtual input interface 50 overlaps the input region.
Here, the wearable device 100 may display the virtual input interface 50 on the optical display 121 in the form of augmented reality (AR), mixed reality (MR), or virtual reality (VR).
For example, when virtual input interface 50 is provided in the form of an AR or MR, wearable device 100 may display virtual input interface 50 on a transparent display such that virtual input interface 50 overlaps with an input region viewed through the transparent display.
As shown in fig. 1e, the area 20 defined by the dashed line represents the real world area as viewed through the optical display 121 of the wearable device 100. The wearable device 100 may display the virtual input interface 50 on the optical display 121 such that the virtual input interface 50 matches the region 20 viewed through the optical display 121.
Alternatively, when the virtual input interface 50 is provided in the form of a VR, the wearable device 100 may capture a first image including an input region set in the real world, and generate a second image by adding the virtual input interface 50 to the input region of the first image. The wearable device 100 may display a second image on the opaque display, wherein in the second image the virtual input interface 50 overlaps the input area.
The wearable device 100 according to an exemplary embodiment may include an image sensor 111 and a depth sensor 112.
The image sensor 111 may capture an external image or detect a user motion of setting the input area. In addition, the image sensor 111 may detect movement of the input tool. Here, the input tool may be a preset tool, and examples of the input tool include a pen, a finger, a stylus pen, and a joystick, but are not limited thereto.
The depth sensor 112 may measure a depth value of the input area or a depth value of the input tool set by the user. The depth value may correspond to a distance from the depth sensor 112 to a particular object. In this specification, as the distance from the depth sensor 112 to a specific object increases, the depth value increases.
For example, the depth value may be a distance from the depth sensor 112 to a particular object on the Z-axis. As shown in fig. 1a, in the 3D space, the X-axis may be a reference axis passing through the wearable device 100 from left to right, the Y-axis may be a reference axis passing through the wearable device 100 from top to bottom, and the Z-axis may be a reference axis passing through the wearable device 100 from back to front. In addition, the X, Y, and Z axes may be perpendicular to each other.
According to an example embodiment, the depth sensor 112 may obtain the depth value of the object via any of various methods. For example, the depth sensor 112 may measure depth values using at least one of a time-of-flight (TOF) method, a stereo vision method, and a structured light pattern method.
The TOF method measures the distance to an object by analyzing the time taken for light to return after being reflected from the object. In a TOF system, an infrared LED emits a pulse of infrared light, and an infrared camera measures the time until the infrared light pulse returns after being reflected from the object. In this case, the depth sensor 112 may include an infrared LED and an infrared camera. The depth sensor 112 may repeatedly emit and receive light several tens of times per second to acquire distance information in the form of a moving image. In addition, the depth sensor 112 may generate a depth map in which the distance information is represented by the brightness or color of each pixel.
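As a rough illustration of the TOF relationship described above, the sketch below recovers distance from the round-trip time of an infrared pulse; the function name and the example timing value are illustrative assumptions, not part of the description.

```python
# Minimal sketch of the TOF principle: distance = (speed of light x round-trip time) / 2.
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance to the object, given the time for the infrared pulse to return."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# A pulse returning after about 0.67 nanoseconds corresponds to roughly 10 cm.
print(tof_distance_m(0.67e-9))  # ~0.100 m
```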
The stereo vision method acquires a 3D effect of an object by using two cameras. Accordingly, the depth sensor 112 may include two cameras. The depth sensor 112 may calculate the distance based on triangulation by using the disparity between the images captured by the two cameras. A human perceives a 3D effect through the difference between the images seen by the left and right eyes, and the depth sensor 112 measures distance in the same manner as the human eyes. For example, when the distance is short, the disparity between the images captured by the two cameras is large, and when the distance is long, the disparity is small.
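The triangulation step can be summarized by the standard disparity-to-depth relation; this is a generic sketch, and the focal length and baseline values are assumptions for illustration rather than parameters of the depth sensor 112.

```python
def stereo_depth_m(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from the pixel disparity between the left and right camera images."""
    return focal_length_px * baseline_m / disparity_px

# A near object produces a large disparity (small depth); a far object, a small disparity.
print(stereo_depth_m(focal_length_px=700.0, baseline_m=0.06, disparity_px=120.0))  # 0.35 m
print(stereo_depth_m(focal_length_px=700.0, baseline_m=0.06, disparity_px=30.0))   # 1.4 m
```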
The structured light pattern method irradiates an object with patterned light and measures the distance to the object by analyzing the position of the pattern on the surface of the object. The depth sensor 112 typically projects a linear pattern or a dot pattern onto the object, and the pattern is deformed according to the curvature of the object.
The structured light pattern method may be performed by replacing one of the two cameras used in the stereoscopic vision method with a light projector. For example, the depth sensor 112 may calculate a depth map in real time through an algorithm that analyzes the position of a pattern generated as light emitted from an infrared projector is incident on the surface of an object.
Meanwhile, the image sensor 111 and the depth sensor 112 may be separate sensors or configured as one sensor.
The wearable device 100 according to an exemplary embodiment may determine whether an input is generated through the virtual input interface 50 by using the depth value of the input area or the input tool acquired through the image sensor.
Fig. 2 is a flowchart of a method of providing a virtual input interface by wearable device 100, according to an example embodiment.
Referring to fig. 2, in operation S210, the wearable device 100 may set an input region. The input area may be a real-world 2D or 3D space that overlaps with the virtual input interface when the virtual input interface is displayed on the optical display 121.
The wearable device 100 may set the input region based on a user action. For example, the wearable device 100 may recognize a figure drawn by a user in the air or on an actual object (such as a palm, a table, or a wall) using an input tool (such as a finger, a pen, a stylus, or a joystick), and set an area corresponding to the figure as an input area.
Alternatively, the wearable device 100 may recognize a preset object and set an area corresponding to the preset object as an input region. Alternatively, the wearable device 100 may recognize a movement of the user touching a preset object using the input tool and set an area corresponding to the preset object as an input area.
A method of setting the input area will be described in detail later with reference to fig. 3a to 5 b.
In addition, the wearable device 100 according to an exemplary embodiment may receive a preset voice input or a preset key input for entering an input region setting mode. For example, when a voice input or a key input for entering the input region setting mode is received, the wearable device 100 may be controlled to acquire a user gesture image for setting an input region. Alternatively, the wearable device 100 may be controlled to acquire a user gesture image for setting an input region when an application requiring input is executed.
When the input region is set, the wearable device 100 may determine a virtual input interface to be displayed based on the attribute of the input region in operation S220.
For example, the wearable device 100 may determine a virtual input interface to be displayed on the optical display 121 based on at least one of a size of the input area, a shape of the input area, a distance between the input area and the wearable device 100 (a depth value of the input area), a type of an actual object in which the input area is set, and a gesture in which the input area is set.
In operation S230, the wearable device 100 may display the virtual input interface to overlap the input region.
Here, wearable device 100 may display the virtual input interface in the form of AR, MR, or VR.
For example, when the virtual input interface is displayed in the form of an AR or MR, wearable device 100 may display the virtual input interface on a transparent display (such as a see-through display) such that the virtual input interface overlaps with an input region (a real-world 2D or 3D space) viewed through the transparent display.
Alternatively, when the virtual input interface is displayed in the form of VR, the wearable device 100 may capture a first image (real image) including an input region (2D or 3D space of the real world) and generate a second image by adding the virtual input interface (virtual image) to the input region of the first image. The wearable device 100 may display a second image on an opaque display (such as a near-view display), where in the second image the virtual input interface overlaps the input region.
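The VR-style rendering described above can be pictured as a simple image composite: the rendered virtual input interface is pasted over the input-region rectangle of the captured frame. This is a minimal sketch; the array shapes and the input-region coordinates are illustrative assumptions.

```python
import numpy as np

def composite_second_image(first_image: np.ndarray, interface_image: np.ndarray,
                           top: int, left: int) -> np.ndarray:
    """Overlay the virtual input interface on the input region of the captured first image."""
    second_image = first_image.copy()
    h, w = interface_image.shape[:2]
    second_image[top:top + h, left:left + w] = interface_image
    return second_image

first_image = np.zeros((480, 640, 3), dtype=np.uint8)    # captured real image (first image)
interface = np.full((120, 320, 3), 255, dtype=np.uint8)  # rendered virtual keyboard
second_image = composite_second_image(first_image, interface, top=300, left=160)
# second_image is what the opaque display would show.
```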
In operation S240, the wearable device 100 according to an exemplary embodiment may acquire a first depth value of an input area and a second depth value of an input tool touching a virtual input interface.
The wearable device 100 may measure a distance from the wearable device 100 to the input area (a depth value of the input area, i.e., a first depth value) by using the depth sensor 112.
Meanwhile, when the input area is not on a single plane, the input area may have a plurality of depth values. When there are a plurality of depth values of the input area, the first depth value may be one of the average, the minimum, and the maximum of the plurality of depth values, but is not limited thereto.
When the input area is disposed on the real object, the first depth value may be a depth value of the real object.
The wearable device 100 may measure a distance from the wearable device 100 to the input tool (a depth value of the input tool, i.e., the second depth value) by using the depth sensor 112.
When the input tool is a 3D object, there may be multiple depth values of the input tool. When there are a plurality of depth values of the input tool, the second depth value may be one of an average depth value of the plurality of depth values, a minimum depth value of the plurality of depth values, and a maximum depth value of the plurality of depth values, but is not limited thereto.
For example, when the virtual input interface is touched by the input tool, the depth value of the point at which the input tool and the virtual input interface contact each other (the end point of the input tool) may be used as the second depth value.
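A minimal sketch of reducing several depth samples to one representative value, as allowed above; whether the mean, minimum, or maximum is used is left open by the description, so the choice below is only an example, and the sample values are hypothetical.

```python
from statistics import mean

def representative_depth(depth_values_cm, mode="mean"):
    """Reduce multiple depth samples of the input area or input tool to a single value."""
    reducers = {"mean": mean, "min": min, "max": max}
    return reducers[mode](depth_values_cm)

first_depth = representative_depth([9.8, 10.1, 10.4])                # input area samples
second_depth = representative_depth([10.2, 10.5, 10.9], mode="max")  # e.g., end point of the tool
```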
In operation S250, the wearable device 100 may determine whether an input is generated through the virtual input interface by comparing the first depth value and the second depth value.
For example, a first depth value of the input area may be a reference value used to determine whether the input is generated, and wearable device 100 may determine that the input is generated through the virtual input interface when a difference between the first depth value and the second depth value is less than a threshold.
Alternatively, wearable device 100 may determine that an input was generated through the virtual input interface when the second depth value is greater than the first depth value.
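Combining the two criteria above, the sketch below shows the kind of decision the wearable device 100 might make; the 1 cm threshold is an assumed value, since the description does not fix one.

```python
def is_input_generated(first_depth_cm: float, second_depth_cm: float,
                       threshold_cm: float = 1.0, pass_through: bool = False) -> bool:
    """Decide whether an input occurred on the virtual input interface.

    first_depth_cm: depth value of the input area (the reference value)
    second_depth_cm: depth value of the input tool
    """
    if pass_through:
        # Alternative criterion: the input tool has moved beyond the input area.
        return second_depth_cm > first_depth_cm
    # Default criterion: the input tool is within `threshold_cm` of the input area.
    return abs(first_depth_cm - second_depth_cm) < threshold_cm

print(is_input_generated(10.0, 9.4))                      # True: difference below threshold
print(is_input_generated(10.0, 10.6, pass_through=True))  # True: tool is beyond the area
```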
The wearable device 100 according to an exemplary embodiment may set an input area based on a user action and determine whether an input is generated by comparing a depth value of the input area and a depth value of an input tool to improve accuracy of input through a virtual input interface.
Fig. 3a to 5b are diagrams describing a method of setting an input region according to an exemplary embodiment.
Referring to fig. 3a and 3b, the wearable device 100 according to an exemplary embodiment may set an input region by recognizing a figure drawn by a user in the air or on an actual object.
For example, as shown in fig. 3a, a user may draw a figure, such as a rectangle, in the air by using an input tool 310 (such as a pen, joystick, stylus, or finger). The wearable device 100 may recognize the figure and set an area corresponding to the figure as the input region 320. For example, an area having the depth value of the figure (the distance from the wearable device 100 to the figure), the shape of the figure, and the size of the figure may be set as the input area 320.
As shown in fig. 3a, the figure may be a rectangle, but the shape of the figure is not limited thereto. Examples of the figure include figures having various shapes and sizes (such as circles, polygons, and free-form curves), 2D figures, and 3D figures.
Alternatively, as shown in fig. 3b, the user may draw a figure 340 (such as a rectangle) on the actual object 330 by using an input tool 345 (such as a pen, joystick, stylus, or finger). The wearable device 100 may recognize the figure 340 drawn by the user and set an area corresponding to the figure 340 as an input area. For example, an area having the depth value of the figure 340 (the distance from the wearable device 100 to the actual object 330), the shape of the figure 340, and the size of the figure 340 may be set as the input area.
Referring to fig. 4a and 4b, the wearable device 100 according to an exemplary embodiment may set an input region by recognizing a specific object.
For example, as shown in fig. 4a, wearable device 100 may identify palm 410 by using image sensor 111. Here, information about the shape or size of the palm 410 may be prestored in the wearable device 100. Accordingly, the wearable device 100 may compare the shape and size of the palm 410 with pre-stored information and determine whether to set the palm 410 as an input area.
When the shape and size of the palm 410 are the same as the pre-stored information, the wearable device 100 may set the preset region 420 of the palm 410 as an input region. Here, the shape and size of the preset region 420 may be different.
As shown in fig. 4a, the wearable device 100 may recognize the palm 410 and set an input area. Alternatively, the wearable device 100 may set the input area by recognizing any of various objects, such as a table or a notepad.
In addition, the wearable device 100 may define a specific shape as a marker, and set a plane of an actual object including the marker as an input region when the marker is recognized.
For example, when a rectangle is defined as the mark, the wearable device 100 may recognize the rectangle as the mark by using the image sensor 111. As shown in fig. 4b, wearable device 100 may identify the notepad in rectangle 430 as a marker.
When the marker is recognized, the wearable device 100 may set a plane of the actual object including the marker as an input region. For example, as shown in fig. 4b, wearable device 100 may set the plane of the notepad in rectangle 430 as the input region. Here, the wearable device 100 may set the entire plane of the notepad as the input area, or set a partial area of the plane of the notepad as the input area.
As shown in fig. 4b, a rectangle may be defined as a marker. Alternatively, any of various shapes such as a circle and a polygon may be defined as the mark.
Referring to fig. 5a, the wearable device 100 according to an exemplary embodiment may set an input region by recognizing an actual input interface.
Wearable device 100 may recognize the actual input interface and display a virtual input interface that is of the same type as the actual input interface. In addition, wearable device 100 may receive input from a user touching an actual input interface using input tool 520 (such as a pen, joystick, stylus, or finger) and then identify the actual input interface.
Examples of the actual input interface include, but are not limited to, an actual keyboard, an actual keypad, an actual notepad interface, an actual calculator, an actual piano keyboard, an actual game controller, and an actual dial. Alternatively, the actual input interface may be a GUI displayed on the mobile terminal.
For example, as shown in fig. 5a, when the user touches the actual keyboard 510 by using the input tool 520, the wearable device 100 may recognize the actual keyboard 510 touched by the input tool 520. At this time, the wearable device 100 may acquire the depth value of the real keyboard 510 and the depth value of the input tool 520 by using the depth sensor 112, and determine that the real keyboard 510 is touched when a difference between the depth value of the real keyboard 510 and the depth value of the input tool 520 is equal to or less than a threshold value.
Additionally, information regarding the type, shape, and size of one or more actual input interfaces may be pre-stored in the wearable device 100. Accordingly, the wearable device 100 may compare the type, shape, and size of the actual keyboard 510 recognized by the image sensor 111 with pre-stored information and determine whether the actual keyboard 510 is the actual input interface.
In addition, wearable device 100 may display a virtual input interface that corresponds to the actual input interface. Wearable device 100 may display a virtual input interface on optical display 121 that is the same size and shape as the actual input interface such that the virtual input interface overlaps the area of the actual input interface.
For example, as shown in fig. 5a, when the actual keyboard 510 is identified, the wearable device 100 may display a virtual keyboard having the same size and shape as the actual keyboard 510 such that the virtual keyboard overlaps with the area where the actual keyboard 510 is displayed.
Meanwhile, referring to fig. 5b, the wearable device 100 according to an exemplary embodiment may recognize a plane of an actual object and set an input region.
The wearable device 100 may recognize a plane of an actual object, and when the user touches the plane by using an input tool (such as a pen, a joystick, a stylus pen, or a finger), the wearable device 100 may set the touched plane as an input region.
For example, as shown in fig. 5b, when the user touches the plane 540 of the notepad by using an input tool 530, such as a pen, the wearable device 100 may identify the plane 540 of the notepad touched by the input tool 530. Here, the wearable device 100 may acquire the depth value of the plane 540 and the depth value of the input tool 530 by using the depth sensor 112, and determine that the input tool 530 touches the plane 540 when a difference between the depth value of the plane 540 and the depth value of the input tool 530 is equal to or less than a threshold value.
Accordingly, the wearable device 100 may set the plane 540 touched by the input tool 530 as an input region.
Fig. 6 is a flowchart illustrating a method of providing a virtual input interface according to a depth value of an input area according to an exemplary embodiment.
Referring to fig. 6, in operation S610, the wearable device 100 may set an input region based on a user action. Since operation S610 has been described in detail above with reference to operation S210 of fig. 2 and figs. 3a to 5b, the details thereof are not repeated.
In operation S620, the wearable device 100 may acquire a first depth value of the input region.
When the input area is set in the air, the wearable device 100 may acquire the depth value of the input area based on the user action of setting the input area. For example, when the user draws a figure in the air by using the input tool, the wearable device 100 may acquire the depth value of the input tool drawing the figure by using the depth sensor 112 and set the depth value of the input tool as the first depth value of the input area.
Alternatively, when the input area is set on the real object, the wearable device 100 may acquire the depth value of the real object by using the depth sensor 112 and set the depth value of the real object as the first depth value of the input area.
In operation S630, the wearable device 100 may determine a type of virtual input interface to be displayed based on the first depth value of the input area.
For example, when the first depth value of the input area is equal to or less than the first threshold value, the wearable device 100 may determine a first keyboard having a first size as the virtual input interface to be displayed on the optical display 121.
In addition, when the first depth value of the input area is greater than the first threshold and equal to or less than a second threshold that is greater than the first threshold, the wearable device 100 may determine a second keyboard having a second size as the virtual input interface to be displayed on the optical display 121, wherein the second size is less than the first size.
In addition, when the first depth value of the input area is greater than the second threshold value, the wearable device 100 may determine a third keyboard having a third size as the virtual input interface to be displayed on the optical display 121, wherein the third size is smaller than the second size.
As the first depth value of the input region increases, the size of the input region observed by the user of wearable device 100 decreases, so wearable device 100 may determine a virtual input interface having a relatively smaller size. However, the exemplary embodiments are not limited thereto.
In addition, as will be described in detail later with reference to fig. 7 to 9, the wearable device 100 may determine not only the size but also the shape of the virtual input interface based on the first depth value of the input area.
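A sketch of the depth-based selection described in operation S630; the threshold values and the interface names are assumptions chosen to match the palm examples in figs. 7 to 9 (7 cm, 10 cm, and 15 cm), not values given by the description.

```python
def select_virtual_input_interface(first_depth_cm: float,
                                   first_threshold_cm: float = 8.0,
                                   second_threshold_cm: float = 12.0) -> str:
    """Pick the interface to display based on the first depth value of the input area."""
    if first_depth_cm <= first_threshold_cm:
        return "qwerty_keyboard"            # largest layout, closest input area
    if first_depth_cm <= second_threshold_cm:
        return "mobile_terminal_keyboard"   # e.g., a Cheonjiin keyboard
    return "handwriting_input_window"       # smallest layout, most distant input area

for depth_cm in (7, 10, 15):
    print(depth_cm, select_virtual_input_interface(depth_cm))
```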
Referring back to fig. 6, in operation S640, the wearable device 100 may display the virtual input interface determined in operation S630 on the optical display 121 such that the virtual input interface overlaps with the input region set in operation S610.
In addition, the wearable device 100 may acquire a second depth value of the input tool touching the virtual input interface in operation S650, and compare the first depth value and the second depth value to determine whether an input is generated through the virtual input interface in operation S660.
Since operations S640 to S660 of fig. 6 have been described in detail above with reference to operations S230 to S250 of fig. 2, the details thereof are not repeated.
Fig. 7 to 9 are diagrams describing the type and size of a virtual input interface displayed on the optical display 121 changed according to the depth value of an input area.
Referring to fig. 7, the wearable device 100 may recognize a gesture (e.g., a gesture of drawing a rectangle) in which a user sets an input region on a palm 710 that is 7cm away from the wearable device 100 using an input tool such as a finger, a pen, a stylus, or a joystick. Wearable device 100 may display QWERTY keyboard 720 on optical display 121 based on the gesture such that QWERTY keyboard 720 matches palm 710 as viewed through optical display 121. Here, as shown in fig. 7, the QWERTY keyboard 720 may include an input window (a window displaying "enter message"), on which text input through the QWERTY keyboard 720 may be displayed.
In addition, referring to fig. 8, the wearable device 100 may recognize a gesture (e.g., a gesture of drawing a rectangle) in which the user sets an input region on a palm 810, which is 10cm away from the wearable device 100, using an input tool such as a finger, a pen, a stylus, or a joystick.
When the distance between the palm 810 and the wearable device 100 is 10cm, the size of the palm 810 as viewed through the optical display 121 may be smaller than the size of the palm 710 of fig. 7 that is 7cm from the wearable device 100. Accordingly, the wearable device 100 may display a mobile terminal keyboard 820, such as a Cheonjiin keyboard, on the optical display 121 such that the mobile terminal keyboard 820 matches the palm 810 as viewed through the optical display 121.
In addition, referring to fig. 9, the wearable device 100 may recognize a gesture (e.g., a gesture of drawing a rectangle) in which the user sets an input region on a palm 910 that is 15cm away from the wearable device 100 using an input tool such as a finger, a pen, a stylus, or a joystick.
When the distance between the palm 910 and the wearable device 100 is 15cm, the size of the palm 910 as viewed through the optical display 121 may be smaller than the size of the palm 810 of fig. 8 that is 10cm from the wearable device 100. Accordingly, wearable device 100 may display handwriting input window 920 on optical display 121 such that handwriting input window 920 matches palm 910 viewed through optical display 121.
As shown in fig. 7 to 9, as the distance between the palm (input region) and the wearable device 100 increases (as the first depth value of the input region increases), it is determined that the virtual input interface is the QWERTY keyboard 720, the mobile terminal keyboard 820, and the handwriting input window 920 in this order, but the exemplary embodiments are not limited thereto. As the distance between the palm (input region) and the wearable device 100 decreases (as the first depth value of the input region decreases), the virtual input interface may be determined to be the handwriting input window 920, the mobile terminal keyboard 820, and the QWERTY keyboard 720 in that order, and any type of virtual input interface may be determined.
Fig. 10a and 10b are diagrams describing types of virtual input interfaces adaptively changed according to a change in depth values of actual objects provided with input regions according to exemplary embodiments.
Referring to fig. 10a, the wearable device 100 may recognize a gesture (e.g., a gesture of drawing a rectangle) in which a user sets an input region on a palm 1010 that is 7cm away from the wearable device 100 using an input tool such as a finger, a pen, a stylus, or a joystick. The wearable device 100 may display a QWERTY keyboard 1020 on the optical display 121 based on the gesture, such that the QWERTY keyboard 1020 matches the palm 1010 viewed through the optical display 121.
While the QWERTY keyboard 1020 is displayed, the user may move the palm 1010 away from the wearable device 100 such that the distance between the wearable device 100 and the palm 1010 becomes 10cm.
As shown in fig. 10a, when the distance between the wearable device 100 and the palm 1010 is 10cm, the size of the palm 1010 as viewed through the optical display 121 may be smaller than when the palm 1010 is 7cm from the wearable device 100. Accordingly, the wearable device 100 may display a mobile terminal keyboard 1030, such as a Cheonjiin keyboard, on the optical display 121 instead of the previously displayed QWERTY keyboard 1020, such that the mobile terminal keyboard 1030 matches the palm 1010 as viewed through the optical display 121.
Alternatively, the wearable device 100 may recognize a gesture (e.g., a gesture of drawing a rectangle) in which the user sets an input region on the palm 1010 10cm away from the wearable device 100 using the input tool. Wearable device 100 may display mobile terminal keyboard 1030 as overlapping palm 1010 based on the gesture.
When the mobile terminal keyboard 1030 is displayed, the user may move the palm 1010 closer to the wearable device 100 such that the distance between the wearable device 100 and the palm 1010 becomes 7cm.
When the distance between wearable device 100 and palm 1010 is 7cm, the size of palm 1010 as viewed through optical display 121 may be greater than the size of palm 1010 that is 10cm from wearable device 100. Thus, the wearable device 100 may display a QWERTY keyboard 1020 on the optical display 121 instead of the displayed mobile terminal keyboard 1030 such that the QWERTY keyboard 1020 matches the palm 1010 as viewed through the optical display 121.
As such, the user may change the type of virtual input interface by changing the position of the actual object (the distance between the actual object and the wearable device) after the input area is set on the actual object.
Referring to fig. 10b, the wearable device 100 may acquire a first distance (e.g., 7cm) between the wearable device 100 and the palm 1010 (the actual object) and, based on the first distance, display a first virtual input interface (e.g., the QWERTY keyboard 1020) on the palm 1010 as viewed through the optical display 121. For example, the variable lens 240a of fig. 1b may be changed (or the curvature of the refractive surface of the variable lens may be changed) to adjust the incident angle of the first light 1025 incident on the user's eye such that the distance from the user's eye to the QWERTY keyboard 1020 recognized by the user is the first distance.
In addition, the wearable device 100 may acquire a second distance (e.g., 10cm) between the wearable device 100 and the palm 1010 (the actual object) and, based on the second distance, display a second virtual input interface (e.g., the mobile terminal keyboard 1030) on the palm 1010 as viewed through the optical display 121. For example, the variable lens 240a of fig. 1b may be changed (or the curvature of the refractive surface of the variable lens may be changed) to adjust the incident angle of the first light 1035 incident on the user's eye such that the distance from the user's eye to the mobile terminal keyboard 1030 recognized by the user is the second distance.
Fig. 10c and 10d are diagrams describing a type of a virtual input interface changed based on a user input according to an exemplary embodiment.
Referring to fig. 10c, wearable device 100 may display a first virtual input interface (e.g., QWERTY keyboard 1020) on optical display 121 based on the user's gesture such that QWERTY keyboard 1020 matches palm 1010 as viewed through optical display 121. Here, the wearable device 100 may display keys 1050 for changing the virtual input interface. When an input of the selection key 1050 is received from the user, as shown in fig. 10d, the wearable device 100 may display a second virtual input interface (e.g., mobile terminal keyboard 1030) in the area where the first virtual input interface is displayed. In addition, keys 1050 for changing the virtual input interface may be displayed. Upon receiving an input from the user selecting the key 1050, as shown in fig. 10c, the wearable device 100 may display a third virtual input interface in the area where the second virtual input interface is displayed, or may display a QWERTY keyboard 1020.
Fig. 11 is a flowchart of a method of providing a virtual input interface determined based on a size of an input area or a setting action of the input area, according to an exemplary embodiment.
Referring to fig. 11, in operation S1110, the wearable device 100 may set an input region by using a user gesture for allocating an area in which a virtual input interface is displayed. Since the operation S1110 has been described in detail with reference to the operation S210 of fig. 2 and fig. 3a to 5b, the details thereof are not repeated.
In operation S1120, the wearable device 100 may determine a shape or type of the virtual input interface based on the size of the input region or the user gesture.
For example, when the area of the input region is equal to or less than a first threshold, the wearable device 100 may provide a virtual input interface having a first area.
Alternatively, when the area of the input region is greater than the first threshold and equal to or less than a second threshold that is greater than the first threshold, the wearable device 100 may provide a virtual input interface having a second area that is greater than the first area. Here, the size of the input region may be determined based on a height, a width, a diagonal length, a diameter, or an area.
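As an illustration of operation S1120, the area-based selection might look like the following sketch; the threshold values and interface labels are hypothetical and are not taken from the embodiment.

```python
# A minimal sketch of selecting a virtual input interface from the area of the
# input region, assuming hypothetical thresholds in square centimeters.

FIRST_THRESHOLD_CM2 = 100.0   # hypothetical first threshold
SECOND_THRESHOLD_CM2 = 300.0  # hypothetical second threshold (greater than the first)

def select_interface_by_area(width_cm: float, height_cm: float) -> str:
    """Return an interface label according to the area of the input region."""
    area = width_cm * height_cm
    if area <= FIRST_THRESHOLD_CM2:
        return "interface_with_first_area"   # smaller interface
    if area <= SECOND_THRESHOLD_CM2:
        return "interface_with_second_area"  # larger interface
    return "interface_with_second_area"      # assumed behavior above the second threshold

print(select_interface_by_area(10, 10))  # interface_with_first_area
print(select_interface_by_area(20, 10))  # interface_with_second_area
```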
Additionally, wearable device 100 may provide different types of virtual input interfaces based on the figure drawn by the user. The figure may be drawn in the air or on a real object and may be used to set the input region.
For example, when the user draws a first figure to set the input region, the wearable device 100 may recognize the first figure and provide a virtual input interface corresponding to the first figure. Likewise, when the user draws a second figure to set the input region, the wearable device 100 may provide a virtual input interface corresponding to the second figure.
This will be described in detail later with reference to fig. 12a to 15 b.
Referring back to fig. 11, in operation S1130, the wearable device 100 may display the virtual input interface determined in operation S1120 on the optical display 121 according to the size of the input region set in operation S1110.
For example, the virtual input interface may be displayed on the optical display 121 such that the virtual input interface is shown in the input area. At this time, the shape of the virtual input interface may be the same as the shape of the input region, and the size of the virtual input interface may be equal to or smaller than the size of the input region.
In addition, the wearable device 100 may acquire a first depth value of the input area and a second depth value of the input tool touching or approaching the virtual input interface in operation S1140, and the wearable device 100 may determine whether the input is generated through the virtual input interface by comparing the first depth value and the second depth value in operation S1150.
Since operations S1130 to S1150 of fig. 11 have been described above with reference to S230 to S250 of fig. 2, the details thereof are not repeated.
Fig. 12a to 13b are diagrams describing types of virtual input interfaces displayed according to the size of an input area.
As shown in fig. 12a, a user of wearable device 100 may draw a figure for setting an input area on a table 1210. For example, a user may draw a rectangle 1220 having a first size (e.g., 20 cm by 10 cm) on a table 1210 using both hands. Here, the wearable device 100 may set the input region by using a gesture in which the user draws the rectangle 1220 using both hands.
Additionally, as in fig. 12b, in response to the gesture drawing rectangle 1220, wearable device 100 may display virtual piano keyboard 1230 as overlapping with the area of rectangle 1220 as viewed through optical display 121. The wearable device 100 may display the virtual piano keyboard 1230 on the optical display 121 such that the virtual piano keyboard 1230 matches the first sized rectangle 1220. Here, the size of the virtual piano keyboard 1230 may be determined from the first size of the rectangle 1220.
As shown in fig. 13a, the user may draw a diagram for setting an input area on the table 1310. For example, the user may draw a rectangle 1320 having a second size (e.g., 10cm by 10cm) on the table 1310 using both hands. Here, the wearable device 100 may recognize a gesture in which the user draws the rectangle 1320 using both hands as a gesture to set the input region.
Additionally, as shown in fig. 13b, in response to the gesture drawing the rectangle 1320, the wearable device 100 may display the virtual piano keyboard 1330 as overlapping with the area of the rectangle 1320 viewed through the optical display 121. The wearable device 100 may display the virtual piano keyboard 1330 on the optical display 121 such that the virtual piano keyboard 1330 matches the second sized rectangle 1320. Here, the size of the virtual piano keyboard 1330 may be determined according to the second size of the rectangle 1320.
Alternatively, the wearable device 100 may provide a virtual input interface having not only different sizes but also different shapes based on the size of the input area.
Referring to fig. 12b and 13b, the virtual piano keyboard 1230 shown in fig. 12b may be a piano keyboard displayed in a row, and the virtual piano keyboard 1330 shown in fig. 13b may be a piano keyboard displayed in two rows, but is not limited thereto.
Fig. 14a to 15b are diagrams describing types of virtual input interfaces changed according to a gesture of setting an input region.
As shown in fig. 14a, when the user draws a rectangle 1430 on a palm 1410 viewed through the optical display 121 using a finger 1420, the wearable device 100 may recognize a gesture drawing the rectangle 1430 by using the image sensor 111, and set an area corresponding to the rectangle 1430 as an input area.
At this time, as shown in fig. 14b, the wearable device 100 may display the virtual mobile terminal keyboard 1450 on the optical display 121 such that the virtual mobile terminal keyboard 1450 overlaps with the rectangular area viewed through the optical display 121. For example, the wearable device 100 may display the virtual mobile terminal keyboard 1450 on the optical display 121 according to the size of the rectangular area. Alternatively, the wearable device 100 may display the virtual mobile terminal keyboard 1450 on an opaque display.
As shown in fig. 15a, when the user draws a circle 1530 on the palm 1510 viewed through the optical display 121 using the finger 1520, the wearable device 100 may recognize a gesture of drawing the circle 1530 by using the image sensor 111 and set an area corresponding to the circle 1530 as an input region.
At this point, as shown in fig. 15b, wearable device 100 may display virtual dial 1550 on optical display 121 such that virtual dial 1550 overlaps the circular area viewed through optical display 121. For example, wearable device 100 may display virtual dial 1550 on optical display 121 to match the size of the circular region. Alternatively, wearable device 100 may display a virtual dial on an opaque display.
As such, the wearable device 100 according to an exemplary embodiment may provide virtual input interfaces having different shapes according to the type of gesture setting the input region, and information about the type, size, and shape of the virtual input interface provided according to the gesture type may be stored in the wearable device 100.
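The stored mapping between the figure drawn while setting the input region and the interface type could be as simple as a lookup table; the sketch below assumes hypothetical keys and labels and is not the device's actual data format.

```python
# A minimal sketch, assuming a simple lookup table, of mapping the figure drawn
# by the gesture (fig. 14a-15b) to the type of virtual input interface.

GESTURE_TO_INTERFACE = {
    "rectangle": "mobile_terminal_keyboard",  # as in fig. 14b
    "circle": "dial_pad",                     # as in fig. 15b
}

def interface_for_gesture(figure: str) -> str:
    # Fall back to an assumed default keyboard for figures without a stored mapping.
    return GESTURE_TO_INTERFACE.get(figure, "mobile_terminal_keyboard")

print(interface_for_gesture("circle"))     # dial_pad
print(interface_for_gesture("rectangle"))  # mobile_terminal_keyboard
```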
Fig. 16a and 16b are diagrams describing providing a virtual input interface determined based on an object provided with an input region according to an exemplary embodiment.
Referring to fig. 16a, a user may draw a diagram (e.g., a rectangle) for setting an input area on a table 1610 viewed through an optical display 121. For example, the user draws a rectangle on the table 1610 by using both hands.
The wearable device 100 may recognize a gesture of drawing a rectangle as a gesture of setting an input region, and set an area corresponding to the rectangle drawn on the table 1610 as the input region.
Here, when the table 1610 is an actual object provided with an input area, the user can use both hands, and thus the wearable device 100 may determine the QWERTY keyboard 1620 as a virtual input interface.
Additionally, the wearable device 100 may display a QWERTY keyboard 1620 on the optical display 121 such that the QWERTY keyboard 1620 overlaps a rectangular area of the table 1610 viewed through the optical display 121. For example, the wearable device 100 may display a QWERTY keyboard 1620 on the optical display 121 according to the size of the rectangular area. Alternatively, the wearable device 100 may display a QWERTY keyboard 1620 on an opaque display.
Referring to fig. 16b, the user may draw a diagram (e.g., a rectangle) for setting an input region on the palm 1630 viewed through the optical display 121. For example, the user may draw a rectangle on the palm 1630 by using a finger.
The wearable device 100 may recognize a gesture of drawing a rectangle as a gesture of setting an input region, and set an area corresponding to the rectangle drawn on the palm 1630 as the input region.
Here, when the palm 1630 is an actual object provided with an input area, the user can use only one hand, and thus the wearable device 100 can set the mobile terminal keyboard 1640 as a virtual input interface.
Additionally, the wearable device 100 may display the mobile terminal keyboard 1640 on the optical display 121 overlapping a rectangular area on the palm 1630 viewed through the optical display 121. For example, the wearable device 100 may display the mobile terminal keyboard 1640 on the optical display 121 according to the size of the rectangular area. Alternatively, the wearable device 100 may display the mobile terminal keyboard 1640 on an opaque display.
The color of the virtual input interface may be determined based on the color of the input area. For example, when the color of the input region is a first color, the color of the virtual input interface may be determined to be a second color different from the first color or a third color that is a complementary color of the first color. In this manner, the user can easily distinguish the virtual input interface overlapping the input area viewed through the optical display 121 from the input area.
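The complementary-color choice mentioned above can be illustrated with a small sketch; the RGB representation, the helper name, and the sample values are assumptions for illustration only.

```python
# A minimal sketch of choosing the virtual input interface color as the
# complementary color of the input area (the "third color" described above).

def complementary_color(rgb):
    """Return the complement of an (R, G, B) color with 8-bit channels."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

# Example (assumed values): a skin-toned input area maps to a bluish interface
# color that stands out against it.
print(complementary_color((224, 172, 105)))  # (31, 83, 150)
```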
Fig. 17a to 17c are diagrams describing a virtual input interface provided by the wearable device 100 according to an exemplary embodiment, the virtual input interface being determined based on a type of an actual object provided with an input area.
As shown in fig. 17a to 17c, it is assumed that a user wearing the wearable device 100 performs a gesture of setting an input region on a book 1700 while reading the book 1700.
The wearable device 100 according to an exemplary embodiment may recognize the type of an actual object provided with an input region by using the image sensor 111. For example, as shown in fig. 17a, wearable device 100 may detect a gesture by a user drawing rectangle 1710 on book 1700 using input tool 1701 by using image sensor 111. At this time, the wearable device 100 may recognize, via image processing, that the book 1700 is an actual object on which the input area is drawn, and thus, may determine the notepad as a virtual input interface corresponding to the book 1700.
As shown in fig. 17b, wearable device 100 may display virtual notepad 1720 on optical display 121 such that virtual notepad 1720 overlaps an input area disposed on book 1700 viewed through optical display 121.
Alternatively, the wearable device 100 according to an exemplary embodiment may set a blank space in the book 1700 where no text or image is displayed as an input area via image processing, and may display the virtual notepad 1720 on the optical display 121 such that the virtual notepad 1720 corresponds to the blank space viewed through the optical display 121.
In addition, wearable device 100 may obtain a first depth value of book 1700 and a second depth value of input tool 1701, and display the input on virtual notepad 1720 when it is determined that the input was generated based on the first depth value and the second depth value.
Additionally, as shown in fig. 17c, wearable device 100 may store input data 1730 displayed on the virtual notepad based on user input.
As such, when a user reads the book 1700 while wearing the wearable device 100, the user can easily store important information by using the virtual notepad.
Fig. 18a and 18b are diagrams describing a virtual input interface determined based on an input tool setting an input region according to an exemplary embodiment.
Referring to fig. 18a and 18b, a user may draw a diagram (e.g., a rectangle) for setting an input region in the air or on an actual object by using an input tool such as a finger or a pen.
The wearable device 100 may recognize a gesture of drawing a rectangle by using the input tool as a gesture of setting the input region, and set a rectangle drawn in the air or on an actual object as the input region.
When the input region is set, the wearable device 100 may determine the virtual input interface based on the input tool that set the input region.
For example, as shown in fig. 18a, when the input area 1810 is set using a finger 1820 as an input tool, the wearable device 100 may determine a mobile terminal keyboard 1830 that is easily touched by the finger 1820 as a virtual input interface.
As such, the wearable device 100 may display the mobile terminal keyboard 1830 on the optical display 121 such that it overlaps the input area 1810 viewed through the optical display 121. Alternatively, wearable device 100 may display the mobile terminal keyboard 1830 on an opaque display.
Meanwhile, as shown in fig. 18b, when the input region 1840 is set using the pen 1850 as an input tool, the wearable device 100 may determine a handwriting input window 1860, which is easily used through the pen 1850, as a virtual input interface.
As such, wearable device 100 may display handwriting input window 1860 on optical display 121 such that it overlaps input region 1840 viewed through optical display 121. Alternatively, wearable device 100 may display handwriting input window 1860 on an opaque display.
Fig. 19 is a flowchart illustrating a method of providing a virtual input interface determined based on an application being executed by a wearable device, according to an example embodiment.
Referring to fig. 19, in operation S1910, the wearable device 100 may execute an application. For example, wearable device 100 may select and execute any of a plurality of applications provided in wearable device 100. Here, the user may execute the application by using a voice input or a key input.
For example, wearable device 100 may execute a messaging application when a message is to be sent to an external device. At this time, the message may be a text message, an instant message, a chat message, or an email.
Alternatively, the wearable device 100 may receive a message from an external device and execute a message application to respond to or view the received message.
When an application that requires input of text or numbers, such as a message application, is executed (when a virtual input interface is to be displayed), wearable device 100 may receive a gesture and set an input region based on the gesture in operation S1920. Since the operation S1920 has been described in detail above with reference to the operation S210 of fig. 2 and fig. 3a to 5b, the details thereof are not repeated.
In operation S1930, wearable device 100 may determine a virtual input interface based on the type of application being executed.
For example, as will be described in detail later with reference to fig. 20a and 20b, when a message application is executed and text input is required for preparing a message, the wearable device 100 may determine a virtual keyboard, such as a QWERTY keyboard or a mobile terminal keyboard, as a virtual input interface. Alternatively, wearable device 100 may determine the virtual dial as the virtual input interface when the messaging application requires a numeric input, such as the recipient's phone number.
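A minimal sketch of this application-dependent selection (operation S1930) is shown below; the application names, input categories, and default interface are assumptions made for the sketch.

```python
# A minimal sketch of choosing the virtual input interface from the kind of
# input the executed application currently requires.

def interface_for_application(app: str, required_input: str) -> str:
    if required_input == "text":
        # e.g. composing a message body in a message application
        return "qwerty_keyboard"
    if required_input == "number":
        # e.g. entering the recipient's phone number
        return "virtual_dial"
    return "mobile_terminal_keyboard"  # assumed default

print(interface_for_application("message", "text"))    # qwerty_keyboard
print(interface_for_application("message", "number"))  # virtual_dial
```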
In operation S1940, the wearable device 100 may display the virtual input interface to overlap the input area.
Here, wearable device 100 may display the virtual input interface in the form of AR, MR, or VR.
For example, when wearable device 100 displays a virtual input interface in the form of an AR or MR, the virtual input interface may be displayed on the transparent display to overlap with the input area.
Alternatively, when wearable device 100 displays the virtual input interface in the form of a VR, the virtual input interface may be displayed on an opaque display to overlap with the input area.
In operation S1950, the wearable device 100 may acquire a first depth value of an input area and a second depth value of an input tool touching the virtual input interface.
In operation S1960, the wearable device 100 may determine whether an input is generated through the virtual input interface by comparing the first depth value and the second depth value.
Since operations S1940 to S1960 of fig. 19 correspond to operations S230 to S250 of fig. 2, the details thereof are not repeated.
Fig. 20a and 20b are diagrams describing providing a virtual input interface determined based on the type of an application being executed according to an exemplary embodiment.
Wearable device 100 may execute a call application based on user input. For example, the call application may be executed by using a voice input or a key input.
When the call application is executed, the user may set an input area to display a virtual input interface for inputting a phone number of a person the user wants to call. For example, the wearable device 100 may recognize a gesture in which the user draws an input region on the palm 2010 and set the input region on the palm 2010.
Then, the wearable device 100 may determine a virtual input interface corresponding to the calling application being executed, and as shown in fig. 20a, display a virtual dial 2020 as the virtual input interface on the optical display 121 such that the virtual dial 2020 overlaps with the palm 2010 viewed through the optical display 121.
Alternatively, wearable device 100 may execute a notepad application based on user input. For example, the user may execute the notepad application by using a voice input or a key input.
When the notepad application is executed, the user may set the input area to display a virtual input interface for entering text. For example, the wearable device 100 may recognize a gesture that sets an input region on the palm 2010 and set the input region on the palm 2010.
Then, wearable device 100 may determine a virtual input interface corresponding to the notepad application and, as shown in fig. 20b, display a virtual mobile terminal keyboard 2030 as the virtual input interface on optical display 121 such that virtual mobile terminal keyboard 2030 overlaps with palm 2010 viewed through optical display 121. However, the exemplary embodiments are not limited thereto.
FIG. 21 is a diagram illustrating a virtual input interface determined based on the type of content being executed, according to an illustrative embodiment.
The wearable device 100 according to an example embodiment may determine a virtual input interface to display based on the type of content being executed by the wearable device 100.
Examples of the content include, but are not limited to, still images, moving images, texts, and web pages. For example, the content may be educational content, movie content, broadcast content, game content, commercial content, picture content, or news content.
Executing content may mean that the content is displayed, output, or reproduced.
Referring to fig. 21, the wearable device 100 may detect a gesture to set an input region while executing game content 2110. At this point, the wearable device 100 may display the virtual game control panel 2115 corresponding to the game content 2110 on a transparent or opaque display so that it overlaps with the input area.
Alternatively, the wearable device 100 may detect a gesture to set the input region while executing music content 2120 such as drum content. At this time, the wearable device 100 may display the drum panel 2125 corresponding to the music content 2120 on a transparent or opaque display so as to overlap the input region.
Alternatively, wearable device 100 may detect a gesture to set the input region while displaying web page 2130. At this time, the wearable device 100 may display the virtual keyboard 2135 for searching for information from the web page 2130 on a transparent or opaque display so as to overlap the input area.
Fig. 22a to 23b are diagrams describing the same virtual input interface as a previous virtual input interface provided when the wearable device 100 recognizes an actual object provided with the previous virtual input interface, according to an exemplary embodiment.
As shown in fig. 22a, when the user draws a rectangle 2230 on the palm 2210 by using the finger 2220, the wearable device 100 may recognize a gesture of drawing the rectangle 2230 by using the image sensor 111, and set an area corresponding to the rectangle 2230 as an input region.
Here, the wearable device 100 may determine the type of virtual input interface to be displayed based on the type of application currently being executed. For example, wearable device 100 may determine mobile terminal keyboard 2250 as a virtual input interface when a notepad application requiring text input is being executed, although example embodiments are not limited thereto.
As shown in fig. 22b, the wearable device 100 may display the mobile terminal keypad 2250 on the optical display 121 such that the mobile terminal keypad 2250 overlaps with the rectangular area viewed through the optical display 121. Alternatively, the wearable device 100 may display the mobile terminal keypad 2250 on an opaque display.
Then, while executing the notepad application, the wearable device 100 may recognize the same object as the actual object (the palm 2210 of fig. 22b) on which the virtual input interface was previously displayed.
For example, as shown in fig. 23a, wearable device 100 may detect a palm 2210 of a user by using image sensor 111. At this time, the wearable device 100 may recognize, via image processing, that the palm 2210 is an actual object (the palm 2210 of fig. 22 b) on which the virtual input interface is disposed.
When an actual object is recognized, as shown in fig. 23b, wearable device 100 may provide the same virtual input interface as previously provided in the input region.
For example, the wearable device 100 may display a previously provided mobile terminal keypad 2250 on the optical display 121 to overlap with the input area 2270 viewed through the optical display 121, even when the user is not drawing a rectangle to set the input area by using the input tool.
As such, the user may enable wearable device 100 to identify the actual object on which the virtual input interface was previously displayed, such that wearable device 100 provides the previously provided virtual input interface again.
FIG. 24 is a flowchart illustrating a method of providing a virtual input interface in an input area disposed over the air according to an exemplary embodiment.
Referring to fig. 24, in operation S2410, the wearable device 100 may set an input region in the air. For example, as described above with reference to fig. 3a, the wearable device 100 may recognize a diagram drawn in the air by a user using an input tool (such as a finger, a pen, a stylus, or a joystick), and set an area corresponding to the diagram as an input area.
In operation S2420, the wearable device 100 may determine a virtual input interface.
For example, wearable device 100 may determine a virtual input interface based on the properties of the input region. The wearable device 100 may determine a virtual input interface to be displayed on the optical display 121 based on at least one of a size of the input area, a shape of the input area, a distance between the input area and the wearable device 100 (a first depth value of the input area), and a gesture to set the input area.
Alternatively, wearable device 100 may determine the virtual input interface based on the type of application or content being executed. For example, when an application being executed requires text input, wearable device 100 may determine a virtual keyboard, such as a QWERTY keyboard or a mobile terminal keyboard, as the virtual input interface. Alternatively, wearable device 100 may determine a virtual dial as the virtual input interface when the application being executed requires digital input.
In operation S2430, the wearable device 100 may display the virtual input interface to overlap the input region.
At this time, wearable device 100 may display the virtual input interface in the form of AR, MR, or VR.
For example, when wearable device 100 displays a virtual input interface in the form of an AR or MR, wearable device 100 may display the virtual input interface on a transparent display such that the virtual input interface overlaps with an input region (a real-world 2D or 3D space) viewed through the transparent display.
Alternatively, when the virtual input interface is displayed in the form of VR, the wearable device 100 may capture a first image (real image) including an input region (2D or 3D space of the real world), and generate a second image by adding the virtual input interface (virtual image) to the input region of the first image. The wearable device 100 may display a second image on the opaque display, wherein in the second image the virtual input interface overlaps the input region.
In operation S2440, the wearable device 100 may acquire a first depth value of an input area and a second depth value of an input tool touching the virtual input interface.
The wearable device 100 may measure a distance from the wearable device 100 to the input area (a depth value of the input area, i.e., a first depth value) by using the depth sensor 112.
For example, when the input area is set in the air, the wearable device 100 may obtain the first depth value of the input area by measuring a depth value of an input tool that sets the input area in the air.
Meanwhile, if the input area is on an uneven surface and the input area does not exist on the same plane, there may be a plurality of depth values of the input area. When there are a plurality of depth values of the input area, the first depth value may be one of an average depth value of the plurality of depth values, a minimum depth value of the plurality of depth values, or a maximum depth value of the plurality of depth values, but is not limited thereto.
In addition, the wearable device 100 may measure a distance from the wearable device 100 to an input tool touching the virtual input interface (a depth value of the input tool, i.e., a second depth value) by using the depth sensor 112.
When the input tool is a 3D object, there may be multiple depth values of the input tool. When there are a plurality of depth values of the input tool, the second depth value may be one of an average depth value of the plurality of depth values, a minimum depth value of the plurality of depth values, or a maximum depth value of the plurality of depth values, but is not limited thereto.
For example, when the virtual input interface is touched by using the input tool, the depth value of the point at which the input tool and the virtual input interface contact each other (the end point of the input tool) may be used as the second depth value.
In addition, the wearable device 100 may track the input tool that is moving in real-time by using the depth sensor 112 and calculate a second depth value that changes in real-time.
In operation S2450, the wearable device 100 may compare the first depth value and the second depth value.
For example, the wearable device 100 may determine whether the second depth value is greater than the first depth value, and when the second depth value is greater than the first depth value, the wearable device 100 may determine, in operation S2460, that an input is generated through the virtual input interface.
However, when it is determined that the second depth value is less than the first depth value, the wearable device 100 may determine that no input is generated through the virtual input interface in operation S2470.
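Operations S2450 to S2470 for an input region set in the air reduce to a single comparison; the sketch below assumes depth values in centimeters and illustrative sample readings.

```python
# A minimal sketch: for an input region set in the air, an input is recognized
# only when the input tool passes through the plane of the virtual interface,
# i.e. when the second depth value exceeds the first depth value.

def is_input_generated_in_air(first_depth_cm: float, second_depth_cm: float) -> bool:
    # first_depth_cm: distance from the device to the input area (virtual plane)
    # second_depth_cm: distance from the device to the tip of the input tool
    return second_depth_cm > first_depth_cm

print(is_input_generated_in_air(30.0, 28.5))  # False - tool still in front of the plane
print(is_input_generated_in_air(30.0, 30.7))  # True  - tool crossed the plane
```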
Now, the determination of whether an input is generated will be described in detail with reference to fig. 25a and 25 b.
Fig. 25a and 25b are diagrams describing a method of determining whether an input is generated through a virtual input interface when an input area is set in the air.
Referring to fig. 25a and 25b, the wearable device 100 may display a virtual keyboard 2510 on a transparent or opaque display such that the virtual keyboard 2510 overlaps with an input region provided in the air.
The wearable device 100 may also measure a first depth value of the virtual keyboard 2510 by using the depth sensor 112.
Meanwhile, even when the user wearing the wearable device 100 moves, the wearable device 100 can display the virtual keyboard 2510 on the transparent or opaque display such that the virtual keyboard 2510 always overlaps with the input region having the first depth value. For example, even when the user is walking, wearable device 100 may adjust virtual keyboard 2510 to be continuously displayed in an area at a distance (first depth value) from wearable device 100 by using depth sensor 112.
Additionally, referring to fig. 25a and 25b, the user may input data by touching virtual keyboard 2510 in the air using finger 2520.
Here, the wearable device 100 may determine whether an input is generated through the virtual keyboard 2510 by measuring a depth value (second depth value) of the finger 2520 touching the virtual keyboard 2510.
For example, as shown in fig. 25a, finger 2520 may approach virtual keyboard 2510 to select a button displayed on virtual keyboard 2510. At this time, when the finger 2520 does not pass through the input area where the virtual keyboard 2510 is displayed, the second depth value of the finger 2520 is smaller than the first depth value.
When the second depth value of the finger 2520 is less than the first depth value, wearable device 100 may recognize that the user is not touching virtual keyboard 2510 and determine that no input was generated by virtual keyboard 2510.
On the other hand, as shown in FIG. 25b, when the finger 2520 crosses the input area where the virtual keyboard 2510 is displayed, the second depth value of the finger 2520 may be greater than the first depth value.
When the second depth value of the finger 2520 is greater than the first depth value, wearable device 100 may identify that the user is touching virtual keyboard 2510.
When it is determined that the user is touching virtual keyboard 2510, wearable device 100 may detect the location of finger 2520 on virtual keyboard 2510 by using image sensor 111. Wearable device 100 may determine input data for the user based on the detected position of finger 2520. For example, wearable device 100 may determine that the user selected the "enter" button while finger 2520 is passing through the "enter" button on virtual keyboard 2510.
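Once a touch is recognized, mapping the detected fingertip position to a key of the virtual keyboard could be done roughly as follows; the grid layout, key sizes, and coordinates are assumptions for illustration.

```python
# A minimal sketch of determining which key the fingertip is over, assuming the
# virtual keyboard is a uniform grid anchored at a known origin.

def key_at_position(x, y, origin, key_width, key_height, layout):
    """layout is a list of rows; each row is a list of key labels."""
    col = int((x - origin[0]) // key_width)
    row = int((y - origin[1]) // key_height)
    if 0 <= row < len(layout) and 0 <= col < len(layout[row]):
        return layout[row][col]
    return None  # fingertip outside the displayed keyboard

layout = [["q", "w", "e"], ["a", "s", "d"], ["enter", "space", "back"]]
print(key_at_position(15, 25, origin=(0, 0), key_width=20, key_height=10,
                      layout=layout))  # 'enter'
```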
According to an example embodiment, the wearable device 100 may accurately determine whether an input is generated through an over-the-air set virtual input interface by comparing a first depth value of an over-the-air set input area with a second depth value of an input tool (e.g., a finger or a pen) touching the virtual input interface.
Fig. 26 is a flowchart illustrating a method of providing a virtual input interface in an input area set on the air or a real object according to an exemplary embodiment.
Referring to fig. 26, in operation S2610, the wearable device 100 may set an input region in the air or on an actual object. For example, as described above with reference to fig. 3, the wearable device 100 may recognize a diagram drawn by a user in the air or on an actual object (such as a palm, a table, or a wall) using an input tool (such as a finger, a pen, a stylus, or a joystick), and set an area corresponding to the diagram as an input area.
Alternatively, as described above with reference to fig. 4, the wearable device 100 may recognize the preset object and set an area corresponding to the preset object as the input region.
Alternatively, as described above with reference to fig. 5, the wearable device 100 may recognize an operation in which the user touches the preset object using the input tool, and set an area corresponding to the touched preset object as the input region.
In operation S2620, the wearable device 100 may determine a virtual input interface.
For example, wearable device 100 may determine a virtual input interface based on the properties of the input region. The wearable device 100 may determine a virtual input interface to be displayed on the optical display 121 based on at least one of a size of the input area, a shape of the input area, a distance between the input area and the wearable device 100 (a first depth value of the input area), a type of an actual object provided with the input area setting, and a gesture to set the input area.
Alternatively, wearable device 100 may determine the virtual input interface based on the type of application or content being executed. For example, when an application being executed requires text input, wearable device 100 may determine a virtual keyboard, such as a QWERTY keyboard or a mobile terminal keyboard, as the virtual input interface. Alternatively, wearable device 100 may determine a virtual dial as the virtual input interface when the application being executed requires digital input.
In operation S2630, the wearable device 100 may display the virtual input interface to overlap with the input region.
At this time, wearable device 100 may display the virtual input interface in the form of AR, MR, or VR.
For example, when wearable device 100 displays a virtual input interface in the form of an AR or MR, wearable device 100 may display the virtual input interface on the transparent display such that the virtual input interface overlaps the input region.
Alternatively, when wearable device 100 displays the virtual input interface in the form of a VR, wearable device 100 may display the virtual input interface on an opaque display such that the virtual input interface overlaps the input region.
Since operation S2630 of fig. 26 is the same as operation S2430 of fig. 24, the details thereof are not repeated.
In operation S2640, the wearable device 100 may acquire a first depth value of the input area and a second depth value of the input tool touching the virtual input interface.
For example, when the input area is set in the air, the wearable device 100 may acquire a first depth value of the input area by measuring a depth value of the input tool while the input area is set in the air.
Alternatively, when the input area is set on the real object, the wearable device 100 may acquire the first depth value of the input area by measuring the depth value of the real object (the distance from the wearable device 100 to the real object).
In addition, the wearable device 100 may measure a distance from the wearable device 100 to an input tool touching the virtual input interface (a depth value of the input tool, i.e., a second depth value) by using the depth sensor 112.
In addition, the wearable device 100 may track the input tool that is moving in real-time and calculate the second depth value in real-time by using the depth sensor 112.
In operation S2650, the wearable device 100 may compare a difference between the first depth value and the second depth value to a threshold.
For example, in operation S2660, the wearable device 100 may determine whether the difference is less than a threshold, and determine that an input is generated through the virtual input interface when it is determined that the difference is less than the threshold.
In operation S2670, when it is determined that the difference is equal to or greater than the threshold, the wearable device 100 may determine that no input is generated through the virtual input interface.
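Operations S2650 to S2670 for an input region set on a real object reduce to a threshold test on the depth difference; the 1 cm threshold and the sample readings in the sketch below are assumptions.

```python
# A minimal sketch: for an input region set on a real object, an input is
# recognized when the input tool is close enough to the object, i.e. when the
# depth difference falls below a threshold.

TOUCH_THRESHOLD_CM = 1.0  # assumed threshold

def is_input_generated_on_object(first_depth_cm: float, second_depth_cm: float,
                                 threshold_cm: float = TOUCH_THRESHOLD_CM) -> bool:
    return abs(first_depth_cm - second_depth_cm) < threshold_cm

print(is_input_generated_on_object(40.0, 45.0))  # False - finger away from the palm
print(is_input_generated_on_object(40.0, 40.4))  # True  - finger touching the palm
```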
Now, the determination of whether an input is generated will be described in detail with reference to fig. 27a and 27 b.
Fig. 27a and 27b are diagrams describing a method of determining whether an input is generated through a virtual input interface when an input area is set on an actual object.
Referring to fig. 27a and 27b, wearable device 100 may display virtual keyboard 2730 on optical display 121 such that virtual keyboard 2730 overlaps with an actual object, such as palm 2710, viewed through optical display 121.
In addition, the wearable device 100 may measure a first depth value of the palm 2710 by using the depth sensor 112.
Meanwhile, the wearable device 100 may track the palm 2710 in real time even if the position of the palm 2710 changes, and may continuously recalculate the changing first depth value in real time so as to adjust the virtual keyboard 2730 such that the virtual keyboard 2730 continues to overlap the palm 2710 viewed through the optical display 121.
In addition, referring to fig. 27b, the user may input data by touching a virtual keyboard 2730 shown on the palm 2710 with a finger 2720.
At this time, the wearable device 100 may measure a depth value (second depth value) of the finger 2720 touching the virtual keyboard 2730 to determine whether an input is generated through the virtual keyboard 2730.
As shown in fig. 27a, when finger 2720 is at least a distance away from palm 2710, wearable device 100 may determine that no input was generated through virtual keyboard 2730.
For example, when a difference between the first depth value of the palm 2710 on which the virtual keyboard 2730 is displayed and the second depth value of the finger 2720 is equal to or greater than a threshold value, it may be determined that the user is not touching the virtual keyboard 2730, and it may be determined that no input is generated through the virtual keyboard 2730.
As shown in fig. 27b, the user may bring a finger 2720 close to virtual keyboard 2730 to select a button displayed on virtual keyboard 2730. Here, when the difference between the first depth value and the second depth value is less than the threshold value, it may be determined that the user is touching the virtual keyboard 2730.
In addition, when the difference between the first depth value and the second depth value is less than the threshold, wearable device 100 may detect the location of finger 2720 on virtual keyboard 2730 by using image sensor 111. The wearable device 100 may determine input data based on the location of the finger 2720. For example, when finger 2720 passes through an "enter" button on virtual keyboard 2730, wearable device 100 may determine that the user selected the "enter" button.
According to an example embodiment, the wearable device 100 may accurately determine whether an input is generated through a virtual input interface set in the air or on a real object by comparing a first depth value of an input area set in the air or on the real object by a user with a second depth value of an input tool (e.g., a finger or a pen) touching the virtual input interface.
Fig. 28a and 28b are diagrams for describing a method of acquiring a first depth value of an input area and a second depth value of an input tool according to an exemplary embodiment.
As shown in fig. 28a and 28b, it is assumed that when keyboard input is required, a virtual keyboard is displayed by using the palm of the user's hand as an input area.
Referring to fig. 28a, a user may set an input region on a left palm 2820 while wearing a glasses type wearable device (first wearable device) 100, and may be wearing a second wearable device 2810 on a left wrist. Here, the second wearable device 2810 can be worn on a user's wrist (like a watch, bracelet, or band), but is not limited thereto.
The second wearable device 2810 may include a location sensor, and location information of the second wearable device 2810 may be sensed by using the location sensor. In addition, the first wearable device 100 and the second wearable device 2810 may each include a communicator and transmit and receive data to and from each other, and the second wearable device 2810 may transmit the sensed location information of the second wearable device 2810 to the first wearable device 100.
Meanwhile, the first wearable device 100 may include a position sensor, and position information of the first wearable device 100 may be sensed by using the position sensor.
The first wearable device 100 may compare the sensed location information of the first wearable device 100 with the received location information of the second wearable device 2810 to calculate a distance between the first wearable device 100 and the second wearable device 2810.
The distance between the left wrist wearing the second wearable device 2810 and the first wearable device 100 may be similar to the distance between the left palm 2820 and the first wearable device 100, where the left palm 2820 is provided to display the input area of the virtual keyboard 2840. Thus, the first wearable device 100 may determine a distance between the first wearable device 100 and the second wearable device 2810 as a first depth value.
As such, the first wearable device 100 may accurately acquire the first depth value by using the location information of the second wearable device 2810.
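Assuming both devices report 3D positions in a shared coordinate frame, the first depth value could be derived from the exchanged position information as in the sketch below; the coordinate values and function name are illustrative assumptions.

```python
# A minimal sketch of deriving the first depth value from the position
# information of the first (glasses-type) and second (wrist-worn) devices.

import math

def distance_between_devices(pos_a, pos_b):
    """Euclidean distance between two (x, y, z) positions in centimeters."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pos_a, pos_b)))

first_device_pos = (0.0, 0.0, 0.0)      # glasses-type device (assumed origin)
second_device_pos = (5.0, -20.0, 35.0)  # wrist-worn device (assumed reading)

# The wrist is close to the palm carrying the input area, so this distance is
# used as the first depth value.
first_depth_cm = distance_between_devices(first_device_pos, second_device_pos)
print(round(first_depth_cm, 1))  # 40.6
```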
In addition, the second wearable device 2810 may include a motion sensor, and recognize a touch input by detecting a motion (such as vibration) generated when the left palm 2820 is touched using the motion sensor. When a touch input is recognized, the second wearable device 2810 may transmit data regarding the touch input to the first wearable device 100 through the communicator. Accordingly, the first wearable device 100 can accurately recognize that the touch input is generated by using the sensing information of the second wearable device 2810.
Meanwhile, referring to fig. 28b, the user may set an input region on the left palm 2820 while wearing the glasses type first wearable device 100, and may wear the third wearable device 2850 on the right finger 2830. Here, the third wearable device 2850 can be worn on a finger (like a thimble or a ring), but is not limited thereto.
The third wearable device 2850 may include a position sensor, and position information of the third wearable device 2850 is sensed by using the position sensor.
In addition, the first wearable device 100 and the third wearable device 2850 may transceive data with each other by using the included communicator, and the third wearable device 2850 may transmit the sensed location information of the third wearable device 2850 to the first wearable device 100.
The first wearable device 100 may include a position sensor, and position information of the first wearable device 100 may be sensed by using the position sensor.
The first wearable device 100 may compare the sensed location information of the first wearable device 100 with the received location information of the third wearable device 2850 to calculate a distance between the first wearable device 100 and the third wearable device 2850.
As shown in fig. 28b, when the right finger 2830 wearing the third wearable device 2850 (such as a thimble) is used as an input tool for touching the virtual keyboard 2840, the depth value of the third wearable device 2850 may be the depth value of the right finger 2830, and the distance between the first wearable device 100 and the third wearable device 2850 may be determined to be the second depth value.
As such, the first wearable device 100 may accurately acquire the second depth value using the location information through the third wearable device 2850.
In addition, the third wearable device 2850 may include a pressure sensor, and may recognize a touch input by detecting a pressure generated when the left palm 2820 is touched using the pressure sensor. When a touch input is recognized, the third wearable device 2850 may transmit data regarding the touch input to the first wearable device 100 through the communicator. As such, the first wearable device 100 may accurately recognize whether the touch input is generated by using the sensing information of the third wearable device 2850.
Fig. 29 is a flowchart illustrating a method of providing feedback on whether an input is generated through a virtual input interface according to an exemplary embodiment.
Referring to fig. 29, in operation S2910, the wearable device 100 may set an input region.
When the input area is set, the wearable device 100 may determine a virtual input interface in operation S2920.
For example, wearable device 100 may determine a virtual input interface based on the properties of the input region. The wearable device 100 may determine a virtual input interface to be displayed on the optical display 121 based on at least one of a size of the input area, a shape of the input area, a distance between the input area and the wearable device 100 (a first depth value of the input area), and a gesture to set the input area.
Alternatively, wearable device 100 may determine the virtual input interface based on the type of application or content being executed. For example, when an application being executed requires text input, wearable device 100 may determine a virtual keyboard, such as a QWERTY keyboard or a mobile terminal keyboard, as the virtual input interface. Alternatively, wearable device 100 may determine a virtual dial as the virtual input interface when the application being executed requires digital input.
In operation S2930, the wearable device 100 may display the virtual input interface to overlap the input region.
At this time, wearable device 100 may display the virtual input interface in the form of AR, MR, or VR.
For example, when wearable device 100 displays a virtual input interface in the form of an AR or MR, the virtual input interface may be displayed on the transparent display to overlap with the input area.
Alternatively, when wearable device 100 displays the virtual input interface in the form of a VR, the virtual input interface may be displayed on an opaque display to overlap with the input area.
In operation S2940, the wearable device 100 may acquire a first depth value of the input area and a second depth value of the input tool touching the virtual input interface.
In operation S2950, the wearable device 100 may determine whether an input is generated through the virtual input interface by comparing the first depth value and the second depth value.
Since operations S2930 to S2950 of fig. 29 correspond to operations S230 to S250 of fig. 2, additional details thereof are not repeated.
In operation S2960, when it is determined that an input is generated through the virtual input interface, the wearable device 100 may output a notification signal corresponding to the generated input. Examples of the notification signal include, but are not limited to, a video signal, an audio signal, and a tactile signal.
The output of the notification signal will be described in detail with reference to fig. 30 to 32.
Fig. 30 and 31 are diagrams describing outputting a notification signal corresponding to whether an input is generated by the wearable device according to an exemplary embodiment.
As shown in fig. 30 and 31, the wearable device 100 may recognize a gesture of setting an input region on the palm 3010 and display a virtual keyboard 3030 on the optical display 121 so as to overlap the palm 3010 viewed through the optical display 121.
At this time, the user may generate an input by touching a button displayed on the virtual keyboard 3030 with the finger 3020.
Wearable device 100 may compare the depth value of finger 3020 (second depth value) with the depth value of palm 3010 (first depth value), and determine that an input was generated by finger 3020 when the difference between the first depth value and the second depth value is less than a threshold.
When an input is generated, the wearable device 100 may detect the position of the finger 3020 on the virtual keyboard 3030 and generate input data regarding the button 3040 at the position of the finger 3020. In addition, the wearable device 100 may provide feedback to the user so that the user easily recognizes the input.
For example, the color of the button 3040 may be changed. Alternatively, an alarm may be output when an input is generated through the virtual keyboard 3030.
Alternatively, the wearable device 100 may output a tactile signal by using a peripheral device when an input is generated through the virtual input interface.
As shown in fig. 31, the user may be wearing a second wearable device 3150 on a finger 3020 touching a virtual keyboard 3030. Here, the second wearable device 3150 can be worn on the finger 3020 (such as a thimble or ring), but is not limited thereto as long as the second wearable device 3150 is wearable.
Additionally, the second wearable device 3150 may include a haptic module. The haptic module may generate various haptic effects. Examples of the haptic effects generated by the haptic module include vibration effects. When the haptic module generates vibrations as haptic effects, the intensity and pattern of the vibrations may be changed, and different types of vibrations may be output in combination or sequentially.
When an input is generated on a button displayed on the virtual keyboard 3030, the wearable device 100 may request the second wearable device 3150 to output a haptic signal through the communicator.
Then, in response, the second wearable device 3150 may output a haptic signal through the haptic module.
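The feedback path, combining visual feedback on the optical display with a haptic request sent to the second wearable device, might be organized as in the following sketch; the class and method names are hypothetical stand-ins rather than APIs of the devices.

```python
# A minimal sketch of dispatching feedback when an input is generated: visual
# feedback on the glasses-type device and a haptic request to the finger-worn
# second wearable device, sent through the communicator.

class OpticalDisplayStub:
    def highlight(self, button: str) -> None:
        print(f"highlight button: {button}")  # e.g. change the button color

class SecondWearableStub:
    def request_haptic(self, pattern: str) -> None:
        print(f"haptic request sent: {pattern}")  # via the communicator

def notify_input_generated(button, display, second_wearable):
    display.highlight(button)
    second_wearable.request_haptic("short_vibration")

notify_input_generated("enter", OpticalDisplayStub(), SecondWearableStub())
```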
Fig. 32 is a diagram describing outputting a notification signal corresponding to whether an input is generated through a virtual input interface according to an exemplary embodiment.
As shown in fig. 32, the wearable device 100 may recognize a gesture of a user setting an input region on a table 3210 and display a virtual piano keyboard 3220 on a transparent or opaque display so as to overlap the table 3210.
At this time, the user may generate an input by touching the virtual piano keyboard 3220 with the finger 3230.
The wearable device 100 may compare the depth value (second depth value) of the finger 3230 with the depth value (first depth value) of the table 3210, and determine that an input is generated by the finger 3230 when a difference between the first and second depth values is less than a threshold.
When it is determined that the input is generated, the wearable device 100 may detect the position of the finger 3230 on the virtual piano keyboard 3220 and display the virtual image 3250 on the virtual piano keyboard 3220 at the position of the finger 3230. In this way, the user can easily recognize that an input is generated at the position where the virtual image 3250 is displayed.
Fig. 33 and 34 are block diagrams of the wearable device 100 according to an example embodiment.
As shown in fig. 33, a wearable device 100 according to an example embodiment may include a sensor 110, an optical display 121, and a controller 130. However, not all of the components shown in fig. 33 are necessary. The wearable device 100 may include more or fewer components than those shown in fig. 33.
For example, as shown in fig. 34, the wearable device 100 according to an exemplary embodiment may further include a user input 140, a communicator 150, and a memory 160, as well as the sensor 110, the outputter 120, and the controller 130.
Now, the above components will be described in detail.
The sensor 110 may detect a state of the wearable device 100 or a state around the wearable device 100, and transmit information about the detected state to the controller 130.
The sensors 110 may include an image sensor 111 and a depth sensor 112. The wearable device 100 may acquire an image frame of a still image or a moving image through the image sensor 111. Here, the image captured by the image sensor 111 may be processed by the controller 130 or a separate image processor.
According to an exemplary embodiment, the image sensor 111 may recognize a gesture of setting an input region in the air or on an actual object. For example, the image sensor 111 may recognize a gesture of setting an input region in the air or on an actual object by using an input tool.
Alternatively, the image sensor 111 may recognize a preset object to be set as an input region and recognize a gesture of touching the preset object by using an input tool. Alternatively, the image sensor 111 may capture a first image including the input area.
According to an example embodiment, the depth sensor 112 may acquire a first depth value of the input area and a second depth value of the input tool touching the virtual input interface. For example, the depth sensor 112 may measure a distance from the wearable device 100 to the input area and a distance from the wearable device 100 to the input tool.
Alternatively, when the input area is disposed on the real object, the depth sensor 112 may measure a distance from the wearable device 100 to the real object, and acquire a first depth value of the input area by using the measured distance.
According to an example embodiment, the sensors 110 may include, in addition to at least one of the image sensor 111 and the depth sensor 112, an acceleration sensor 113, a location sensor 114 such as a Global Positioning System (GPS), an atmospheric pressure sensor 115, a temperature/humidity sensor 116, a geomagnetic sensor 117, a gyro sensor 118, and a microphone 119.
The microphone 119 receives an external sound signal and processes the external sound signal into electrical voice data. For example, the microphone 119 may receive an external sound signal from an external device or a person. The microphone 119 may remove noise generated while receiving the external sound signal by using any of various noise removal algorithms.
Since the functions of the acceleration sensor 113, the position sensor 114, the atmospheric pressure sensor 115, the temperature/humidity sensor 116, the geomagnetic sensor 117, and the gyro sensor 118 are intuitively inferred by those skilled in the art, details thereof are not provided herein.
The output 120 may output an audio signal, a video signal, or a vibration signal, and may include an optical display 121, a sound output 122, and a vibration motor 123.
Optical display 121 may display information processed by wearable device 100. For example, the optical display 121 may display a User Interface (UI) or a Graphical User Interface (GUI) related to a phone call in a call mode, and display a virtual input interface in an input mode.
According to an exemplary embodiment, the optical display 121 may be a transparent display or an opaque display. The transparent display is an information display device in which the rear surface of a screen displaying information is transparent. The transparent display includes a transparent device, and the transparency may be adjusted by adjusting light transmittance of the transparent device or adjusting RGB values of respective pixels.
When the optical display 121 forms a touch screen by forming a layer structure with a touch panel, the optical display 121 may be used as an input device as well as an output device. The touch screen may detect a touch gesture of a user on the touch screen and transmit information about the touch gesture to the controller 130. Examples of touch gestures include tap, touch and hold, double tap, drag, pan, flick, drag and drop, and swipe.
The optical display 121 may include at least one of a liquid crystal display, a thin film transistor-liquid crystal display, an organic light emitting diode, a flexible display, a 3D display, and an electrophoretic display. In addition, the wearable device 100 may include at least two optical displays 121 according to the structure of the wearable device 100.
The sound output 122 outputs audio data received from the communicator 150 or audio data stored in the memory 160. In addition, the sound output 122 outputs a sound signal (such as a call signal reception sound or a message reception sound) related to a function performed by the wearable device 100. The sound output 122 may include a speaker or a buzzer.
According to an exemplary embodiment, when an input is generated through the virtual input interface, the sound output 122 may output an audio signal corresponding to the input.
The vibration motor 123 may output a vibration signal. For example, the vibration motor 123 may output a vibration signal corresponding to the output of audio data or video data, such as a call signal reception sound or a message reception sound. In addition, the vibration motor 123 may output a vibration signal when an input is generated through the virtual input interface.
The controller 130 generally controls the overall operation of the wearable device 100. For example, the controller 130 may execute programs stored in the memory 160 to control the sensor 110, the outputter 120, the user input 140, the communicator 150, and the memory 160.
The controller 130 may set the input region based on the gesture recognized through the image sensor 111. For example, when the image sensor 111 recognizes a gesture in which a figure is drawn in the air or on an actual object, the controller 130 may set an area corresponding to the figure as an input region.
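One plausible way to turn such a figure-drawing gesture into a rectangular input region is to take the bounding box of the tracked fingertip positions, as in the sketch below; the point format and function name are invented for illustration.

```python
from typing import List, Tuple

Point = Tuple[int, int]  # (x, y) pixel coordinates in the captured image

def input_region_from_gesture(points: List[Point]) -> Tuple[int, int, int, int]:
    """Approximate the area enclosed by a drawn figure with its bounding box,
    returned as (x, y, width, height)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return x_min, y_min, x_max - x_min, y_max - y_min

# Example: fingertip positions sampled while the user draws a rectangle in the air
trace = [(120, 80), (320, 82), (322, 210), (118, 208), (120, 80)]
print(input_region_from_gesture(trace))  # (118, 80, 204, 130)
```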
The controller 130 may determine a virtual input interface to be displayed on the optical display 121 based on the properties of the input area.
The controller 130 may determine a type of the virtual input interface based on the first depth value of the input area and display the virtual input interface on the optical display 121 to overlap the input area.
The controller 130 may determine the type of the virtual input interface based on the type of the actual object provided with the input area and display the virtual input interface on the optical display 121 so as to overlap the input area.
The controller 130 may determine the type of the virtual input interface based on the type of the gesture setting the input region or the size of the input region and display the virtual input interface on the optical display 121 to overlap the input region.
The controller 130 may determine a virtual input interface based on the type of application being executed by the wearable device 100 and display the virtual input interface on the optical display 121 such that it overlaps the input area.
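The selection rules in the preceding paragraphs (depth of the input area, type of real object, size of the region, and type of running application) could be combined in a simple decision function along the lines of the sketch below; the thresholds and interface labels are assumptions chosen only to make the example concrete.

```python
def choose_virtual_interface(first_depth_mm: float,
                             object_type: str,
                             region_width_px: int,
                             app_type: str) -> str:
    """Pick a virtual input interface type from the attributes of the input
    area. All thresholds and labels are illustrative, not values from the patent."""
    if app_type == "memo":
        # A note-taking application prefers a handwriting panel.
        return "handwriting_panel"
    if first_depth_mm > 700:
        # A far-away input area is easier to use with large, coarse controls.
        return "large_button_pad"
    if object_type == "palm" or region_width_px < 150:
        # Small surfaces only have room for a compact keypad.
        return "numeric_keypad"
    return "qwerty_keyboard"

# e.g. choose_virtual_interface(450.0, "desk", 400, "messenger") -> "qwerty_keyboard"
```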
The controller 130 may display the virtual input interface on the transparent display such that the virtual input interface is displayed on the input area viewed through the transparent display.
The controller 130 may generate a second image in which the virtual input interface overlaps with the input area included in the first image, and display the second image including the virtual input interface on the optical display 121.
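A compositing step of that kind, in which the rendered virtual input interface is blended over the input area of the captured first image to produce the second image, might look roughly like the following sketch; the alpha blending and the assumption that the interface is pre-rendered at the region's size are illustrative choices, not the patented method.

```python
import numpy as np

def compose_second_image(first_image: np.ndarray,
                         interface_image: np.ndarray,
                         region: tuple,
                         alpha: float = 0.7) -> np.ndarray:
    """Blend a rendered virtual input interface onto the input region of the
    captured first image and return the composited second image.

    first_image:     HxWx3 camera frame captured by the image sensor
    interface_image: hxwx3 rendering of the virtual input interface, assumed
                     to be rendered at exactly the size of the region
    region:          (x, y, w, h) user input area inside first_image
    alpha:           opacity of the virtual input interface overlay
    """
    x, y, w, h = region
    second = first_image.copy()
    patch = second[y:y + h, x:x + w].astype(np.float32)
    overlay = interface_image.astype(np.float32)
    blended = alpha * overlay + (1.0 - alpha) * patch
    second[y:y + h, x:x + w] = blended.astype(first_image.dtype)
    return second

# Example with synthetic data: a gray frame and a white "keyboard" overlay.
frame = np.full((480, 640, 3), 128, dtype=np.uint8)
keyboard = np.full((100, 200, 3), 255, dtype=np.uint8)
second_image = compose_second_image(frame, keyboard, (220, 300, 200, 100))
print(second_image[350, 320])  # pixels inside the region are brightened
```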
The controller 130 may determine whether an input is generated through the virtual input interface based on a result of comparing the first depth value and the second depth value. For example, the controller 130 may determine that an input is generated through the virtual input interface when a difference between the first depth value and the second depth value is within a threshold.
When the second depth value is greater than the first depth value, the controller 130 may determine that the input is generated through the virtual input interface.
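Taken together, the two comparison rules above amount to a touch-detection check such as the sketch below, where the threshold is a tunable parameter rather than a value specified by the embodiment.

```python
def is_input_generated(first_depth: float,
                       second_depth: float,
                       threshold: float = 20.0) -> bool:
    """Decide whether the input tool has touched the virtual input interface.

    first_depth:  distance from the wearable device to the input area
    second_depth: distance from the wearable device to the input tool
    threshold:    maximum allowed difference (same unit as the depths) for the
                  tool to count as touching; 20.0 is an arbitrary example value.
    """
    # Rule 1: the tool is close enough to the input area.
    if abs(first_depth - second_depth) <= threshold:
        return True
    # Rule 2: the tool has pushed past the input area, so its depth value
    # exceeds that of the input area.
    if second_depth > first_depth:
        return True
    return False

# Example: input area at 480 mm, fingertip measured at 492 mm -> touch detected
print(is_input_generated(480.0, 492.0))  # True
```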
The controller 130 may control the outputter 120 to output a notification signal corresponding to the generation of the input.
The user enters data for controlling the wearable device 100 via the user input 140. For example, the user input 140 may be a keypad, a dome switch, a touch pad (of a capacitive type, a resistive type, an infrared beam type, a surface acoustic wave type, an integral strain gauge type, or a piezoelectric type), a jog wheel, or a jog switch, but is not limited thereto. According to an exemplary embodiment, the user input 140 may include a virtual input interface.
Communicator 150 may include at least one component that enables wearable device 100 to communicate with an external device or server. For example, the communicator 150 may include a local area network communicator 151, a mobile communicator 152, and a broadcast receiver 153.
The local area network communicator 151 may be a Bluetooth communicator, a near field communication/radio frequency identification (NFC/RFID) unit, a wireless local area network (Wi-Fi) communicator, a Zigbee communicator, an infrared data association (IrDA) communicator, an ultra wideband (UWB) communicator, or an Ant+ communicator, but is not limited thereto.
For example, the local area network communicator 151 may receive location information of a second wearable device or a third wearable device.
The mobile communicator 152 transmits and receives wireless signals to and from at least one of a base station, an external terminal, and a server over a mobile communication network. Here, the wireless signal may include various types of data according to transmission and reception of a voice call signal, an image call signal, or a text/multimedia message.
The broadcast receiver 153 receives broadcast signals and/or broadcast-related information from an external source through a broadcast channel. The broadcast channel may be a satellite channel or a terrestrial channel. According to an example embodiment, the wearable device 100 may not include the broadcast receiver 153.
The memory 160 may store a program for processing and controlling the controller 130, and may store input/output data (such as gesture information corresponding to an input mode, a virtual input interface, data input through the virtual input interface, sensing information measured by a sensor, and contents).
The memory 160 may include at least one of a flash memory, a hard disk, a micro multimedia card, a card-type memory such as a Secure Digital (SD) or eXtreme Digital (XD) memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, and an optical disk. In addition, the wearable device 100 may operate web storage or a cloud server on the Internet that performs the storage function of the memory 160.
The programs stored in the memory 160 may be classified into a plurality of modules (such as a UI module 161 and a notification module 162) based on functions.
UI module 161 may provide a dedicated UI or GUI that interacts with wearable device 100 depending on the application. In addition, according to an exemplary embodiment, the UI module 161 may select and provide a virtual input interface based on a situation.
Notification module 162 may generate a signal for notifying the generation of an event in wearable device 100. Examples of the event generated in the wearable device 100 may include call signal reception, message reception, input of a key signal through a virtual input interface, and schedule notification. The notification module 162 may output a notification signal in the form of a video signal through the optical display 121, an audio signal through the sound output 122, or a vibration signal through the vibration motor 123. Alternatively, notification module 162 may output the haptic signal by using an external wearable device (such as a ring, thimble, bracelet, or glove).
The above-described methods may be implemented as computer programs that can be executed by various computers and recorded on a computer-readable recording medium. The computer-readable recording medium may include at least one of a program command, a data file, and a data structure. The program commands recorded on the computer-readable recording medium may be specially designed or well known to those skilled in the field of computer software. Examples of the computer-readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, optical data storage devices, and the like. Examples of program commands include machine code produced by a compiler and high-level language code that can be executed by a computer using an interpreter.
As described above, according to one or more exemplary embodiments, the wearable device 100 may accurately determine whether an input is generated through the virtual input interface by comparing a depth value of an input tool touching the virtual input interface with a reference depth value defined by a user.
Although one or more exemplary embodiments have been described with reference to the accompanying drawings, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope defined by the following claims.

Claims (19)

1. An eyeglass-type wearable device, comprising:
an image sensor;
a display; and
a controller configured to:
controlling an image sensor to capture one or more images to sense a gesture of a user drawing a line on a physical object by using an input tool;
determining a user input area based on a line drawn by using the input tool;
identifying a type of the physical object via image processing;
controlling the display to display a virtual input interface based on the user input area and the type of the physical object,
wherein the gesture sensed from the one or more images corresponds to a graphic formed by the line, and the virtual input interface is displayed to correspond to the graphic.
2. The eyewear-type wearable device of claim 1, wherein the virtual input interface is displayed to correspond to a size of the user input area.
3. The eyewear-type wearable device of claim 1, wherein the virtual input interface is determined based on a type of application being executed by the eyewear-type wearable device.
4. The eyewear-type wearable device of claim 1, wherein the display comprises a transparent display, wherein the transparent display is configured to provide a virtual input interface on an area of the transparent display corresponding to a user input region viewed through the transparent display.
5. The eyewear-type wearable device of claim 1, wherein the image sensor is configured to capture a first image of a user input region, the display configured to display a second image of a virtual input interface over the user input region of the first image.
6. The eyeglass-type wearable device of claim 1, further comprising:
a depth sensor configured to sense a first depth value corresponding to a distance from the glasses-type wearable device to the user input area and a second depth value corresponding to a distance from the glasses-type wearable device to the input tool,
wherein the controller is further configured to determine whether an input is generated through the virtual input interface based on the first and second depth values.
7. The glasses-type wearable device of claim 6, wherein a display size of a virtual input interface is determined based on the first depth value.
8. The glasses-type wearable device of claim 6, wherein the controller is configured to determine that an input is generated through the virtual input interface when a difference between the first depth value and the second depth value is less than a threshold.
9. The glasses-type wearable device of claim 6, wherein the controller is configured to determine that an input is generated through the virtual input interface when the second depth value is greater than the first depth value.
10. The eyeglass-type wearable device of claim 1, wherein the controller is further configured to:
comparing the distance from the eyewear-type wearable device to the user input area to a threshold,
based on the distance being less than the threshold, control the display to provide a first type of virtual input interface on the display in the determined user input area, and
based on the distance being greater than or equal to the threshold, control the display to provide a second type of virtual input interface on the display in the determined user input area.
11. The eyeglass-type wearable device of claim 1, wherein the controller is further configured to: a type of input tool is determined, and the display is controlled to provide a virtual input interface based on the user input area, the type of physical object, and the type of input tool.
12. A method of providing a virtual input interface by a glasses-type wearable device, the method comprising:
capturing one or more images to sense a gesture of a user drawing a line on a physical object by using an input tool;
determining a user input area based on a line drawn by using the input tool;
identifying a type of the physical object via image processing;
displaying a virtual input interface on a display of the glasses-type wearable device based on the user input area and the type of the physical object,
wherein the step of determining the user input area comprises:
identifying a pattern formed by the lines; and
determining an area corresponding to the graphic as a user input area.
13. The method of claim 12, wherein the virtual input interface is determined based on a size of the user input area.
14. The method of claim 12, wherein the virtual input interface is determined based on a type of application being executed by the eyewear-type wearable device.
15. The method of claim 12, wherein providing a virtual input interface comprises:
capturing a first image of a user input area by using an image sensor;
generating a second image of the virtual input interface; and
displaying the second image of a virtual input interface over a user input area of the first image.
16. The method of claim 12, further comprising:
obtaining a first depth value corresponding to a distance from the glasses-type wearable device to the user input area and a second depth value corresponding to a distance from the glasses-type wearable device to the input tool;
determining whether an input is generated through a virtual input interface based on the first depth value and the second depth value.
17. The method of claim 16, wherein a display size of a virtual input interface is determined based on a size of the first depth value.
18. The method of claim 16, wherein determining whether input has been generated through the virtual input interface comprises: determining that an input is generated through a virtual input interface when a difference between the first depth value and the second depth value is less than a threshold.
19. The method of claim 16, wherein determining whether input has been generated through the virtual input interface comprises: determining that an input is generated through a virtual input interface when the second depth value is greater than the first depth value.
CN201910757959.2A 2014-03-21 2015-03-17 Method and wearable device for providing virtual input interface Active CN110488974B (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
KR20140033705 2014-03-21
KR10-2014-0033705 2014-03-21
KR1020140098653A KR20150110257A (en) 2014-03-21 2014-07-31 Method and wearable device for providing a virtual input interface
KR10-2014-0098653 2014-07-31
KR10-2014-0179354 2014-12-12
KR1020140179354A KR102360176B1 (en) 2014-03-21 2014-12-12 Method and wearable device for providing a virtual input interface
CN201580001071.6A CN105339870B (en) 2014-03-21 2015-03-17 For providing the method and wearable device of virtual input interface

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201580001071.6A Division CN105339870B (en) 2014-03-21 2015-03-17 For providing the method and wearable device of virtual input interface

Publications (2)

Publication Number Publication Date
CN110488974A CN110488974A (en) 2019-11-22
CN110488974B true CN110488974B (en) 2021-08-31

Family

ID=54341451

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201580001071.6A Active CN105339870B (en) 2014-03-21 2015-03-17 For providing the method and wearable device of virtual input interface
CN201910757959.2A Active CN110488974B (en) 2014-03-21 2015-03-17 Method and wearable device for providing virtual input interface

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201580001071.6A Active CN105339870B (en) 2014-03-21 2015-03-17 For providing the method and wearable device of virtual input interface

Country Status (2)

Country Link
KR (1) KR20150110257A (en)
CN (2) CN105339870B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20230113663A (en) * 2016-05-20 2023-07-31 매직 립, 인코포레이티드 Contextual awareness of user interface menus
DE102016211494B4 (en) 2016-06-27 2020-10-01 Ford Global Technologies, Llc Control device for a motor vehicle
DE102016211495A1 (en) 2016-06-27 2017-12-28 Ford Global Technologies, Llc Control device for a motor vehicle
CN106331806B (en) * 2016-08-23 2019-11-19 青岛海信电器股份有限公司 A kind of implementation method and equipment of virtual remote controller
US10147243B2 (en) * 2016-12-05 2018-12-04 Google Llc Generating virtual notation surfaces with gestures in an augmented and/or virtual reality environment
KR20180080012A (en) * 2017-01-03 2018-07-11 주식회사 한국스포츠예술차세대플랫폼 The apparatus and method of musical contents creation and sharing system using social network service
CN106781841A (en) * 2017-01-20 2017-05-31 东莞市触梦网络科技有限公司 A kind of AR religion picture devices and its religion picture system
JPWO2018146922A1 (en) * 2017-02-13 2019-11-21 ソニー株式会社 Information processing apparatus, information processing method, and program
CN106951153B (en) 2017-02-21 2020-11-20 联想(北京)有限公司 Display method and electronic equipment
AU2018256365A1 (en) 2017-04-19 2019-10-31 Magic Leap, Inc. Multimodal task execution and text editing for a wearable system
CN108932100A (en) * 2017-05-26 2018-12-04 成都理想境界科技有限公司 A kind of operating method and head-mounted display apparatus of dummy keyboard
CN108700957B (en) * 2017-06-30 2021-11-05 广东虚拟现实科技有限公司 Electronic system and method for text entry in a virtual environment
CN107300975A (en) * 2017-07-13 2017-10-27 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN107562205B (en) * 2017-09-15 2021-08-13 上海展扬通信技术有限公司 Projection keyboard of intelligent terminal and operation method of projection keyboard
US10902250B2 (en) * 2018-12-21 2021-01-26 Microsoft Technology Licensing, Llc Mode-changeable augmented reality interface
DE102020121415B3 (en) 2020-08-14 2021-12-02 Bayerische Motoren Werke Aktiengesellschaft Projection system for generating a graphical user interface, graphical user interface and method for operating a projection system
KR102286018B1 (en) * 2020-09-09 2021-08-05 주식회사 피앤씨솔루션 Wearable augmented reality device that inputs mouse events using hand gesture and method of mouse event input for wearable augmented reality device using hand gesture
CN112256121A (en) * 2020-09-10 2021-01-22 苏宁智能终端有限公司 Implementation method and device based on AR (augmented reality) technology input method
CN112716117B (en) * 2020-12-28 2023-07-14 维沃移动通信有限公司 Intelligent bracelet and control method thereof
CN116974435A (en) * 2022-04-24 2023-10-31 中兴通讯股份有限公司 Operation interface generation method, control method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012124844A1 (en) * 2011-03-16 2012-09-20 Lg Electronics Inc. Method and electronic device for gesture-based key input
US20130016070A1 (en) * 2011-07-12 2013-01-17 Google Inc. Methods and Systems for a Virtual Input Device
CN103019377A (en) * 2012-12-04 2013-04-03 天津大学 Head-mounted visual display equipment-based input method and device

Also Published As

Publication number Publication date
CN110488974A (en) 2019-11-22
KR20150110257A (en) 2015-10-02
CN105339870A (en) 2016-02-17
CN105339870B (en) 2019-09-03

Similar Documents

Publication Publication Date Title
CN110488974B (en) Method and wearable device for providing virtual input interface
US10534442B2 (en) Method and wearable device for providing a virtual input interface
US11093045B2 (en) Systems and methods to augment user interaction with the environment outside of a vehicle
US11995774B2 (en) Augmented reality experiences using speech and text captions
KR102360176B1 (en) Method and wearable device for providing a virtual input interface
US10120454B2 (en) Gesture recognition control device
US10495878B2 (en) Mobile terminal and controlling method thereof
US9898865B2 (en) System and method for spawning drawing surfaces
US10776618B2 (en) Mobile terminal and control method therefor
US20180218545A1 (en) Virtual content scaling with a hardware controller
US20150379770A1 (en) Digital action in response to object interaction
US20170277259A1 (en) Eye tracking via transparent near eye lens
KR102499354B1 (en) Electronic apparatus for providing second content associated with first content displayed through display according to motion of external object, and operating method thereof
WO2016032892A1 (en) Navigating augmented reality content with a watch
JP2017146651A (en) Image processing method and image processing program
US11360550B2 (en) IMU for touch detection
KR20180097031A (en) Augmented reality system including portable terminal device and projection device
KR20240028897A (en) METHOD AND APPARATUS FOR DISPLAYING VIRTUAL KEYBOARD ON HMD(head mounted display) DEVICE
WO2024049463A1 (en) Virtual keyboard
KR20220151328A (en) Electronic device and control method of the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant