GB2494907A - A Head-mountable display with gesture recognition - Google Patents

A Head-mountable display with gesture recognition

Info

Publication number
GB2494907A
Authority
GB
United Kingdom
Prior art keywords
text
display
user
video signal
observer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB201116467A
Other versions
GB201116467D0 (en)
Inventor
Stefan Lodeweyckx
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Europe BV United Kingdom Branch
Sony Corp
Original Assignee
Sony Europe Ltd
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Europe Ltd, Sony Corp filed Critical Sony Europe Ltd
Priority to GB201116467A priority Critical patent/GB2494907A/en
Publication of GB201116467D0 publication Critical patent/GB201116467D0/en
Publication of GB2494907A publication Critical patent/GB2494907A/en
Withdrawn legal-status Critical Current


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/002Specific input/output arrangements not covered by G06F3/01 - G06F3/16
    • G06F3/005Input arrangements through a video camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A head-mountable display system 10 includes a frame 30 to be mounted onto an observer's head 20, the frame defining one or two eye display positions which, in use, are positioned in front of a respective eye of the observer; a display element 60 mounted with respect to each of the eye display positions, the display element providing a virtual image of a video display of a video signal from a video signal source to that eye of the observer without entirely obscuring the observer's view of the environment in line with the virtual image of the video display. One or more cameras 80 are directed away from the observer towards the position of the virtual image of the video display. There is provided a gesture detector for detecting gestures by the observer in the field of view of the one or more cameras; a processor for detecting the image position with respect to an image displayed on the virtual image of the video display which is coincident, from the observer's point of view, with the detected gesture; and a controller for controlling the video signal source to vary the video signal in dependence upon a detected gesture and the detected image position coincident with the detected gesture. Additional cameras may be used to track eye movement and/or identify a user. The gesture may be used to control video replay or a gaming or processing machine.

Description

HEAD-MOUNTABLE DISPLAY
This invention relates to head-mountable displays.
A head-mountable display (HMD) is an image or video display device which may be worn on the head or as part of a helmet. Either one eye or both eyes are provided with small electronic display devices.
Some HMDs allow the user only to see the displayed images, which is to say that they obscure the real world environment surrounding the user. This type of HMD can position the actual display devices in front of the user's eyes, in association with appropriate lenses which place a virtual displayed image at a suitable distance for the user to focus in a relaxed manner.
It is of course possible to display a real-world view on this type of HMD, for example by using a forward-facing camera to generate images for display on the display devices.
Other HMDs, however, allow a displayed image to be superimposed on a real-world view. This type of HMD can be referred to as an optical see-through HMD and generally requires the display devices to be positioned somewhere other than directly in front of the user's eyes. Some way of deflecting the displayed image so that the user may see it is then required.
This might be through the use of a partially reflective mirror placed in front of the user's eyes so as to allow the user to see through the mirror but also to see a reflection of the output of the display devices. In another arrangement, disclosed in EP-A-1731943 and US-A-2010/0157433, a waveguide arrangement employing total internal reflection is used to convey a displayed image from a display device disposed to the side of the user's head so that the user may see the displayed image but still see a view of the real world through the waveguide. Once again, in either of these types of arrangement, a virtual image of the display is created (using known techniques) so that the user sees the virtual image at an appropriate size and distance to allow relaxed viewing. For example, even though the physical display device may be tiny (for example, 10 mm x 10 mm) and may be just a few millimetres from the user's eye, the virtual image may be arranged so as to be perceived by the user at a distance of (for example) 20 m from the user, having a perceived size of 5 m x 5 m.
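By way of illustration only (this calculation does not form part of the patent text; it simply reuses the example figures above), the relationship between the tiny physical panel and the large, distant virtual image can be checked with a short visual-angle computation:

```python
import math

def angular_size_deg(extent_m: float, distance_m: float) -> float:
    """Visual angle, in degrees, subtended by an object of the given extent at the given distance."""
    return math.degrees(2 * math.atan((extent_m / 2) / distance_m))

# The example virtual image: 5 m x 5 m perceived at 20 m from the user.
virtual_angle = angular_size_deg(5.0, 20.0)            # about 14.3 degrees

# For a bare 10 mm panel to subtend the same angle directly (ignoring the relay optics),
# it would have to sit at this apparent distance from the eye:
display_extent_m = 0.010
equivalent_distance_m = (display_extent_m / 2) / math.tan(math.radians(virtual_angle) / 2)

print(f"virtual image subtends {virtual_angle:.1f} degrees")
print(f"equivalent bare-panel distance: {equivalent_distance_m * 1000:.0f} mm")
```

The HMD optics serve this role: they magnify the panel so that its image subtends the same visual angle as the notional 5 m screen at 20 m, while allowing the eye to focus at a comfortable distance.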
Although the original development of HMDs was perhaps driven by the military and professional applications of these devices, HMDs are becoming more popular for use by casual users in, for example, computer game or domestic computing applications. An HMD can provide some control inputs to such a computer game or computing device, for example by tracking sensors allowing changes in the angle and orientation of the user's head to be detected, so allowing the displayed images to be changed in response to the detected angle and orientation. This can give an improved sense of "immersion" (a sense of deep user involvement) in an activity such as a computer game, because the user's view of the computer game environment can change according to which direction the user is looking. Another control input that can be provided by known HMDs involves eye tracking. In this arrangement, a small camera directed at the user's eye (that is to say, inside the HMD arrangement) can detect the point at which the user is looking, so allowing a computer to change the information displayed by the HMD in a similar way to the use of a computer mouse to point to different areas of a display screen.
Further previously proposed arrangements are disclosed in US 2010/0110368, US 2010/0164990 and WO-A1-2011/044680.
It would be desirable to provide an improved method of controlling an underlying application using a control input from an HMD.
This invention provides a head-mountable display system comprising: a frame to be mounted onto an observer's head, the frame defining one or two eye display positions which, in use, are positioned in front of a respective eye of the observer; a display element mounted with respect to each of the eye display positions, the display element providing a virtual image of a video display of a video signal from a video signal source to that eye of the observer without entirely obscuring the observer's view of the environment in line with the virtual image of the video display; one or more cameras directed away from the observer towards the position of the virtual image of the video display; a gesture detector for detecting gestures by the observer in the field of view of the one or more cameras; a processor for detecting the image position, with respect to an image displayed on the virtual image of the video display, which is coincident, from the observer's point of view, with the detected gesture; and a controller for controlling the video signal source to vary the video signal in dependence upon a detected gesture and the detected image position coincident with the detected gesture.
Embodiments of the invention will now be described with reference to the accompanying drawings, in which: Figure 1 schematically illustrates a head-mountable display (HMD); Figure 2 schematically illustrates the HMD in a schematic plan view; Figure 3 schematically illustrates the positioning of display screens and display devices with respect to a user's eyes; Figures 4 and 5 schematically illustrate techniques for transferring images from a display device to the user's eye via a partially transparent display screen; Figure 6 is a schematic perspective view of an HMD; Figures 7A-7C schematically illustrate displayed images for use during an alignment process; Figure 8 schematically illustrates an electronic alignment arrangement; Figure 9 schematically illustrates a mechanical alignment arrangement; Figure 10 schematically illustrates the generation of a virtual image of a display screen; Figure 11 schematically illustrates the positioning of a user's finger or hand with respect to the virtual image of Figure 10; Figure 12 schematically illustrates an eye-hand separation; Figure 13 schematically illustrates a gesture-controlled text highlighting operation; Figure 14 schematically illustrates a gesture-controlled option selection operation; Figure 15 schematically illustrates a gesture-controlled scrolling operation; Figure 16 schematically illustrates a gesture-controlled page-turning operation; Figure 17 schematically illustrates a gesture-controlled video jog or shuttle control operation; Figure 18 schematically illustrates a gesture control system; and Figure 19 schematically illustrates a gesture detector.
Referring to the accompanying drawings, Figure 1 schematically illustrates a head-mountable display (HMD) 10 mounted on the head of a user 20.
The HMD 10 is based around a frame 30 similar to an eyeglass frame, in that it has ear supports which pass over the ears and a nose bridge 40. The user wears the HMD in a similar manner to wearing a pair of eyeglasses.
The frame is therefore to be mounted onto an observer's head and defines one or two eye display positions which, in use, are positioned in front of a respective eye of the observer.
The actual video display system is symmetrical in this example, which is to say that it is substantially identical (although a mirror image) for each of the user's eyes, though just one eye could be provided with a display in other embodiments. In front of each of the user's eyes 50 (in use) there is a near-transparent display screen 60. For example, the display screen 60 may have a 90% light transmission. The display screen 60 allows the user to see through the display screen to the real environment beyond. However, it also allows video images to be displayed to the user so that, in effect, the video images are superimposed on the real world environment. The way in which the video images are presented in this way will be discussed below, but for now it is sufficient to say that display devices 70, one for each eye, are provided in association with the display screens 60. The display devices 70 cause the required images to be displayed via the display screens 60 to the user. Accordingly, a display element is mounted with respect to each of the eye display positions, the display element providing a virtual image of a video display of a video signal from a video signal source to that eye of the observer without entirely obscuring the observer's view of the environment in line with the virtual image of the video display.
One or more cameras are directed away from the observer towards the position of the virtual image of the video display. In particular, a front-facing camera 80 is provided on a front surface of the frame 30. The camera 80 is arranged so as to capture images in a forward direction with respect to the user's head, so that the captured images represent at least a part of the field of view of the user looking forwards. In embodiments of the invention, captured images from the forward-facing camera are used to allow gestures by the user's hand to be detected and used to provide control information to a device to be controlled.
In alternative embodiments, more than one forward-facing camera (not shown) may be provided. This can give the advantage of acquiring depth (distance from the HMD) information about the gestures. To achieve this, the plural cameras are laterally separated, for example being mounted at or near each lateral end of the HMD.
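As a hedged sketch of how depth could be recovered from two laterally separated cameras (the pinhole-stereo relation below and all numerical values are assumptions, not taken from the patent):

```python
def depth_from_disparity(x_left_px: float, x_right_px: float,
                         focal_length_px: float, baseline_m: float) -> float:
    """Distance to a feature (e.g. a fingertip) from its horizontal pixel position
    in two rectified camera images, using the pinhole-stereo relation Z = f * B / d."""
    disparity_px = x_left_px - x_right_px
    if disparity_px <= 0:
        raise ValueError("the feature should appear further to the left in the left image")
    return focal_length_px * baseline_m / disparity_px

# Assumed values: 800 px focal length, 14 cm baseline (one camera near each lateral
# end of the HMD), fingertip seen at x = 500 px (left image) and x = 276 px (right image).
print(f"estimated hand distance: {depth_from_disparity(500, 276, 800, 0.14):.2f} m")
```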
Two inward-facing cameras 90 are provided on an inside edge of the frame 30, and are arranged to view the user's eyes 50. These cameras may be used as part of an identity detector (for example, using iris recognition of the current wearer), for eyeball tracking and other techniques. The use of the cameras 90 will be described in greater detail below.
Figure 2 is a schematic plan view of the HMD 10, this time not mounted on a user's head.
In this view, more details of the frame 30 may be seen, in particular a front section 32 linked by hinges 36 to the ear supports 34 which pass over the user's ears in normal use. The hinges 36 allow the frame to be folded in a conventional eyeglasses-like manner when not in use. A display device support arm 38 is connected, via a linkage 42, to the front section 32. The support arm 38 acts as a support for the display devices 70 and the inward-facing cameras 90.
The display devices 70 in turn support their respective display screen 60.
Cables 92 pass inside or along the ear supports 34 to processing sections 28. The cables 92 carry signals relating to the display devices 70, the inward facing cameras 90 and the forward facing camera 80. The amount of processing which takes place at the processing sections 28 can vary between embodiments of the present invention. At one extreme, the processing sections 28 simply provide a wired or wireless interface to a remote processing unit (not shown in Figure 2); at another extreme, the processing sections 28 can include all of the necessary data processing to carry out the operations to be described below. In between these extremes, there may be some local processing within the processing sections 28 and some remote processing via a connected processing device separate from the HMD.
Other sensors and inputs may optionally be provided. Examples shown in Figure 2 include a user pulse (heart rate) detector 12, a user skin temperature and/or conductivity sensor 13, a position sensor (such as a global positioning sensor) 11 and an accelerometer 14. Zero or more of these may be provided or used, as required.
Figure 3 is a schematic section across a user's head, showing the display screens 60 and display devices 70 in their positions (in use) relative to the user's eyes.
Figure 4 schematically illustrates an arrangement for transferring displayed images (as displayed by the display device 70) to the user via a partially reflective mirror 60' acting as the display screen 60.
The display device is arranged in a known manner to have suitable lenses and the like (not shown) so that light from the display device, when reflected to the user's eye position 51 by the mirror 60', appears to come from a virtual image position 461.
Figure 5 schematically illustrates another arrangement of the type disclosed in the documents referenced above, by which light from the display device 70 passes through a lens 72 onto a holographic reflector 62 which causes the light to be deflected along a planar optical waveguide 60" (shown in cross section and acting as the display screen 60) by total internal reflection, towards another holographic reflector 64 which deflects the light towards the user's eye 50.
In both cases, the mirror 60' and the waveguide 60" are partially transparent, for example having a 90% transmission, so the user may see the real world environment through the mirror 60' or the waveguide 60".
Figure 6 is a schematic perspective view of an HMD of the type discussed above. One of the processing sections 28 comprises a wireless interface, such as a Bluetooth (R) interface for communicating bidirectionally with another device, which in this case is shown as a mobile telephone or "smart phone" 200 which the user might carry, for example, in a pocket so as to keep it within Bluetooth communication range of the HMD.
The smartphone 200 comprises a communications interface 210 for communicating not only with the HMD 10 but also with Internet services, for example via a so-called 3G connection, a processor (CPU) 220 for carrying out processing operations such as those described with respect to Figures 18 and 19 below, memory storage (shown schematically as RAM 230) for storing software and/or firmware to control such operations and data relating to the implementation of such operations, and a user interface (UI) 240 having a display and user input keys or controls.
The software and/or firmware used by a data processor such as the CPU 220 to implement the techniques described here, and a storage, transmission or other medium (for example, a non-transitory storage medium such as an optical disk) by which such software is stored or otherwise provided, are considered as embodiments of the present invention.
The alignment of the two displays for a particular user is important so that the lateral separation of the displayed images corresponds to the distance between the user's eye pupils and so that the user does not experience false 3-D effects due to incorrect lateral displacement of image features displayed on the respective left and right eye display screens. It is also useful that the user's view is aligned to be straight ahead, because this will be assumed when images from the forward-facing camera 80 are used for gesture control, to be described below.
Figures 7A-7C schematically illustrate displayed images for use during an alignment process. In particular, Figure 7A schematically illustrates an image to be displayed to the user's left eye, and Figure 7B schematically illustrates an image to be displayed to the user's right eye.
When the two images are displayed simultaneously, the user sees a composite image which is represented schematically in Figure 7C as an overlap of the left eye and right eye images. Note that Figure 7C illustrates an alignment error which the user would need to correct.
In order to carry out an alignment process, the user observes the respective alignment images and adjusts the lateral position of one or both of the display arrangements (the display screen and display device for each eye) so that the test patterns provided by the alignment images overlap at the plane of the virtual image of the display screen.
Figure 8 schematically illustrates an electronic alignment arrangement for carrying out an alignment operation in order to align the test images of Figures 7A and 7B. These operations could be implemented by the CPU 220 under appropriate software control.
In particular, Figure 8 provides an image buffer 400, a lateral image adjuster 410 and a user control 420 as part of the image path to the display device 70. The image buffer 400 provides a temporary store for incoming image data, which is read out from the store by the lateral image adjuster 410 at an appropriate lateral (left-right) position according to a position established by the user control 420. Once the alignment calibration has been set up for a particular user, the user can ignore the user control 420.
The alignment process can be initiated automatically when it is detected that a new user has started to wear the HMD (see below for a discussion of this detection) or at the request of the user, or in response to a drift in time of the system (which can be detected, for example, by means of a deterioration in the error rate of gesture recognition) or any combination of these.
Alignment data set by the user can be stored against that user's identity and reinstated when that user is detected to start wearing the HMD on subsequent occasions. Techniques for detecting and using user identities will be described below.
Lateral image adjustment using the technique of Figure 8 relies on the display device 70 having a greater horizontal resolution than the horizontal size of an image for display, so that the image can be displayed at different lateral positions within the overall display range of the display device 70. If this is not the case, then lateral adjustment can still be carried out but may result in cropping of the left or right edge of the displayed image.
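The electronic adjustment can be pictured as reading the buffered image out at a user-controlled horizontal offset. The sketch below is an assumed implementation, not the patent's; it shows both situations described above, a display raster wider than the image and one of the same width, where shifting necessarily crops an edge.

```python
import numpy as np

def lateral_adjust(image: np.ndarray, display_width: int, offset_px: int) -> np.ndarray:
    """Place `image` (height x width) into a display raster of width `display_width`
    at horizontal position `offset_px`, cropping whatever falls outside the raster."""
    h, w = image.shape[:2]
    out = np.zeros((h, display_width) + image.shape[2:], dtype=image.dtype)
    src_start = max(0, -offset_px)
    dst_start = max(0, offset_px)
    span = min(w - src_start, display_width - dst_start)
    if span > 0:
        out[:, dst_start:dst_start + span] = image[:, src_start:src_start + span]
    return out

frame = np.ones((4, 6), dtype=np.uint8)
print(lateral_adjust(frame, 10, 2))   # wider raster: the whole frame fits at offset +2
print(lateral_adjust(frame, 6, 2))    # same-width raster: offset +2 crops the right edge
```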
An alternative approach is illustrated schematically in Figure 9, which provides a mechanical lateral adjustment of the displayed image position. The arrangement for one eye is shown in Figure 9, but a corresponding system can be provided for the other eye as well.
In basic terms, the display device 70 and the display screen 60 are mounted with respect to the frame 30 of the HMD by means of an adjustable mounting such as a screw-threaded adjuster having a screw-threaded shaft 430, a thumbwheel 440 by which the user may turn the shaft, and a series of indentations 450 on the display device 70 and/or display screen to engage with the screw threads on the shaft. Suitable mounting bearings (not shown) are also provided. Rotation of the thumbwheel 440 by the user causes lateral motion of the display device 70 and the display screen 60 to allow lateral alignment to be carried out.
So, by either of these techniques, the two images displayed by the left eye and right eye display screens can be aligned so that the user sees them correctly when looking straight ahead.
Figure 10 is a plan view schematically illustrating the generation of a virtual image of a display screen by the HMD 10.
Using known techniques, the display by the display devices 70 is established so that the user is observing a virtual image 460 of a large display screen some considerable distance from the user. The separation 470 between the user and the virtual position of the virtual display screen 460 can be established by the design of the lens or other system used to transmit the displayed images to the user. For example, it may be more than 1 m, so that the user's finger or hand is positioned between the user and the virtual display. In one example, the virtual display screen 460 might be 20 m from the user. In general terms, those skilled in the art of HMD systems might suggest that any distance from 9 m upwards would be perceived by the user to be effectively "infinity"; at or above this distance, for most users, the user's eyes are at their most relaxed, leading to a more pleasurable viewing experience. In an alternative, the viewing distance (the distance from the user's eyes to the virtual display screen) could be set at, say, 50-100 cm. The user could switch between the two viewing distances by operating a mechanical control or the like, associated with the HMD, to change the characteristics of the HMD's optical system. For example, when reading a book or operating a virtual keyboard, the user might prefer to change to a closer virtual display, compared to the situation when the user is (for example) viewing a movie.
Figure 11 is a plan view schematically illustrating the positioning of a user's finger or hand with respect to the virtual display screen 460.
Referring also to Figure 12, the separation 480 between the user's HMD and the same user's finger or hand is likely, for most users, to be well under 1 m. In Figure 11, the position of the user's finger is schematically drawn as an arrow 490.
Various techniques to be described below rely on associating a position of the user's finger or hand with corresponding material displayed on the virtual display screen 460. The position of the user's finger or hand is detected by the forward-facing camera 80 mounted on the HMD. A gesture detector then detects gestures by the observer in the field of view of the one or more forward-facing cameras.
From the user's point of view, because the user's hand is much closer to the user than the virtual display screen 460 is to the user, if the user focuses on the finger or hand then the user might see a double image of the virtual display screen 460. However, if the user focuses on the virtual display screen 460 then the user might see a double image of the finger or hand.
There is therefore some potential for slight ambiguity when a detection is made of a particular position of the virtual display screen 460 which the user is pointing to or otherwise addressing with a finger or hand. In particular, assuming the user is focused on the virtual display screen 460, then the user's finger or hand, as seen by the user's right eye, will point to a position further to the left (with respect to the virtual display screen) than the position pointed to by the same finger or hand as seen by the user's left eye.
In an embodiment of the invention, this potential for slight ambiguity can be ignored altogether. However, in other embodiments, such as that shown in Figure 11, the use of a single, centrally positioned, forward-facing camera 80 to detect the position of the user's finger or hand with respect to the HMD 10 means that this left-right ambiguity is averaged and the detected position of the user's finger or hand, in a left-right direction, is half way between the position (relative to the virtual display screen 460) as perceived by the user's right eye and the user's left eye.
Of course, in other embodiments which provide a display to only one of the user's eyes, there is no potential for ambiguity in the detection of the position of the user's hand, finger or the like relative to material displayed on the virtual display screen 460.
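The left/right ambiguity and the averaging performed by a single central camera can be illustrated with a short geometric sketch (the inter-pupillary distance and finger position below are assumptions, and the 20 m screen distance reuses the earlier example):

```python
def pointed_position_on_screen(eye_x: float, finger_x: float,
                               finger_dist_m: float, screen_dist_m: float) -> float:
    """Project the ray from an eye (lateral position eye_x, depth 0) through a fingertip
    (lateral position finger_x, depth finger_dist_m) onto the virtual screen plane at
    depth screen_dist_m; returns the lateral position of the hit point."""
    return eye_x + (finger_x - eye_x) * (screen_dist_m / finger_dist_m)

IPD = 0.065          # assumed inter-pupillary distance (m)
SCREEN_DIST = 20.0   # virtual display screen 20 m away, as in the earlier example
FINGER_DIST = 0.5    # fingertip roughly half a metre from the eyes
FINGER_X = 0.10      # fingertip 10 cm to the right of the centre line of the head

left_eye_hit = pointed_position_on_screen(-IPD / 2, FINGER_X, FINGER_DIST, SCREEN_DIST)
right_eye_hit = pointed_position_on_screen(+IPD / 2, FINGER_X, FINGER_DIST, SCREEN_DIST)
camera_hit = pointed_position_on_screen(0.0, FINGER_X, FINGER_DIST, SCREEN_DIST)

print(f"left-eye hit: {left_eye_hit:.2f} m, right-eye hit: {right_eye_hit:.2f} m")
print(f"central camera reports {camera_hit:.2f} m "
      f"(midpoint of the two eyes: {(left_eye_hit + right_eye_hit) / 2:.2f} m)")
```

As the printed values show, the position reported by a centrally mounted camera sits half way between the positions perceived by the two eyes, which is the averaging behaviour described above.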
Various ways will now be described in which a control input, for example to a data processing system, a video replay system, a gaming machine or the like, can be generated by a detection of the position of a gesture by the user's hand, finger or the like relative to the virtual display screen 460. Examples will be illustrated schematically in Figures 13-17, each of which shows a schematic representation of the captured position 500 of a user's hand, as captured by the forward-facing camera 80, overlaid on a representation of material displayed on the virtual display screen 460.
Figure 13 schematically illustrates a gesture-controlled text highlighting operation. The user may use his finger 510 to point to text (such as text 520 in the example shown) for highlighting. This operation can be carried out in a similar manner to a highlighting operation using a computer mouse, whereby the user may (for example) slide his finger along the text to be highlighted. The highlighted text may be changed in colour or could have a display box drawn around it or the like. Optionally, the highlighting of text in this way can prompt the underlying data processing system to display possible outcomes which can be selected by the user, such as asking the user whether the highlighted text should be saved (stored to a file) as a separate entity, for example by displaying a box with the word "save?" and the user-selectable options "Y" (yes) and "N" (no), either of which the user can select by pointing his finger 510 to the display position 530, 540 of the user-selectable options. In some embodiments, once highlighted, the text may be dragged or moved (with respect to its initial display position) by the user moving the finger through the air from the finger's initial position, defining a movement path with respect to the virtual display screen. This movement (dragging) action could be terminated by detection of a specific gesture, such as moving the finger towards or away from the user in a sudden movement. Alternatively, the data processing system can provide even more applicable menu/user feedback by taking extra contextual data into account. For example, selecting text in a leisure book could cause the system automatically to store that selection in the user's social networking account (for example, in Facebook™) whereas selecting text in a professional report could cause the system to trigger the data to be synchronised with co-workers or the like.
Figure 14 schematically illustrates another gesture-controlled option selection operation, in which a data processing system offers a number of options for display on the virtual display screen 460, each option having an associated activation "button" 550, 560, 570, 580 and so on. Simply moving the user's finger to overlie the display position of one of the buttons (in the example shown, the button 560) causes the data processing option associated with the button to be selected.
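A minimal hit-testing sketch of the kind implied by Figure 14 follows; the coordinate convention, button layout and identifiers are assumptions introduced purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Button:
    name: str
    x: float   # left edge, in virtual-display coordinates (0..1)
    y: float   # top edge
    w: float   # width
    h: float   # height

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def select_button(buttons, fingertip_xy):
    """Return the button whose display area the detected fingertip overlies, if any."""
    px, py = fingertip_xy
    for button in buttons:
        if button.contains(px, py):
            return button
    return None

buttons = [Button("550", 0.1, 0.2, 0.2, 0.1), Button("560", 0.1, 0.4, 0.2, 0.1),
           Button("570", 0.1, 0.6, 0.2, 0.1), Button("580", 0.1, 0.8, 0.2, 0.1)]
hit = select_button(buttons, (0.18, 0.45))   # fingertip overlying button 560
print("selected:", hit.name if hit else "nothing")
```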
Figure 15 schematically illustrates a gesture-controlled scrolling operation. Here, a list of options 590, or just a long text or image section, is too long for the display of the entire list at any one time on the virtual display screen 460. However, if the user moves the position of the user's hand in a vertical direction up or down, the user may scroll through the displayed list. So, if the user moves the user's hand vertically upwards, the list as displayed will move upwards so that the text "Option A" will tend to move off the top of the display screen 460, and new text will be enabled to be displayed below the wording "Option D". Scrolling the finger or hand left to right or right to left can cause an indenting operation to be performed on the selected text or at a text position defined by a separate pointing gesture.
Figure 16 schematically illustrates a gesture-controlled page-turning operation in respect of a set of pages 600, 610 of a book which is displayed on the virtual display screen 460. By moving the position of the user's hand in a right-to-left direction across the page 610 displayed on the right-hand side of the virtual display screen 460, the page can be turned in a forward direction through the book. Similarly, moving the user's hand in a left-to-right direction over the page 600 displayed on the left side of the virtual display screen 460 will cause the page 600 to be turned in a backwards direction through the book.
Figure 17 schematically illustrates a gesture-controlled video jog or shuttle control operation.
In Figure 17, video images 620 are being displayed on the virtual display screen 460. If the user's hand is detected within the area defined by the virtual display screen 460, then a jog or shuttle control 630 is displayed. In other words, the control 630 is not normally displayed during video replay, and is only displayed if the user's hand enters the area defined by the virtual display screen 460.
The user may initiate a fast-forward or fast-rewind operation with respect to the displayed video by placing the user's hand over the control 630 and moving the hand (which causes the control's display position to be moved with the hand) to the left or right. A movement to the left would initiate a rewind operation, potentially with the rewind speed being dependent on the displacement of the user's hand from the initial position of the control 630. Similarly, a movement to the right would initiate a fast-forward operation, again with the fast-forward speed potentially being dependent on the displacement to the right of the user's hand from the initial position of the control 630. This type of operation might be referred to as a "shuttle" operation.
In an alternative "jog" operation, a position within the video material is effectively mapped to the lateral position (as set by the user moving the user's hand position 500) of the control 630. In this arrangement, a movement (for example) to the left would cause the currently displayed position in the displayed video material to be moved back in time (with respect to the timeline of the video material) by an amount proportional to the lateral displacement of the control 630. The system may be set to use either the jog or the shuttle mode, but not the other, or alternatively the user could select either jog or shuttle mode by means of option buttons such as those described with respect to Figure 14 above.
The technical features common to each of these example arrangements are that the position of the user's hand, finger or the like is detected with respect to display positions on the virtual display screen 460 as perceived by the user. Different gestures such as pointing, drawing shapes "in the air" such as the shape of a question mark (?), jabbing the hand (making a sudden pointing motion in a particular direction), waving the hand (for example to cause a scrolling operation as described with reference to Figure 15 or a page turning operation as described with reference to Figure 16) can be detected as separate gestures. The combination of the detected gesture, optional additional sensor information (such as accelerometer data, body temperature data), image recognition (such as face detection or building detection), other contextual data (such as geographical position, connection speed, time) and the display position (relative to the virtual display screen 460) provides a control input to an associated data processing, gaming, video replay or other device. For example, the detection of a gesture representing a question mark in respect of a particular display position may indicate to the controlled device that the user requires help information or further detail about a data item displayed at that image position. Another example taking more information into account can be the user looking at a historic building in a known location at night; a "question mark" gesture by the user's hand may cause a story to be invoked, downloaded or otherwise displayed about that building specifically related to the nighttime. In another example, a sailor wearing the glasses might have dash-board information relating to his boat displayed; this information can be associated with current and predicted weather information. A further example is that the user drawing or tracing out a circle in the air could cause a social networking site to be invoked on a web browser, and displayed by the glasses. Drawing or tracing out a square could cause the system to capture and store a photograph of the area defined by the drawn square (or a regularised version of the drawn square). The photograph can be captured at an instant defined by, for example, a double blink of the user's eyes (as captured by the cameras 90).
In some embodiments of the invention, a gesture control system associated with an HMD can provide raw gesture data to a controlled device or other processor, for example specifying that a hand moved from position A (with respect to the virtual display screen 460) to position B, leaving it to the processor to determine what control operation (if any) such a hand movement would initiate. In other embodiments, a gesture control system associated with an HMD can provide for the generation of control menus and the like (depending on the control function to be handled) and can provide analysis of the user gestures so as to provide a higher level of control information to the processor, such as "initiate operation X on data Y". The embodiment to be described with respect to Figure 18 operates at this higher level, but by omitting features from the system of Figure 18, the raw movement data can be passed to the processor as an alternative.
Accordingly, Figure 18 schematically illustrates a gesture control system which uses information captured by the forward-facing camera (or plural forward-facing cameras as the case may be) to detect user gestures relative to display positions on the virtual display screen 460 and to provide high level controls to a controlled device or processor 700 in dependence upon the detected gestures and display positions.
Note that the processor is shown separately in Figure 18, and could indeed be a separate device to the device carrying out the gesture detection. Alternatively, the two devices could be the same, such that the gesture detection and related processing is carried out by a device acting as a video signal source, such as the smartphone 200.
Referring to Figure 18, captured images from the camera 80 are passed to a gesture detector 710. Optionally, the gesture detector 710 can operate in conjunction with an identity detector 720 for detecting the identity of a current wearer of the head mountable display, so that the detection of user gestures can be made dependent upon which user is currently operating the HMD. In this arrangement, the gesture detector is configured so as to compare data derived from images captured by the one or more cameras with gesture data stored in respect of the detected user identity.
Identity detection by the identity detector 720 can be carried out in various ways. For example, the user might indicate his identity by means of a user input such as pressing one of a set of control buttons (one for each user) mounted on or associated with the HMD. In another alternative, the inward-facing cameras 90 can be used to examine the irises of the current user's eyes. It is known that people can be distinguished from one another by patterns associated with their irises, and so the identity detector 720 can retain a store of iris patterns from previous users and compare the current iris images with those patterns. Alternatively, a display can be projected with, for example, a virtual key pad, allowing the user to enter a personal identification number or the like by gestures. None of these options necessarily allows an absolute detection of a user's identity, but they do allow users to be distinguished from other users within a group of users associated with a particular HMD.
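For the iris-based option, one common approach (assumed here; the patent does not prescribe a particular algorithm) is to compare a binary iris code extracted from the inward-facing camera images against stored codes using a normalised Hamming distance:

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of bits that differ between two equal-length binary iris codes."""
    return float(np.count_nonzero(code_a != code_b)) / code_a.size

def identify_wearer(candidate_code: np.ndarray, stored_codes: dict, threshold: float = 0.32):
    """Return the stored identity whose iris code is closest to the candidate,
    or None if no stored code is close enough to be treated as a match."""
    best_id, best_dist = None, 1.0
    for user_id, code in stored_codes.items():
        dist = hamming_distance(candidate_code, code)
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    return best_id if best_dist <= threshold else None

rng = np.random.default_rng(0)
alice = rng.integers(0, 2, 2048, dtype=np.uint8)
bob = rng.integers(0, 2, 2048, dtype=np.uint8)
fresh_capture = alice.copy()                   # a new capture of the first user's iris
fresh_capture[rng.random(2048) < 0.10] ^= 1    # with roughly 10% bit noise
print(identify_wearer(fresh_capture, {"alice": alice, "bob": bob}))   # -> alice
```

Returning None in the unmatched case corresponds to the situation described below, in which a new identity is set up for an unrecognised wearer.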
The identity detector 720 can have one or more associated pressure sensors (not shown). The pressure sensors detect whether the HMD is being worn by a user. So, when the HMD is first mounted to the user's head, the identity detector 720 operates to detect the identity of the current user, at least from within a group of known users associated with that HMD. There is then no need for the identity detector 720 to operate again until the pressure sensors detect that the HMD has been removed from that user and once again mounted to a user's head.
If the identity detector 720 is not able to establish the identity of the current user, a new identity could be set up in association with identification data obtained from the user (such as the pressing of a particular control button or the detection of the current user's iris patterns).
When that user is encountered again, the new identity set up at the first encounter can be reused for that user.
So, the gesture detector 710 has image information from the forward-facing camera 80 and, optionally, user identification information defining which user is currently operating the HMD. The gesture detector acts to detect gestures by the observer in the field of view of the one or more forward facing cameras, and to detect the image position, with respect to an image displayed on the virtual image of the video display, which is coincident, from the observer's point of view, with the detected gesture.
A third source of information for the gesture detector 710 is a display controller 740 which indicates to the gesture detector 710 what is currently being displayed on the displays 60, as seen by the user.
The operation of the gesture detector 710 will be described in greater detail with reference to Figure 19 below. In short, however, the gesture detector 710 provides control information to a control interface 750 and to a menu generator 760. The control interface acts to control the video signal source (the processor 700 in this example) to vary the video signal in dependence upon a detected gesture and the detected image position coincident with the detected gesture.
The flow of control is therefore as follows. The processor 700 provides the basic information, data or images to be displayed to the user. So, the processor 700 passes display information to the display controller 740 for passing to the display devices 70. The processor 700 also passes data to the control interface 750 defining the type of controls which the processor 700 can receive in its current mode of operation.
Depending on the type of control operations which can be carried out in the processor's current mode of operation, the control interface 750 can indicate to the gesture detector 710 the types of gestures to be detected. For example, if the processor 700 is currently displaying book pages, then the type of gestures to be detected in the current mode of operation would be generally horizontal movements of the user's hand in either the left half or the right half of the displayed image. Alternatively, if the processor 700 is currently displaying a list of menu options, then the type of gestures to be detected could include sweeping vertical motions of the user's hand (to cause scrolling of the list) and pointing motions directed at particular items in the list to cause selection of the respective item.
The operation of the menu generator 760 depends on the nature of the processor 700.
In some instances, the menu generator 760 may not be required at all because the gesture detection is used to navigate through menus or other controls generated by the processor 700 itself. In these circumstances, the HMD and the gesture detection system are being used simply as an alternative user interface to the processor 700, for example in place of a mouse or similar control. In other situations, however, the menu generator 760 can generate information to be superimposed over the display information received from the processor 700. In this context, the term "menu" has a broad meaning, and refers to any type of command or similar information which might be superimposed over the display information received from the processor 700. An example is shown in Figure 17 described above, in which a jog/shuttle control 630 is superimposed over video material provided by a video replay device as the processor 700. The menu generator 760 can be operable to superimpose the control 630, so that the processor 700 simply receives forwards or backwards replay controls from the control interface 750 but does not need to generate on-screen material such as the control 630.
In this way, the gesture detector 710 is guided as to the type of gestures to be detected in the current context. The menu generator 760 operates, if needed, to generate representations of user controls to be displayed on the virtual display screen 460. The control interface 750 provides data to the gesture detector 710 and the menu generator 760 defining the types of controls relevant to the processor 700 in its current mode of operation. The control interface 750 receives information from the gesture detector and, possibly, from the menu generator 760, to generate high level controls to be passed to the processor 700.
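The sequencing among these blocks might be sketched as below; the class and method names are assumptions introduced only to make the flow of control concrete, and are not taken from the patent.

```python
class Processor:
    """Stand-in for the controlled device or processor 700."""
    def available_controls(self):
        # Advertise which controls make sense in the current mode of operation,
        # e.g. page-turning gestures while book pages are being displayed.
        return {"page_forward", "page_back"}

    def execute(self, command):
        print("processor executes:", command)

class GestureDetector:
    def __init__(self):
        self.expected = set()

    def set_expected_gestures(self, controls):
        # Guided by the control interface as to which gestures are relevant now.
        self.expected = set(controls)

class ControlInterface:
    """Mediates between the gesture detector and the processor 700."""
    def __init__(self, processor, detector):
        self.processor, self.detector = processor, detector

    def update_context(self):
        self.detector.set_expected_gestures(self.processor.available_controls())

    def on_gesture(self, gesture, image_position):
        # Combine the detected gesture with its coincident display position
        # to form a high-level command for the processor.
        if gesture in self.detector.expected:
            self.processor.execute((gesture, image_position))

processor, detector = Processor(), GestureDetector()
interface = ControlInterface(processor, detector)
interface.update_context()
interface.on_gesture("page_forward", (0.8, 0.5))   # a right-to-left sweep over the right-hand page
```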
Figure 19 schematically illustrates the operation of the gesture detector 710 and the identity detector 720 in more detail.
Image data received from the camera 80 is passed to a feature point detector 800 which, using known techniques, extracts so-called feature points from the captured images. The feature points relate to image positions at which significant features of the user's hand are located in the captured image data. Examples of the significant features may be the fingertips and other joints.
The detected feature points are passed to a feature point tracker 810 which detects the time-dependence of the motion of the detected feature points. This is achieved by comparing the positions of corresponding feature points in successive images captured by the camera 80.
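A minimal nearest-neighbour tracker illustrates the idea (this is an assumed sketch; practical systems would typically use optical flow or a more robust association step):

```python
import math

def track_feature_points(previous_pts, current_pts, max_move_px=40.0):
    """Associate each named feature point from the previous frame with its nearest
    neighbour in the current frame, giving a per-point motion vector in pixels.
    Points that appear to move further than `max_move_px` are treated as lost."""
    motions = {}
    for name, (px, py) in previous_pts.items():
        best, best_d = None, max_move_px
        for cx, cy in current_pts:
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = (cx, cy), d
        if best is not None:
            motions[name] = (best[0] - px, best[1] - py)
    return motions

prev = {"index_fingertip": (320.0, 240.0), "thumb_tip": (290.0, 300.0)}
curr = [(326.0, 232.0), (291.0, 299.0)]
print(track_feature_points(prev, curr))
# {'index_fingertip': (6.0, -8.0), 'thumb_tip': (1.0, -1.0)}
```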
A gesture comparator 820 compares the tracked feature point data with information either from a local gesture store 830 or a remote gesture store 840 which may be accessed via an Internet connection 850. The use of a remote gesture store 840 allows a so-called "cloud" based approach to be used for the recognition of different types of gestures, but more importantly for "multi-point" gestures such as two-hands connecting to one another, two fingers pointing to one another and the like, in which the gesture detector is configured to detect gestures by more than one finger or hand moving relative to one another. The tracked feature point data relating to particular gestures can be compared with corresponding data relating to a large group of other users, potentially leading to a more accurate detection of a particular gesture based on the feature point data.
In either case (a local gesture store or a remote gesture store) the identity detector 720 can provide the gesture comparator 820 with information defining the current user of the HMD.
The user information can be obtained from the source as discussed above, and compared with data stored in a local identity store 860 or a remote identity store 870 accessed via the Internet connection 850.
An image location comparator 880 compares the detected location of a detected gesture with data received from the display controller 740 so as to detect which item of displayed information the current gesture relates to. A gesture output 890 provides the gesture data (that is to say, data defining which gesture has been used) and associated location information and/or information defining which displayed text or other material the gesture related to. This forms the output of the gesture detector 710 and is passed to the menu generator 760 and the control interface 750.
As described above, the processor 700 receives control information from the control interface 750. The processor 700 can also pass metadata back to the control interface 750 defining the type of material that is being displayed by the displays 60, 70. So, for example, the metadata can define areas of the display that are to be made sensitive to gesture-based controls. In this way, the metadata can define the areas associated with each of the selection icons 550 to 580 discussed above, for example. Therefore, the video signal source can be configured to provide data to the gesture detector to define areas, relative to the image displayed on the virtual image of the virtual display, relating to control commands for implementation by the video signal source. The areas could define, for example, a virtual keyboard having two or more input keys each operable by the user's gestures at a corresponding one of the defined areas. The display areas sensitive to gesture control can potentially be made larger than the actual displayed items themselves, to allow for user inaccuracies in the precise position he/she is pointing to.
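The idea of metadata defining gesture-sensitive areas, padded beyond the drawn item to forgive pointing inaccuracy, could be expressed as follows; the data layout, padding value and key names are assumptions for illustration only.

```python
def make_sensitive_area(drawn_rect, padding=0.03):
    """Expand a drawn item's rectangle (x, y, w, h in virtual-display units)
    so that slightly inaccurate pointing still registers as a hit."""
    x, y, w, h = drawn_rect
    return (x - padding, y - padding, w + 2 * padding, h + 2 * padding)

def hit(area, point):
    x, y, w, h = area
    px, py = point
    return x <= px <= x + w and y <= py <= y + h

# Metadata for a two-key virtual keyboard ("Y"/"N", as in the example of Figure 13);
# the layout is purely illustrative.
keyboard = {key: make_sensitive_area((0.1 + i * 0.15, 0.8, 0.08, 0.08))
            for i, key in enumerate("YN")}

fingertip = (0.23, 0.79)   # just left of the drawn "N" key, but within its padded area
pressed = [key for key, area in keyboard.items() if hit(area, fingertip)]
print("keys registered:", pressed)   # ['N']
```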
Accordingly, the video signal source can be configured to indicate, to the controller, a set of control functions applicable to a video signal currently being generated by the video signal source.
A context sensitive avatar can be displayed in response to user gestures. The avatar can learn gestures employed by a particular user and respond to them.
The system can be made to respond to a laser pointer. This would work as follows. The user positions himself in front of a surface from which a laser pointer can be reflected. The user shines the laser pointer at the surface, and the reflected light is captured by the forward facing camera and, using the techniques described above, is translated to a position with respect to the virtual display screen and used to initiate controls with respect to that detected position.
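Locating the reflected spot in the forward-facing camera image could be as simple as finding the centroid of the brightest pixels; the sketch below is a naive illustration with an assumed threshold, not the patent's method.

```python
import numpy as np

def find_laser_spot(frame: np.ndarray, threshold: int = 250):
    """Return the (row, col) centroid of pixels at or above `threshold` in a
    greyscale frame, or None if no sufficiently bright pixels are present."""
    ys, xs = np.nonzero(frame >= threshold)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic 480 x 640 frame with a bright reflected spot around row 120, column 400.
frame = np.full((480, 640), 40, dtype=np.uint8)
frame[118:123, 398:403] = 255
print(find_laser_spot(frame))   # approximately (120.0, 400.0)
```

The detected centroid would then be translated to a position on the virtual display screen and treated in the same way as a detected fingertip position.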
The glasses could display material which depends on the user's location and/or the user's "mood", as detected by (for example) the user's heart rate (by the pulse sensor described above), skin temperature or skin conductivity.
The forward facing camera could be used to take photographs. For example, a panoramic picture can be captured by the user turning his head. The last n seconds of video material from the forward facing camera could be captured and stored (for example, in a remote or cloud based store) as a rolling buffer, and preserved if the user issues a command to do so.
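The rolling buffer described here could be as simple as a bounded deque of frames that is snapshotted when the user issues a preserve command; the frame rate and duration below are assumptions.

```python
from collections import deque

class RollingVideoBuffer:
    """Keep only the most recent `seconds` of captured frames."""
    def __init__(self, seconds: float, fps: float):
        self.frames = deque(maxlen=int(seconds * fps))

    def add_frame(self, frame):
        self.frames.append(frame)    # the oldest frame is discarded automatically

    def preserve(self):
        # In the arrangement described above, this snapshot would be written to
        # local or cloud-based storage rather than simply returned.
        return list(self.frames)

buffer = RollingVideoBuffer(seconds=10, fps=30)
for i in range(1000):                 # simulate roughly 33 s of capture
    buffer.add_frame(f"frame-{i}")
saved = buffer.preserve()
print(len(saved), saved[0], saved[-1])   # 300 frame-700 frame-999
```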
An example of the processor 700 is a computer games machine. Here, the user's gestures could control a computer game. The controls (passed from the control interface 750 to the processor 700) could be arranged so as to mimic the normal control data provided by a computer game controller, in which case the computer games machine need not be aware that the glasses are being used as the display and controller. Alternatively, the computer game could be arranged to cooperate specifically with the glasses, in which case metadata could be passed from the processor 700 to the control interface 750 as described above to define control functions to be provided. Therefore, in this example, in which the video signal source is a video gaming or data processing machine, the system is configured to provide control inputs to the gaming or data processing machine in dependence upon detected gestures and the detected image position coincident with the detected gestures.
Another example of the processor 700 is one in which the video signal source is a video replay device, in which case the system comprises a control display generator for superimposing one or more control displays over the video signal provided by the video signal source; and the system is configured to control the replay of video signals by the video signal source in response to detected gestures coincident with the one or more control displays.
As mentioned above, the head mountable display can comprise one or more further detectors selected from the list consisting of a position detector; an acceleration detector; a user skin temperature detector; a user skin conductivity detector; and a user pulse rate detector; and the video signal source can be responsive to detections by one or more of the further detectors.

Claims (1)

CLAIMS

1. A head-mountable display system comprising: a frame to be mounted onto an observer's head, the frame defining one or two eye display positions which, in use, are positioned in front of a respective eye of the observer; a display element mounted with respect to each of the eye display positions, the display element providing a virtual image of a video display of a video signal from a video signal source to that eye of the observer without entirely obscuring the observer's view of the environment in line with the virtual image of the video display; one or more cameras directed away from the observer towards the position of the virtual image of the video display; a gesture detector for detecting gestures by the observer in the field of view of the one or more cameras; a processor for detecting the image position, with respect to an image displayed on the virtual image of the video display, which is coincident, from the observer's point of view, with the detected gesture; and a controller for controlling the video signal source to vary the video signal in dependence upon a detected gesture and the detected image position coincident with the detected gesture.

2. A system according to claim 1, in which the display element comprises a partially transparent screen disposed, in use, in front of the user's eye.

3. A system according to claim 1, in which: the video signal source is a video replay device; the system comprises a control display generator for superimposing one or more control displays over the video signal provided by the video signal source; and the system is configured to control the replay of video signals by the video signal source in response to detected gestures coincident with the one or more control displays.

4. A system according to claim 1 or claim 2, in which the video signal source is a video gaming or data processing machine, and the system is configured to provide control inputs to the gaming or data processing machine in dependence upon detected gestures and the detected image position coincident with the detected gestures.

5. A system according to any one of the preceding claims, in which, in use, the virtual image is generated at a distance of more than one metre from the frame.

6. A system according to any one of the preceding claims, comprising an identity detector for detecting the identity of a current wearer of the head mountable display.

7. A system according to claim 6, in which the gesture detector is configured so as to compare data derived from images captured by the one or more cameras with gesture data stored in respect of the detected user identity.

8. A system according to claim 6 or claim 7, in which the identity detector comprises one or more further cameras arranged to capture images of the current wearer's eyes.

9. A system according to any one of the preceding claims, in which the video signal source is configured to indicate, to the controller, a set of control functions applicable to a video signal currently being generated by the video signal source.

10. A system according to any one of the preceding claims, in which the gesture detector is configured to detect gestures by more than one finger or hand moving relative to one another.

11. A system according to any one of the preceding claims, in which the video signal source is configured to provide data to the gesture detector to define areas, relative to the image displayed on the virtual image of the virtual display, relating to control commands for implementation by the video signal source.

12. A system according to claim 11, in which the areas define a virtual keyboard having two or more input keys each operable by the user's gestures at a corresponding one of the defined areas.

13. A system according to any one of the preceding claims, in which the head mountable display comprises one or more further detectors selected from the list consisting of: a position detector; an acceleration detector; a user skin temperature detector; a user skin conductivity detector; and a user pulse rate detector; and the video signal source is responsive to detections by one or more of the further detectors.

14. A display method for a head-mountable display system having a frame to be mounted onto an observer's head, the frame defining one or two eye display positions which, in use, are positioned in front of a respective eye of the observer, a display element mounted with respect to each of the eye display positions, the display element providing a virtual image of a video display of a video signal from a video signal source to that eye of the observer without entirely obscuring the observer's view of the environment in line with the virtual image of the video display; the method comprising the steps of: capturing images using one or more cameras directed away from the observer towards the position of the virtual image of the video display; detecting gestures by the observer in the field of view of the one or more cameras; detecting the image position, with respect to an image displayed on the virtual image of the video display, which is coincident, from the observer's point of view, with the detected gesture; and controlling the video signal source to vary the video signal in dependence upon a detected gesture and the detected image position coincident with the detected gesture.

15. Computer software for carrying out a method according to claim 14.

16. A storage medium by which computer software according to claim 15 is stored.
GB201116467A 2011-09-23 2011-09-23 A Head-mountable display with gesture recognition Withdrawn GB2494907A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB201116467A GB2494907A (en) 2011-09-23 2011-09-23 A Head-mountable display with gesture recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB201116467A GB2494907A (en) 2011-09-23 2011-09-23 A Head-mountable display with gesture recognition

Publications (2)

Publication Number Publication Date
GB201116467D0 (en) 2011-11-09
GB2494907A (en) 2013-03-27

Family

ID=44993299

Family Applications (1)

Application Number Title Priority Date Filing Date
GB201116467A Withdrawn GB2494907A (en) 2011-09-23 2011-09-23 A Head-mountable display with gesture recognition

Country Status (1)

Country Link
GB (1) GB2494907A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013207528A1 (en) * 2013-04-25 2014-10-30 Bayerische Motoren Werke Aktiengesellschaft A method for interacting with an object displayed on a data goggle
WO2015020888A1 (en) * 2013-08-05 2015-02-12 Microsoft Corporation Two-hand interaction with natural user interface
DE102014222194A1 (en) * 2014-10-30 2016-05-04 Bayerische Motoren Werke Aktiengesellschaft Vehicle with three-dimensional user interface
WO2016138178A1 (en) * 2015-02-25 2016-09-01 Brian Mullins Visual gestures for a head mounted device
CN106575151A (en) * 2014-06-17 2017-04-19 奥斯特豪特集团有限公司 External user interface for head worn computing
US9910501B2 (en) 2014-01-07 2018-03-06 Toshiba Global Commerce Solutions Holdings Corporation Systems and methods for implementing retail processes based on machine-readable images and user gestures
US10019149B2 (en) 2014-01-07 2018-07-10 Toshiba Global Commerce Solutions Holdings Corporation Systems and methods for implementing retail processes based on machine-readable images and user gestures
EP3438782A1 (en) * 2017-08-01 2019-02-06 Leapsy International Ltd. Wearable device with thermal imaging function
WO2019030467A1 (en) * 2017-08-08 2019-02-14 Sony Interactive Entertainment Inc. Head-mountable apparatus and methods
USD864959S1 (en) 2017-01-04 2019-10-29 Mentor Acquisition One, Llc Computer glasses
US10684687B2 (en) 2014-12-03 2020-06-16 Mentor Acquisition One, Llc See-through computer display systems
DE102020100072A1 (en) * 2020-01-03 2021-07-08 Bayerische Motoren Werke Aktiengesellschaft Method and system for displaying a list as augmented reality
EP4235259A3 (en) * 2014-07-31 2023-09-20 Samsung Electronics Co., Ltd. Wearable glasses and a method of displaying image via the wearable glasses

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0679984A1 (en) * 1994-04-22 1995-11-02 Canon Kabushiki Kaisha Display apparatus
DE10054242A1 (en) * 2000-11-02 2002-05-16 Visys Ag Method of inputting data into a system, such as a computer, requires the user making changes to a real image by hand movement
WO2002041069A1 (en) * 2000-11-14 2002-05-23 Siemens Aktiengesellschaft Method for visually representing and interactively controlling virtual objects on an output visual field
US20100110368A1 (en) * 2008-11-02 2010-05-06 David Chaum System and apparatus for eyeglass appliance platform
US20110213664A1 (en) * 2010-02-28 2011-09-01 Osterhout Group, Inc. Local advertising content on an interactive head-mounted eyepiece

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9910506B2 (en) 2013-04-25 2018-03-06 Bayerische Motoren Werke Aktiengesellschaft Method for interacting with an object displayed on data eyeglasses
DE102013207528A1 (en) * 2013-04-25 2014-10-30 Bayerische Motoren Werke Aktiengesellschaft A method for interacting with an object displayed on a data goggle
WO2015020888A1 (en) * 2013-08-05 2015-02-12 Microsoft Corporation Two-hand interaction with natural user interface
US9529513B2 (en) 2013-08-05 2016-12-27 Microsoft Technology Licensing, Llc Two-hand interaction with natural user interface
US10019149B2 (en) 2014-01-07 2018-07-10 Toshiba Global Commerce Solutions Holdings Corporation Systems and methods for implementing retail processes based on machine-readable images and user gestures
US9910501B2 (en) 2014-01-07 2018-03-06 Toshiba Global Commerce Solutions Holdings Corporation Systems and methods for implementing retail processes based on machine-readable images and user gestures
CN106575151A (en) * 2014-06-17 2017-04-19 奥斯特豪特集团有限公司 External user interface for head worn computing
EP3180676A4 (en) * 2014-06-17 2018-01-10 Osterhout Group, Inc. External user interface for head worn computing
EP4235259A3 (en) * 2014-07-31 2023-09-20 Samsung Electronics Co., Ltd. Wearable glasses and a method of displaying image via the wearable glasses
DE102014222194A1 (en) * 2014-10-30 2016-05-04 Bayerische Motoren Werke Aktiengesellschaft Vehicle with three-dimensional user interface
US11262846B2 (en) 2014-12-03 2022-03-01 Mentor Acquisition One, Llc See-through computer display systems
US11809628B2 (en) 2014-12-03 2023-11-07 Mentor Acquisition One, Llc See-through computer display systems
US10684687B2 (en) 2014-12-03 2020-06-16 Mentor Acquisition One, Llc See-through computer display systems
US9652047B2 (en) 2015-02-25 2017-05-16 Daqri, Llc Visual gestures for a head mounted device
WO2016138178A1 (en) * 2015-02-25 2016-09-01 Brian Mullins Visual gestures for a head mounted device
USD864959S1 (en) 2017-01-04 2019-10-29 Mentor Acquisition One, Llc Computer glasses
USD918905S1 (en) 2017-01-04 2021-05-11 Mentor Acquisition One, Llc Computer glasses
USD947186S1 (en) 2017-01-04 2022-03-29 Mentor Acquisition One, Llc Computer glasses
EP3438782A1 (en) * 2017-08-01 2019-02-06 Leapsy International Ltd. Wearable device with thermal imaging function
US11061240B2 (en) 2017-08-08 2021-07-13 Sony Interactive Entertainment Inc. Head-mountable apparatus and methods
WO2019030467A1 (en) * 2017-08-08 2019-02-14 Sony Interactive Entertainment Inc. Head-mountable apparatus and methods
DE102020100072A1 (en) * 2020-01-03 2021-07-08 Bayerische Motoren Werke Aktiengesellschaft Method and system for displaying a list as augmented reality

Also Published As

Publication number Publication date
GB201116467D0 (en) 2011-11-09

Similar Documents

Publication Publication Date Title
GB2494907A (en) A Head-mountable display with gesture recognition
US10082940B2 (en) Text functions in augmented reality
US9442567B2 (en) Gaze swipe selection
CN103858073B (en) Augmented reality device, method of operating augmented reality device, computer-readable medium
US20200004401A1 (en) Gesture-based content sharing in artifical reality environments
US9035878B1 (en) Input system
US9058054B2 (en) Image capture apparatus
US9405977B2 (en) Using visual layers to aid in initiating a visual search
CN110018736B (en) Object augmentation via near-eye display interface in artificial reality
EP3788459B1 (en) Creating interactive zones in virtual environments
US8970452B2 (en) Imaging method
CN110546601B (en) Information processing device, information processing method, and program
US20130021374A1 (en) Manipulating And Displaying An Image On A Wearable Computing System
JP2017102768A (en) Information processor, display device, information processing method, and program
US20190026589A1 (en) Information processing device, information processing method, and program
KR102110208B1 (en) Glasses type terminal and control method therefor
US11567569B2 (en) Object selection based on eye tracking in wearable device
US20230060453A1 (en) Electronic device and operation method thereof
KR20160055407A (en) Holography touch method and Projector touch method
KR20240028897A (en) METHOD AND APPARATUS FOR DISPLAYING VIRTUAL KEYBOARD ON HMD(head mounted display) DEVICE
KR20240030881A (en) Method for outputting a virtual content and an electronic device supporting the same
KR101591038B1 (en) Holography touch method and Projector touch method
CN111061372A (en) Equipment control method and related equipment
KR20160002620U (en) Holography touch method and Projector touch method
KR20160014091A (en) Holography touch technology and Projector touch technology

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)