CN204347750U - head-mounted display apparatus - Google Patents
- Publication number: CN204347750U
- Application number: CN201420603263.7U
- Authority
- CN
- China
- Prior art keywords
- sensor
- head
- display apparatus
- mounted display
- instruction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The utility model discloses a head-mounted display apparatus comprising a processor, a first sensor, a second sensor, a display device, and an image acquisition device. The first sensor is arranged on the outside of the head-mounted display apparatus and obtains state information about an opaque object located on one side of the apparatus. The second sensor is arranged on the inside of the apparatus and obtains state information about the user's eyes. The processor generates a corresponding interactive instruction from the state information of the opaque object and an image acquisition instruction from the state information of the eyes; the interactive instruction controls the display device to show the corresponding interactive interface, and the image acquisition instruction controls the image acquisition device to capture an image. The processor does not generate an image acquisition instruction while the first sensor detects an opaque object. The apparatus can therefore be controlled simultaneously by gesture instructions and eye-movement instructions while ensuring that the two kinds of instruction do not interfere with each other.
Description
Technical field
The utility model relates to the technical field of electronic equipment, and in particular to a head-mounted display apparatus that provides both gesture recognition and blink-triggered photographing.
Background technology
With the rapid development of processor and sensor chips for computing equipment, people have become interested in augmented reality, which merges information generated by electronic products with the user's perception of the physical world, giving rise to electronic products and devices based on this integration. The miniaturization of computing hardware, peripherals, sensors, detectors, image and audio processors, and other technologies has helped open a field known as "wearable computing". Wearable displays have emerged in this field: a very small image-display element is placed close enough to one or both of the wearer's eyes that the displayed image fills, or almost fills, the field of view and appears at an ordinary size, as if presented on a conventional display. The related technology is called a "near-to-eye display". Near-to-eye displays are the basic module of wearable displays, which are also sometimes called head-mounted displays (HMDs). A head-mounted display places one or more graphic displays close to one or both of the wearer's eyes. A computer processing system may be used to generate the images on the display. The display may occupy the wearer's entire field of view or only part of it. A head-mounted display may be as small as a pair of glasses or as large as a helmet. A wearable device can use cameras and/or photoelectric sensors to detect objects in the user's field of view, use microphones and/or sensors to detect what the user is hearing and saying, use various other sensors to collect information about the user's surroundings, and, with integrated biosensors, monitor the user's vital signs in real time; the collected data is then analyzed and the results are presented to the user. As technology develops, wearable display devices integrate more and more functions, and coordinating these functions is increasingly complex.
Utility model content
In view of this, a head-mounted display apparatus is provided that can be controlled simultaneously by gesture instructions and eye-movement instructions while ensuring that the two kinds of instruction do not interfere with each other.
The head-mounted display apparatus of the utility model comprises a processor, a first sensor, a second sensor, a display device, and an image acquisition device.

The first sensor, the second sensor, the display device, and the image acquisition device are all connected to the processor.

The first sensor is arranged on the outside of the head-mounted display apparatus and obtains state information about an opaque object located on one side of the head-mounted display apparatus.

The second sensor is arranged on the inside of the head-mounted display apparatus and obtains state information about the user's eyes.

The processor generates a corresponding interactive instruction from the state information of the opaque object and an image acquisition instruction from the state information of the eyes; the interactive instruction controls the display device to show the corresponding interactive interface, and the image acquisition instruction controls the image acquisition device to capture an image.

The processor does not generate an image acquisition instruction while the first sensor detects an opaque object.
Preferably, the second sensor is an infrared sensor comprising an infrared emitting end and an infrared receiving end.

Preferably, when the eye state information detected by the second sensor indicates that the user's eyes have remained closed for longer than a first time threshold and have then returned to the open state, the processor judges whether the first sensor has detected an opaque object, and generates the image acquisition instruction only if no opaque object is detected.

Preferably, when the eye state information detected by the second sensor indicates that the user's eyes have switched between the closed and open states a predetermined number of times within a predetermined period, the processor judges whether the first sensor has detected an opaque object, and generates the image acquisition instruction only if no opaque object is detected.

Preferably, when the eye state information detected by the second sensor indicates that the user's eyes have remained closed or open for longer than a second time threshold, the processor generates a sleep instruction that controls the head-mounted display apparatus to enter a sleep state.

Preferably, the opaque object is a human hand.

Preferably, the state information of the opaque object is the movement direction and velocity of the opaque object, and the processor generates an interactive instruction from that direction and velocity to control the movement of a cursor on the interactive interface.

Preferably, the first sensor is a video camera, an infrared sensor, or an optical sensor.
By arranging different types of sensors on the inner and outer sides of the head-mounted display apparatus, the apparatus can be controlled simultaneously by gesture instructions and eye-movement instructions while ensuring that the two kinds of instruction do not interfere with each other.
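The non-interference rule above can be sketched as a small dispatch routine. This is an illustrative sketch only: the Boolean inputs and instruction names are assumptions, not part of the utility model text.

```python
def dispatch(opaque_object_detected: bool, deliberate_blink: bool) -> list:
    """One polling cycle of the processor's instruction generation (sketch).

    A detected opaque object (e.g. the wearer's hand) yields an interactive
    instruction; a deliberate blink yields an image-acquisition instruction,
    but only while no opaque object could occlude the camera.
    """
    instructions = []
    if opaque_object_detected:
        instructions.append("interactive")        # drive the interactive interface
    elif deliberate_blink:
        instructions.append("image_acquisition")  # suppressed while a hand is present
    return instructions
```

Because the opaque-object branch takes priority, a blink arriving while a hand is in front of the device can never trigger image acquisition, which is exactly the separation of the two instruction channels claimed above.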
Accompanying drawing explanation
The above and other objects, features, and advantages of the utility model will become clearer from the following description of its embodiments with reference to the accompanying drawings, in which:

Fig. 1 is a circuit block diagram of the head-mounted display apparatus according to an embodiment of the utility model.
Embodiment
The utility model is described below on the basis of embodiments, but it is not restricted to these embodiments. The following description covers some specific details; a person skilled in the art can still fully understand the utility model without them. To avoid obscuring the essence of the utility model, well-known methods, processes, flows, elements, and circuits are not described in detail.

In addition, those skilled in the art should understand that the accompanying drawings are provided for illustrative purposes and are not necessarily drawn to scale.

Unless the context clearly requires otherwise, words such as "comprise" and "include" throughout the description and claims should be interpreted in an inclusive rather than an exclusive or exhaustive sense; that is, in the sense of "including but not limited to".

In the description of the utility model, it should be understood that the terms "first", "second", and the like are used only for descriptive purposes and are not to be interpreted as indicating or implying relative importance. In addition, unless otherwise noted, "multiple" means two or more.
Fig. 1 is a circuit block diagram of the head-mounted display apparatus of a first embodiment of the utility model. As shown in Fig. 1, the head-mounted display apparatus 10 comprises a processor 11, a first sensor 12, a second sensor 13, a display device 14, and an image acquisition device 15.

The first sensor 12, second sensor 13, display device 14, and image acquisition device 15 can be connected to the processor 11 by any suitable wired or wireless communication system, enabling information transmission and command exchange among them.

In a preferred embodiment, the processor 11 can be arranged on the side of the head-mounted display apparatus.

The first sensor 12 detects an opaque object within the wearer's front field of view, along with the object's movement, and sends the detected state of the opaque object to the processor 11. The opaque object can be a human hand, or a device dedicated to control, such as a stylus. The first sensor 12 can be arranged on the outside of the front end of the frame of the head-mounted display apparatus. It can be any equipment suitable for detecting characteristics of an object; the detectable characteristics can include the object's position, color, size, and so on. The first sensor 12 can take the form of a video camera, an infrared sensor, an optical sensor, or any existing or future image sensor or combination of image sensors.
The processor 11 receives the status signal of the opaque object detected by the first sensor 12, judges from this signal whether a predetermined gesture has been detected, and then generates a corresponding gesture recognition signal.

For example, suppose the head-mounted display apparatus 10 uses the movement of a human hand as the detection target of the first sensor 12. When the first sensor 12 detects an image of a hand waving from side to side, the processor 11 can judge from color and shape features whether the detected opaque object is a hand; if so, it further tracks the movement direction of the hand image, judges the waving form (for example, waving in a predetermined direction) from that movement direction, and generates the corresponding gesture recognition information (for example, moving the displayed picture or menu in the predetermined direction).

Of course, those of ordinary skill in the art will readily recognize that, using existing gesture recognition technology, the processor 11 can recognize more complex gesture instructions, such as zoom in, zoom out, and click, from the image of the opaque object detected by the first sensor 12.

After generating the corresponding gesture recognition information, the processor 11 controls the display device 14 to display accordingly, for example by moving the displayed image.
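As an illustration of how tracked hand motion could drive the interface, the following sketch maps a per-frame hand displacement to cursor movement. The gain constant and all names are assumptions for illustration; the utility model does not specify them.

```python
GAIN = 1.5  # cursor pixels moved per pixel of detected hand displacement (assumed)

def move_cursor(cursor, hand_delta):
    """Return the new cursor position given the hand's (dx, dy) displacement.

    `cursor` is the current (x, y) position on the interactive interface;
    `hand_delta` is the per-frame displacement reported by the first sensor.
    """
    dx, dy = hand_delta
    return (cursor[0] + GAIN * dx, cursor[1] + GAIN * dy)
```

For example, a hand moving 10 pixels right and 4 pixels up per frame would move the cursor 15 pixels right and 6 pixels up under the assumed gain.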
The display device 14 is installed at the front end of the frame of the head-mounted display apparatus, in front of the user's eyes. It displays the interactive interface of the apparatus and carries out human-machine interaction under the control of the processor 11.

The second sensor 13 can be arranged on the inner side of the front end of the frame of the head-mounted display apparatus 10 to monitor the user's eye movements.

The second sensor 13 can take the form of a gray-scale sensor, an infrared sensor, an optical sensor, any sensor now known or developed in the future, or a combination of sensors.

In the present embodiment, the second sensor 13 is an infrared sensor comprising an emitting end and a receiving end. The emitting end continuously emits weak infrared light toward the user's eyeball, and the receiving end receives reflected infrared light. When the user's eyes are open, the infrared light is absorbed by the eyeball and none is reflected; when the user closes the eyes, the eyeball is covered by the eyelid, and part of the infrared light is reflected and detected by the receiving end. Detection of reflected infrared light therefore indicates that the user's eyes are closed. The infrared sensor transmits the detected reflection state and its duration to the processor 11, which can thus determine the state of the user's eyes.

Of course, the second sensor 13 can also be a sensor of another type, such as a gray-scale sensor that judges the state of the user's eyes from changes in the image of the eye region.
The processor 11 uses the duration of reflected infrared light detected by the receiving end of the second sensor 13 to judge whether a blink is the user's instinctive blink in the natural state or a deliberate blink. If the duration of the reflected infrared light exceeds a first threshold time, the blink is considered deliberate; if it is less than the threshold time, the blink is considered instinctive. The first threshold time can be set by the user.
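The duration test just described can be sketched as follows. The 0.4 s value is a placeholder assumption, since the utility model leaves the first threshold time user-configurable.

```python
FIRST_THRESHOLD_S = 0.4  # assumed placeholder; the device lets the user set this

def classify_blink(closed_duration_s: float) -> str:
    """Classify a blink from how long reflected IR (eye closed) was detected."""
    if closed_duration_s > FIRST_THRESHOLD_S:
        return "deliberate"   # long closure: treated as a command
    return "instinctive"      # short closure: natural blink, ignored
```

Only blinks classified as deliberate go on to generate control instructions; instinctive blinks are filtered out, which is what keeps natural blinking from triggering the camera.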
When the processor 11 judges that the second sensor 13 has detected a deliberate blink, it generates a corresponding control instruction that makes the display device 14 or the image acquisition device 15 perform the corresponding operation.

For example, a blink can correspond to a confirm instruction: after a gesture moves the cursor onto an object to be clicked, a blink performs the click or confirm operation. The head-mounted display apparatus is thus controlled by the combination of gesture and eye movement.

As another example, a blink can correspond to a photographing action: when the processor 11 judges that the second sensor 13 has detected a deliberate blink, it generates a photographing instruction that controls the image acquisition device 15 to perform an image acquisition operation.

Meanwhile, the second sensor 13 can also be used to detect whether the head-mounted display apparatus is being worn. As mentioned above, in the worn state the second sensor 13 can detect the user's blinks; if the state it passes to the processor 11 indicates eyes open or eyes closed for a long time, the apparatus is not being worn, because a wearer would not normally keep the eyes open or closed for a long period.

Thus, if the eyes-open or eyes-closed state signal transmitted by the second sensor 13 lasts longer than a second threshold, the processor 11 judges that the head-mounted display apparatus is not being worn, generates a sleep instruction, and puts the apparatus to sleep.
During sleep, the first sensor 12, second sensor 13, display device 14, and image acquisition device 15 can be switched off to save power.
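A minimal sketch of this not-worn heuristic, with an assumed value for the second time threshold (the utility model does not fix one):

```python
SECOND_THRESHOLD_S = 60.0  # assumed value for the second time threshold

def should_sleep(unchanged_eye_state_s: float) -> bool:
    """True when the eyes have stayed continuously open or closed for so long
    that the apparatus is judged not to be worn and should enter sleep."""
    return unchanged_eye_state_s > SECOND_THRESHOLD_S
```

The same predicate works for both the long-open and long-closed cases, since either unchanging state indicates the device is off the user's head.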
The image acquisition device 15 is installed on the frame of the head-mounted display apparatus and is oriented to track the wearer's front field of view as its frame of reference. After receiving an image acquisition instruction from the processor 11, it performs an image acquisition operation.

Thus, by providing the first sensor 12 and the second sensor 13, both interacting with the processor 11, the head-mounted display apparatus can be controlled by gestures and eye movements simultaneously.

However, if photographing is triggered by a blink while gesture control is also in progress, the moving hand or other control device may block the image acquisition device 15 and cause the photographing operation to fail.

Thus, when the second sensor 13 detects a blink that satisfies the condition, the processor 11 judges whether the first sensor 12 has detected an opaque object; if so, it does not generate the photographing instruction. Preferably, the display device 14 can also show a prompt that an obstruction has been detected and a photograph cannot be taken.
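The occlusion interlock described above can be sketched as below; the prompt string and function name are illustrative assumptions.

```python
def try_capture(deliberate_blink: bool, opaque_object_detected: bool):
    """Return (captured, prompt): refuse to photograph while an opaque object
    in front of the first sensor could block the image acquisition device."""
    if not deliberate_blink:
        return (False, None)  # no capture request at all
    if opaque_object_detected:
        # Hand or stylus in front of the camera: suppress the photo and warn.
        return (False, "Obstruction detected; cannot take a photograph")
    return (True, None)
```

This mirrors the behavior in the preceding paragraph: the blink requests the photo, the outward sensor vetoes it, and the display device surfaces the reason for the refusal.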
By arranging different types of sensors on the inner and outer sides of the head-mounted display apparatus, the apparatus can be controlled simultaneously by gesture instructions and eye-movement instructions while ensuring that the two kinds of instruction do not interfere with each other.
The foregoing describes only preferred embodiments of the utility model and does not limit it; for those skilled in the art, the utility model may be varied and changed in many ways. Any modification, equivalent replacement, or improvement made within the spirit and principles of the utility model shall be included within its scope of protection.
Claims (5)
1. A head-mounted display apparatus, comprising a processor, a first sensor, a second sensor, a display device, and an image acquisition device;
the first sensor, the second sensor, the display device, and the image acquisition device all being connected to the processor;
the first sensor being arranged on the outside of the head-mounted display apparatus and obtaining state information about an opaque object located on one side of the head-mounted display apparatus;
the second sensor being arranged on the inside of the head-mounted display apparatus and obtaining state information about the user's eyes.
2. The head-mounted display apparatus according to claim 1, wherein the second sensor is an infrared sensor comprising an infrared emitting end and an infrared receiving end.
3. The head-mounted display apparatus according to claim 1, wherein the opaque object is a human hand.
4. The head-mounted display apparatus according to claim 1, wherein the state information of the opaque object is the movement direction and velocity of the opaque object, and the processor generates an interactive instruction from that direction and velocity to control the movement of a cursor on the interactive interface.
5. The head-mounted display apparatus according to claim 1, wherein the first sensor is a video camera, an infrared sensor, or an optical sensor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201420603263.7U CN204347750U (en) | 2014-10-17 | 2014-10-17 | head-mounted display apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN204347750U true CN204347750U (en) | 2015-05-20 |
Family
ID=53231008
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201420603263.7U Active CN204347750U (en) | 2014-10-17 | 2014-10-17 | head-mounted display apparatus |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN204347750U (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106681514A (en) * | 2017-01-11 | 2017-05-17 | 广东小天才科技有限公司 | Virtual reality equipment and implementation method thereof |
CN109828660A (en) * | 2018-12-29 | 2019-05-31 | 深圳云天励飞技术有限公司 | A kind of method and device of the control application operating based on augmented reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C41 | Transfer of patent application or patent right or utility model | ||
TR01 | Transfer of patent right |
Effective date of registration: 2016-05-13
Address after: Room 201, Building A, No. 1 Qianwan Road, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong, 518000
Patentee after: Shenzhen Ding Technology Co., Ltd.
Address before: Room 1303, Building 2, Yuanyang, Chaoyang District, Beijing, 100022
Patentee before: Li Yan