WO2019139289A1 - Auxiliary device for virtual environment - Google Patents
Auxiliary device for virtual environment
- Publication number
- WO2019139289A1 (application PCT/KR2018/016766)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- unit
- virtual environment
- recognition unit
- gesture
- Prior art date
Classifications
- G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
- G02B27/017 — Head-up displays; head mounted
- G06F3/005 — Input arrangements through a video camera
- G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06V40/20 — Movements or behaviour, e.g. gesture recognition
- H04R5/033 — Headphones for stereophonic communication
Definitions
- the present invention relates to a virtual environment auxiliary device.
- the virtual environment encompasses virtual reality (VR), augmented reality (AR), and mixed reality (MR).
- conventional gesture input devices include the Kinect sensor, a stationary gesture sensor, and the wearable Myo armband electromyography (EMG) sensor.
- an apparatus for use in providing a virtual environment to a user, comprising: a body part worn around the user's neck or head; a position recognition unit for recognizing the position of the user to be reflected in the virtual environment; and a gesture recognition unit for recognizing a gesture of the user for input to the virtual environment, wherein the position recognition unit is a stereo type provided on both sides of the body part.
- the body part may have the form of a 'U'-shaped necklace that wraps around the user's neck and is open at the front, and the position recognition units may be provided at both ends of the body part.
- the gesture recognition unit may be formed on at least one of the two ends of the body part, and may be provided at the ends so as to face forward and downward.
- a display unit may further be included, formed on at least one of the two ends, for displaying the virtual environment visibly to the user.
- the body part may further include a pair of switching units for adjusting the two ends so that each can be tilted up and down; the pair of switching units may be formed on the body part between each end and the central portion defined at the center between the two ends.
- a pair of sound units may extend from the body part, connect to the user's ears, and transmit sound to the user.
- a communication terminal may be formed at the central portion of the body part to transmit to and receive from the outside the signals recognized by the position recognition unit and the gesture recognition unit.
- a microphone unit may be formed at both ends of the body part to receive the user's voice signal.
- the position recognizing unit is formed of a camera
- the gesture recognizing unit is formed of an image sensor and an LED light
- the display unit may be a projector that projects the virtual environment onto an external surface.
- the body part may have a 'U'-shaped headphone form to be worn on the user's head, with a pair of sound units formed at the two ends of the 'U' shape so as to cover the user's ears and transmit sound to the user.
- the position recognizing unit may be provided in front of each of the pair of sound units so that the position recognizing unit faces forward.
- the gesture recognition unit may be provided in front of at least one of the pair of sound units, facing forward and downward.
- a display unit may further be included, formed in a 'U' shape on the pair of sound units to be worn on the user's head, displaying the virtual environment visibly to the user, and formed to be rotatable from the top of the user's head down to the user's eyes, with the sound units as the rotation reference points.
- a communication terminal formed on the body part to transmit to and receive from the outside the signals recognized by the position recognition unit and the gesture recognition unit, and a microphone unit extending from the ends of the body part to the user's mouth to receive the user's voice signal, may further be included.
- the position recognizing unit is formed of a camera
- the gesture recognizing unit is formed of an image sensor and an LED light
- the display unit may be a screen that presents the virtual environment directly to the user's eyes.
- the virtual environment auxiliary apparatus has the effect that, because the user wears the stereo-type position recognition unit, the user's position can be recognized even in a space where no cameras have been installed in advance.
- the virtual environment auxiliary apparatus has the effect that the gesture recognition unit, provided facing downward below the user's head, allows gesture input without raising the hands, improving convenience.
- FIG. 1 is a view showing a user wearing a virtual environment auxiliary apparatus according to an embodiment of the present invention.
- FIG. 2 is a side view of a virtual environment auxiliary apparatus according to an embodiment of the present invention.
- FIG. 3 is a plan view of a virtual environment assisting apparatus according to an embodiment of the present invention.
- FIG. 4 is a front view of a virtual environment assisting apparatus according to an embodiment of the present invention.
- FIG. 5 is a view showing an operation range after the user wears the virtual environment auxiliary device according to an embodiment of the present invention.
- FIG. 6 is a view showing a user wearing a virtual environment auxiliary apparatus according to another embodiment of the present invention.
- FIG. 7 is a diagram illustrating an operation of a display unit in a virtual environment auxiliary apparatus according to another embodiment of the present invention.
- It should be noted that, in the present specification, reference numerals are added to the constituent elements of the drawings, and the same elements are given the same numerals as far as possible even when they appear in different drawings. Detailed descriptions of well-known functions or constructions are omitted where they would obscure the invention with unnecessary detail.
- the virtual environment described in the present invention is a concept that includes virtual reality, augmented reality, and mixed reality, in which virtual and real elements are combined.
- preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.
- FIG. 2 is a side view of a virtual environment assisting apparatus according to an embodiment of the present invention.
- FIG. 3 is a perspective view of a virtual environment assisting apparatus according to an embodiment of the present invention.
- FIG. 4 is a front view of a virtual environment assisting apparatus according to an embodiment of the present invention,
- FIG. 5 is a view showing an operation range after wearing the virtual environment assisting apparatus according to an embodiment of the present invention .
- a virtual reality auxiliary apparatus 10 is an apparatus used to provide a virtual environment to a user U, and includes a body part 11, a position recognition unit 12, a gesture recognition unit 13, a display unit 14, a switching unit 15, a sound unit 16, a communication terminal 17, and a microphone unit 18, as shown in FIG. 1.
- A virtual reality auxiliary apparatus 10 according to an embodiment of the present invention will now be described in detail with reference to FIGS. 1 to 5.
- the body part 11 is worn around the neck N of the user U and may have the form of a 'U'-shaped necklace that wraps around the neck N and is open at the front.
- the body part 11 may include a central portion 111 that abuts the back of the user's neck N, and two end portions 112 extending forward from the central portion 111 on the left and right. That is, the central portion 111 is located at the center between the two end portions 112.
- because the virtual environment auxiliary apparatus 10 is formed as a necklace that hangs on the neck N, the load applied to the head H of the user U can be reduced.
- the position recognition unit 12 recognizes the position of the user U for reflection in the virtual environment, and is formed as a stereo type provided on both the left and right sides of the body part 11.
- the position recognizing unit 12 may be formed to face forward at both ends 112 of the body part 11 and may be formed as a pair.
- the position recognition unit 12 may be implemented as a camera.
- the position recognition unit 12 may be formed of RGB cameras arranged as a stereo pair to cover the range C1 shown in FIG. 5; by perceiving the surrounding environment widely, the position of the user U can be recognized accurately.
- the position recognition unit 12 captures the external environment of the user U from the left and right sides in the form of images and transfers them to the communication terminal 17, which can analyze them and supply the result to the outside.
- the communication terminal 17 may include an image analysis unit (not shown), an image correction unit (not shown), and a depth map generation unit.
- the image analysis unit analyzes the images of the user's external environment obtained from the left and right sides.
- based on the images obtained over the range C1 of the external environment of the user U, the image analysis unit can analyze parameters such as roll, tilt, height, convergence, optical axis, zoom, focus, iris, depth, luminance, chrominance, and gamut differences.
- the analysis results are transmitted to the image correction unit, which geometrically aligns the left and right images and matches their colors based on the analysis results.
- the matched results are received by the depth map generation unit and converted into three-dimensional coordinates, and a dense or semi-dense disparity map is generated to accurately determine the position of the user U.
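The disparity-to-depth step described above can be sketched as follows. This is an illustrative reconstruction, not an implementation from the patent: the focal length and camera baseline are assumed values, and a real device would first obtain disparities via stereo matching on the rectified left/right frames.

```python
import numpy as np

# Hypothetical parameters for the stereo pair formed by the two
# position recognition units (12); both values are illustrative
# assumptions, not figures stated in the patent.
FOCAL_PX = 700.0    # assumed focal length in pixels
BASELINE_M = 0.14   # assumed left/right camera separation in metres

def disparity_to_depth(disparity_px):
    """Triangulate depth (metres) from stereo disparity (pixels)."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return FOCAL_PX * BASELINE_M / disparity_px

# A sparse disparity map for three matched features:
# larger disparity means the point is closer to the cameras.
disparities = np.array([49.0, 98.0, 196.0])
depths = disparity_to_depth(disparities)
```

Triangulating every matched pixel this way yields the dense (or semi-dense) disparity map from which the user's position relative to the environment can be derived.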
- because the position recognition unit 12 is formed as a stereo type, the present invention has the effect that the position of the user U can be recognized without installing a separate camera in the space in advance.
- the position information of the user U recognized in this manner can be transmitted by the communication terminal 17 to an external device (e.g., a large-screen display or a head-mounted display) and output.
- the gesture recognition unit 13 recognizes the gesture of the user U for input to the virtual environment.
- the gesture recognition unit 13 is formed on at least one of the end portions 112 of the body part 11, and may be provided at the ends 112 so as to face forward and downward.
- because the gesture recognition unit 13 is provided facing downward at the end portions 112 of the body part 11, it can recognize the gesture of the user U very effectively over the range C2 shown in FIG. 5.
- because the gesture recognition unit 13 faces forward and downward at the front of the body part 11, it has the effect that the gesture of the user U can be recognized even when the user's hands are comfortably lowered, without raising them.
- accordingly, the present invention allows the virtual environment to be used conveniently and comfortably while reducing fatigue.
- the gesture recognition unit 13 may be formed of an image sensor 131 and an LED light 132. The gesture recognition unit 13 can sense regions outside the visible spectrum, that is, the ultraviolet or infrared region.
- the gesture recognition unit 13 can recognize near-field gestures within approximately 1.2 m of the user U, and in one case the recognition range may extend to approximately 180 degrees.
- in the present invention, gesture recognition units 13 may be provided at both end portions 112 of the body part 11, extending the recognition range to approximately 180 degrees or more.
- the gesture recognition unit 13 may include a camera capable of generating three-dimensional depth information of the movement of the hand of the user U for a predetermined period of time.
- the gesture recognition unit 13 can recognize the images photographed by the camera, capturing the hand motions of the user U without missing them.
- the gesture recognition unit 13 transmits the photographed hand motion information of the user U to the communication terminal 17, which communicates with the outside so that a virtual gesture image of the user's hand can be generated as multi-point 3D content.
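As an illustrative sketch (not taken from the patent), the image-sensor-plus-LED arrangement above can be approximated in its simplest form: under infrared illumination, a nearby hand appears as a bright blob against a darker background, so thresholding an IR frame and taking the centroid of the bright pixels gives a crude hand position. The threshold, frame size, and synthetic frame below are all assumptions.

```python
import numpy as np

def find_hand_centroid(ir_frame, threshold=200):
    """Return the (row, col) centroid of bright pixels, or None if no hand."""
    mask = ir_frame >= threshold        # bright = close to the IR LED
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return (rows.mean(), cols.mean())

# Synthetic 8x8 IR frame with a bright 2x2 "hand" patch
# at rows 2-3, columns 5-6.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:4, 5:7] = 255
centroid = find_hand_centroid(frame)
```

A real device would of course run a full hand-tracking pipeline over depth frames; this sketch only shows the detection principle enabled by active IR illumination.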
- the recognized gesture information of the user U can likewise be transmitted by the communication terminal 17 to an external device (for example, a large-screen display or a head-mounted display) and output.
- the display unit 14 is formed on at least one of the two end portions 112 of the body part 11 and displays the virtual environment visibly to the user U.
- the display unit 14 may be a projector that projects the virtual environment onto an external surface.
- the switching units 15 adjust the angle so that the two end portions 112 of the body part 11 can be tilted up and down, and a pair of them may be formed between the central portion 111 and each end portion 112 of the body part 11.
- by adjusting the two end portions 112 to tilt vertically, the switching units 15 allow the recognition ranges of the position recognition unit 12 and the gesture recognition unit 13, and the projection range of the display unit 14, to be adjusted freely.
- when tilted in the direction A1, the end portions 112 of the body part 11 are inclined downward; when tilted in the direction A2, the end portions 112 are inclined upward.
- the switching units 15 can be angle-adjusted for tilting using a hydraulic system, a mechanical system using elasticity, or an electronic system.
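A minimal sketch of what the tilt adjustment achieves: tilting an end portion shifts the vertical field of view of the sensor mounted on it, so a downward tilt lets the gesture sensor see lowered hands. The 60-degree sensor FOV and the tilt angles below are illustrative assumptions, not values from the patent.

```python
SENSOR_FOV_DEG = 60.0  # assumed vertical field of view of the mounted sensor

def coverage_interval(tilt_deg, fov_deg=SENSOR_FOV_DEG):
    """Elevation angles (low, high) covered by a sensor tilted by tilt_deg.

    0 deg points straight ahead; negative angles look below the horizon.
    """
    half = fov_deg / 2.0
    return (tilt_deg - half, tilt_deg + half)

level = coverage_interval(0.0)          # end portion held level
tilted_down = coverage_interval(-20.0)  # end portion tilted downward (direction A1)
```

With the end portion level, the sensor covers -30 to +30 degrees; tilting it 20 degrees downward shifts the whole window to -50 to +10 degrees, reaching hands held at the user's sides.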
- the sound units 16 may be formed as a pair extending from the body part 11, connected to the user's ears, and transmitting sound to the user U.
- the sound unit 16 may have the form of a wire earphone, but the present invention is not limited thereto.
- the sound unit 16 may take the form of a wireless earphone using wireless communication provided in the body part 11.
- the wireless communication method may be Wi-Fi or Bluetooth.
- the sound unit 16 may be connected to the communication terminal 17 through a wired or wireless connection, and may receive sound from the outside and transmit the sound to the user U's ear.
- the communication terminal 17 is formed at the central portion 111 of the body portion 11 and is capable of transmitting and receiving signals recognized from the position recognition unit 12 and the gesture recognition unit 13 to the outside.
- the communication terminal 17 can be connected to the position recognition unit 12 and the gesture recognition unit 13 by wire or wirelessly to exchange information, and can also exchange information with external devices (for example, a head-mounted display (HMD)) through a wired or wireless connection.
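The patent does not specify a wire protocol for this exchange, so as a hedged sketch, the kind of message the communication terminal 17 might send to an external display or HMD could look like the following; the field names and JSON framing are assumptions for illustration only.

```python
import json

def encode_packet(position, gesture):
    """Serialize recognized position and gesture data into a JSON packet."""
    return json.dumps({
        "type": "sensor_update",   # hypothetical message type
        "position": position,      # e.g. user coordinates from the stereo unit
        "gesture": gesture,        # e.g. a recognized gesture label
    }).encode("utf-8")

def decode_packet(raw):
    """Parse a packet back into a dictionary on the receiving device."""
    return json.loads(raw.decode("utf-8"))

packet = encode_packet({"x": 0.1, "y": 1.5, "z": 0.0}, "swipe_left")
decoded = decode_packet(packet)
```

Either Wi-Fi or Bluetooth (the transports named above) could carry such packets; the encoding shown is transport-agnostic.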
- the communication terminal 17 may further include a feedback information generating unit in addition to the image analyzing unit, the image correcting unit, and the depth map generating unit described above.
- the feedback information generation unit may generate feedback information on a touch or click when the virtual hand of the user U touches or clicks 3D content.
- the feedback information refers to information on a control command generated when the virtual hand of the user U touches or clicks the 3D content.
- for example, when the virtual hand of the user U touches or clicks the play icon of a specific video implemented as multi-point 3D content, the information on the control command to play that video is the feedback signal.
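A minimal sketch of that feedback step, under stated assumptions: the icon positions, hit radius, and command names below are invented for illustration, since the patent describes only the principle of mapping a virtual touch to a control command.

```python
# Hypothetical 3D icon layout for multi-point 3D content; coordinates are
# in metres in the user's virtual space, all values illustrative.
ICONS = {
    "play": {"center": (0.0, 1.2, 0.8), "radius": 0.05, "command": "PLAY_VIDEO"},
    "stop": {"center": (0.3, 1.2, 0.8), "radius": 0.05, "command": "STOP_VIDEO"},
}

def feedback_for_touch(fingertip):
    """Return the control command for the icon the fingertip touches, if any."""
    for icon in ICONS.values():
        # Squared distance from the fingertip to the icon center.
        dist2 = sum((a - b) ** 2 for a, b in zip(fingertip, icon["center"]))
        if dist2 <= icon["radius"] ** 2:
            return icon["command"]
    return None

cmd = feedback_for_touch((0.01, 1.21, 0.79))  # fingertip near the play icon
miss = feedback_for_touch((1.0, 1.0, 1.0))    # fingertip far from any icon
```

The returned command string stands in for the "information on a control command" that the feedback information generation unit would pass onward.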
- the microphone units 18 are formed at both end portions 112 of the body part 11 and can receive the voice signal of the user U.
- the microphone unit 18 can be connected to the communication terminal 17 by wire or wirelessly and can transmit the voice signal of the user U to the communication terminal 17.
- the embodiment of the present invention may further include a power supply unit B1 and an operation button B2.
- the power supply unit B1 may be formed at one side of the body part 11 to store and supply the power for driving the virtual environment assisting apparatus 10 of the present invention.
- the operation button B2 may be provided on one side of the body part 11 to switch the power supplied from the power supply unit B1 on and off, and to drive and control the apparatus.
- operations such as raising or lowering the volume of the sound units 16 may also be implemented, but the present invention is not limited thereto.
- FIG. 6 is a view showing a user wearing a virtual environment auxiliary apparatus according to another embodiment of the present invention.
- FIG. 7 is an operational view illustrating operation of a display unit in a virtual environment auxiliary apparatus according to another embodiment of the present invention.
- the virtual reality auxiliary device 20 is an apparatus used to provide a virtual environment to a user U and includes a body part 21, a position recognition unit 22, a gesture recognition unit 23, a sound unit 26, and a display unit 29.
- The virtual reality auxiliary device 20 according to another embodiment of the present invention will be described in detail with reference to FIGS. 6 and 7.
- the body portion 21 may have a U-shaped headphone shape to be worn on the head H of the user U.
- the body part 21 may include a central portion 211 that abuts the crown of the user's head when worn on the head H of the user U, and two end portions 212 extending downward from the central portion 211 on the left and right. That is, the central portion 211 is located at the center between the two end portions 212.
- the position recognition units 22 may be provided at the front of each of the pair of sound units 26, facing forward.
- the position recognition unit 22 is the same as the position recognition unit 12 described for the virtual environment auxiliary apparatus 10 of FIGS. 1 to 5, to the extent not inconsistent with the above; further description is therefore omitted.
- the gesture recognition section 23 may be provided so as to face forward and downward in front of at least one of the pair of sound sections 26.
- the gesture recognition unit 23 may be formed of an image sensor 231 and an LED light 232.
- the gesture recognition unit 23 is the same as the gesture recognition unit 13 described for the virtual environment auxiliary apparatus 10 of FIGS. 1 to 5, to the extent not inconsistent with the above; further description is therefore omitted.
- because the gesture recognition unit 23 is provided facing downward below the user's head, the virtual environment can be used very conveniently and comfortably while relieving fatigue, unlike conventional devices in which the user U must keep the hands raised, stiffening the joints during long use.
- the sound units 26 are formed as a pair covering the ears of the user U at the two end portions 212 formed by the 'U' shape of the body part 21, and transmit sound to the user.
- the sound unit 26 may be connected to a communication terminal through a wired or wireless connection, and may receive sound from the outside and transmit the sound to the user U's ear.
- the display unit 29 is formed in a 'U' shape to be worn on the head H of the user U, on the pair of sound units 26. It displays the virtual environment visibly to the user U, and is formed to be rotatable from the top of the user's head H down to the user's eyes I, with the sound units 26 as the rotation reference points.
- the display unit 29 is formed to at least partially overlap the body part 21, and by the rotation shown in FIG. 7 (b) to (c), it can be moved out of the body part 21 down to the eyes I of the user U.
- the display unit 29 may be a screen that presents the virtual environment directly to the eyes of the user U.
- the display unit 29 may be connected to the communication terminal by wire or wirelessly, receive information from the position recognition unit 22, the gesture recognition unit 23, and the microphone unit, reflect it in the virtual environment, and then present the result directly to the eyes of the user U.
- because the virtual environment auxiliary apparatus 20 of the present invention also serves as a head-mounted device (HMD), the construction cost can be reduced and the user U can wear it easily.
- the virtual reality auxiliary device 20 may further include a communication terminal (not shown), a microphone unit (not shown), a power supply unit (not shown), and an operation button (not shown).
- the communication terminal is formed in the body portion 21 and can transmit and receive a signal recognized by the position recognition unit 22 and the gesture recognition unit 23.
- the communication terminal is the same as the communication terminal 17 described for the virtual environment auxiliary apparatus 10 of FIGS. 1 to 5, to the extent not inconsistent with the above; further description is therefore omitted.
- the microphone section is formed extending from both ends 212 of the body section 21 to the mouth of the user U to receive a voice signal of the user U.
- the microphone unit can be connected to the communication terminal by wire or wirelessly, and can transmit the voice signal of the user U to the communication terminal.
- the power supply unit and the operation button are the same as the power supply unit B1 and the operation button B2 described for the virtual environment auxiliary apparatus 10 of FIGS. 1 to 5; further description is therefore omitted.
- Reference numerals: 131 image sensor, 132 LED light
Claims (12)
- 사용자에게 가상환경을 제공할 때 사용되는 장치로서, As a device used when providing a virtual environment to a user,상기 사용자의 목 또는 머리에 둘러 착용되는 몸체부;A body portion to be worn around the user's neck or head;상기 가상환경에 반영하기 위한 상기 사용자의 위치를 인식하는 위치 인식부; 및A location recognition unit for recognizing a location of the user to reflect the virtual environment; And상기 가상환경에 입력하기 위한 상기 사용자의 제스처를 인식하는 제스처 인식부를 포함하고, And a gesture recognition unit for recognizing a gesture of the user for input to the virtual environment,상기 위치 인식부는, The position recognizing unit,상기 몸체부의 좌우 양측에 마련되는 스테레오 타입인 것을 특징으로 하는 가상환경 보조장치.Wherein the virtual environment auxiliary device is a stereotype provided on both right and left sides of the body part.
- 제 1 항에 있어서, 상기 몸체부는, [2] The apparatus of claim 1,상기 사용자의 목을 두르며 전방이 개방되는 'U'자 목걸이 형태를 가지며,A U-shaped necklace in which the user's neck is opened and the front is opened,상기 위치 인식부는, 상기 몸체부의 양 단부에 전방을 향하도록 마련되는 것을 특징으로 하는 가상환경 보조장치.Wherein the position recognizing unit is provided so as to face forward at both ends of the body part.
- 제 2 항에 있어서, 상기 몸체부는, [3] The apparatus of claim 2,상기 양 단부 중 적어도 하나에 상기 제스처 인식부가 형성되되,Wherein the gesture recognition unit is formed on at least one of the two ends,상기 제스처 인식부는, 상기 몸체부의 양 단부에서 전방 아래를 향하도록 마련되는 것을 특징으로 하는 가상환경 보조장치. Wherein the gesture recognizing unit is provided so as to face downward at both ends of the body part.
- 제 3 항에 있어서, The method of claim 3,상기 양 단부 중 적어도 하나에 형성되되, 상기 사용자에게 상기 가상환경을 육안으로 표시해주는 디스플레이부를 더 포함하는 것을 특징으로 하는 가상환경 보조장치.And a display unit formed on at least one of the two ends to display the virtual environment to the user with the naked eye.
- 제 4 항에 있어서, 상기 몸체부는, [5] The apparatus of claim 4,상기 양 단부가 각각 상하로 기울어질 수 있도록 조절하는 한 쌍의 절환부를 더 포함하고, Further comprising a pair of switching portions for adjusting the both end portions to be tilted up and down, respectively,상기 한 쌍의 절환부는, Wherein the pair of switching units comprises:상기 몸체부 상의 상기 양 단부 사이의 중앙을 기준으로 한 중앙부로부터 상기 양 단부 사이에 각각 형성되는 것을 특징으로 하는 가상환경 보조장치.Wherein the first and second end portions are formed between a center portion of the body portion and a center portion between the both end portions.
- 제 5 항에 있어서, 6. The method of claim 5,상기 몸체부에서 연장 형성되어 상기 사용자의 귀에 연결되며 상기 사용자에게 사운드를 전달할 수 있도록 하는 한 쌍의 사운드부;A pair of sound parts extending from the body part to be connected to the user's ear and capable of transmitting sound to the user;상기 몸체부의 상기 중앙부에 형성되어 상기 위치 인식부 및 상기 제스처 인식부로부터 인식되는 신호를 외부로 송수신하는 통신단자; 및A communication terminal formed at the central portion of the body portion to transmit and receive a signal recognized from the position recognition unit and the gesture recognition unit to the outside; And상기 몸체부 상의 양 단부에 형성되어 상기 사용자의 음성 신호를 수신하는 마이크부를 더 포함하는 것을 특징으로 하는 가상환경 보조장치.Further comprising a microphone unit formed at both ends of the body unit and receiving voice signals of the user.
- [7] The virtual environment auxiliary device of claim 6, wherein the position recognition unit is formed as a camera, the gesture recognition unit is formed of an image sensor and an LED light, and the display unit is a projector that projects the virtual environment onto an external surface.
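Claim 7 specifies only the sensing hardware (a camera for position recognition, and an image sensor paired with an LED light for gesture recognition); it does not disclose any processing method. Purely as an illustrative sketch of how LED-assisted gesture sensing of this kind is commonly processed, the following segments the brightest (LED-lit) region of each frame and classifies a horizontal swipe from the motion of its centroid. The function names, thresholds, and synthetic frames are all assumptions for illustration, not part of the disclosure.

```python
import numpy as np

def segment_hand(frame, threshold=200):
    """Return the centroid (x, y) of bright LED-lit pixels, or None if no hand is visible."""
    ys, xs = np.nonzero(frame > threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def classify_swipe(centroids, min_shift=20.0):
    """Classify a horizontal swipe from a time-ordered sequence of hand centroids."""
    xs = [c[0] for c in centroids if c is not None]
    if len(xs) < 2:
        return "none"
    shift = xs[-1] - xs[0]  # net horizontal displacement in pixels
    if shift > min_shift:
        return "swipe_right"
    if shift < -min_shift:
        return "swipe_left"
    return "none"

# Synthetic frames: a bright 10x10 blob (the LED-lit hand) moving left to right.
frames = []
for x0 in (10, 40, 70):
    frame = np.zeros((64, 96), dtype=np.uint8)
    frame[20:30, x0:x0 + 10] = 255
    frames.append(frame)

centroids = [segment_hand(f) for f in frames]
print(classify_swipe(centroids))  # swipe_right
```

A real device would replace the synthetic frames with frames captured under LED illumination, and would typically add smoothing and a vertical-swipe case symmetric to the horizontal one.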
- [8] The virtual environment auxiliary device of claim 1, wherein the body part has the form of 'U'-shaped headphones worn on the user's head, a pair of sound parts is formed at the two ends defined by the 'U' shape so as to cover the user's ears and deliver sound to the user, and the position recognition units are provided at the front of each of the pair of sound parts so as to face forward.
- [9] The virtual environment auxiliary device of claim 8, wherein the gesture recognition unit is provided at the front of at least one of the pair of sound parts so as to face forward and downward.
- [10] The virtual environment auxiliary device of claim 9, further comprising a display unit formed on the pair of sound parts in a 'U' shape so as to be worn on the user's head and to visually present the virtual environment to the user, wherein the display unit is rotatable from the top of the user's head down to the user's eyes, with the sound parts serving as the rotation reference points.
- [11] The virtual environment auxiliary device of claim 10, further comprising: a communication terminal formed on the body part to transmit the signals recognized by the position recognition unit and the gesture recognition unit to the outside and to receive signals from the outside; and a microphone unit extending from both ends of the body part toward the user's mouth to receive the user's voice signal.
- [12] The virtual environment auxiliary device of claim 11, wherein the position recognition unit is formed as a camera, the gesture recognition unit is formed of an image sensor and an LED light, and the display unit is a screen that presents the virtual environment directly to the user's eyes.
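Claims 6 and 11 recite a communication terminal that transmits the signals recognized by the position recognition unit and the gesture recognition unit to the outside, but fix no message format. As a minimal sketch of such a relay, under the assumption of a JSON-over-bytes link (the field names and helper functions below are illustrative only and not part of the claims):

```python
import json

def package_signals(position_frame_id, gesture, voice_level_db):
    """Bundle one set of recognized signals into a byte payload for external transmission."""
    return json.dumps({
        "position_frame": position_frame_id,  # id of the camera frame used for position
        "gesture": gesture,                   # label from the gesture recognition unit
        "voice_level_db": voice_level_db,     # level reported by the microphone unit
    }).encode("utf-8")

def unpack_signals(payload):
    """Decode a payload received back over the same link."""
    return json.loads(payload.decode("utf-8"))

msg = package_signals(42, "swipe_left", -18.5)
decoded = unpack_signals(msg)
print(decoded["gesture"])  # swipe_left
```

Any serialization (binary framing, protobuf, BLE GATT characteristics) would serve equally well; the claims only require that recognized signals be sent to, and received from, an external device.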
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2018-0003349 | 2018-01-10 | ||
KR1020180003349A KR102039728B1 (en) | 2018-01-10 | 2018-01-10 | Virtual environments auxiliary device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019139289A1 true WO2019139289A1 (en) | 2019-07-18 |
Family
ID=67219743
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2018/016766 WO2019139289A1 (en) | 2018-01-10 | 2018-12-27 | Auxiliary device for virtual environment |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102039728B1 (en) |
WO (1) | WO2019139289A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011215920A (en) * | 2010-03-31 | 2011-10-27 | Namco Bandai Games Inc | Program, information storage medium and image generation system |
KR20140132278A (en) * | 2013-05-07 | 2014-11-17 | 배영식 | Head mounted display |
KR20150041453A (en) * | 2013-10-08 | 2015-04-16 | 엘지전자 주식회사 | Wearable glass-type image display device and control method thereof |
KR101700767B1 (en) * | 2015-06-02 | 2017-01-31 | 엘지전자 주식회사 | Head mounted display |
KR20170016192A (en) * | 2015-08-03 | 2017-02-13 | 엘지전자 주식회사 | Wareable device |
2018
- 2018-01-10 KR KR1020180003349A patent/KR102039728B1/en active IP Right Grant
- 2018-12-27 WO PCT/KR2018/016766 patent/WO2019139289A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
KR102039728B1 (en) | 2019-11-04 |
KR20190085340A (en) | 2019-07-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016126110A1 (en) | Electrically stimulating head-mounted display device for reducing virtual reality motion sickness | |
WO2017061677A1 (en) | Head mount display device | |
WO2016003078A1 (en) | Glasses-type mobile terminal | |
WO2015076531A1 (en) | Head-mounted display apparatus | |
WO2015122566A1 (en) | Head mounted display device for displaying augmented reality image capture guide and control method for the same | |
WO2022080548A1 (en) | Augmented reality interactive sports device using lidar sensors | |
EP3072009A1 (en) | Head-mounted display apparatus | |
WO2018052231A1 (en) | Electronic device including flexible display | |
WO2018088730A1 (en) | Display apparatus and control method thereof | |
WO2022220658A1 (en) | Mixed reality industrial helmet linked with digital twin and virtual image | |
WO2022050668A1 (en) | Method for detecting hand motion of wearable augmented reality device by using depth image, and wearable augmented reality device capable of detecting hand motion by using depth image | |
WO2019139289A1 (en) | Auxiliary device for virtual environment | |
WO2020197134A1 (en) | Optical device for augmented reality using multiple augmented reality images | |
WO2019074228A2 (en) | Head-mounted display for reducing virtual-reality motion sickness and operating method thereof | |
WO2022050742A1 (en) | Method for detecting hand motion of wearable augmented reality device by using depth image and wearable augmented reality device capable of detecting hand motion by using depth image | |
WO2022014952A1 (en) | Augmented reality display device | |
WO2022149829A1 (en) | Wearable electronic device, and input structure using motion sensor | |
EP2625561A1 (en) | Glasses | |
WO2021045386A1 (en) | Helper system using cradle | |
WO2021029448A1 (en) | Electronic device | |
WO2022231224A1 (en) | Augmented reality glasses providing panoramic multi-screens and panoramic multi-screen provision method for augmented reality glasses | |
WO2022196869A1 (en) | Head mounted display device, operating method for device, and storage medium | |
WO2023210970A1 (en) | Augmented-reality texture display method using augmented reality-dedicated writing tool | |
WO2018212437A1 (en) | Calibration method for matching of augmented reality objects and head mounted display for performing same | |
WO2022260272A1 (en) | Head-mounted display device adjusting interpupillary distance by moving binocular lenses simultaneously in rack-and-pinion method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18900291 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18900291 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04/02/2021) |
|