WO2019139289A1 - Auxiliary device for virtual environment - Google Patents

Auxiliary device for virtual environment

Info

Publication number
WO2019139289A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
unit
virtual environment
recognition unit
gesture
Prior art date
Application number
PCT/KR2018/016766
Other languages
French (fr)
Korean (ko)
Inventor
안길재
홍준표
김상훈
Original Assignee
주식회사 동우 이앤씨
안길재
홍준표
김상훈
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 동우 이앤씨, 안길재, 홍준표, 김상훈 filed Critical 주식회사 동우 이앤씨
Publication of WO2019139289A1 publication Critical patent/WO2019139289A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/017 - Head mounted
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/002 - Specific input/output arrangements not covered by G06F3/01 - G06F3/16
    • G06F3/005 - Input arrangements through a video camera
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 - Stereophonic arrangements
    • H04R5/033 - Headphones for stereophonic communication

Definitions

  • the present invention relates to a virtual environment auxiliary device.
  • VR - virtual reality
  • AR - augmented reality
  • MR - mixed reality
  • existing ways of interacting with multi-view content include a stationary gesture sensor such as the Kinect sensor and a wearable Myo Armband electromyography sensor that reads the movement of the user's muscles.
  • an apparatus for use in providing a virtual environment to a user, comprising: a body part worn around the user's neck or head; a position recognition unit for recognizing the user's position for reflection in the virtual environment; and a gesture recognition unit for recognizing the user's gesture for input into the virtual environment, wherein the position recognition unit is a stereo type provided on both the left and right sides of the body part.
  • the body part may have the form of a 'U'-shaped necklace that wraps around the user's neck and is open at the front, and the position recognition units may be provided at both ends of the body part, facing forward.
  • the gesture recognition unit may be formed on at least one of the two ends of the body part and may be provided to face forward and downward at the ends of the body part.
  • the apparatus may further include a display unit formed on at least one of the two ends to display the virtual environment visibly to the user.
  • the body part may further include a pair of switching units for adjusting the two ends so that each can be tilted up and down, the pair of switching units being formed between each end and a central portion defined at the center between the two ends of the body part.
  • a pair of sound units extending from the body part, connected to the user's ears, and able to transmit sound to the user;
  • a communication terminal formed at the central portion of the body part to transmit the signals recognized by the position recognition unit and the gesture recognition unit to and from the outside; and
  • a microphone unit formed at both ends of the body part to receive the user's voice signal.
  • the position recognition unit may be formed of cameras,
  • the gesture recognition unit may be formed of an image sensor and an LED light, and
  • the display unit may be a projector that projects the virtual environment onto an external surface.
  • the body part may have the form of 'U'-shaped headphones to be worn on the user's head, with a pair of sound units formed at the two ends of the 'U' shape to cover the user's ears and transmit sound to the user,
  • and the position recognition units may be provided at the front of each of the pair of sound units, facing forward.
  • the gesture recognition unit may be provided to face forward and downward at the front of at least one of the pair of sound units.
  • the apparatus may further include a display unit connected to the pair of sound units, having a 'U' shape to be worn on the user's head and displaying the virtual environment visibly to the user, the display unit being formed to rotate, with the sound units as pivot points, from the top of the user's head down to the user's eyes.
  • a communication terminal formed on the body part to transmit the signals recognized by the position recognition unit and the gesture recognition unit to and from the outside; and a microphone unit extending from the ends of the body part to the user's mouth to receive the user's voice signal.
  • the position recognition unit may be formed of cameras,
  • the gesture recognition unit may be formed of an image sensor and an LED light, and
  • the display unit may be a screen that projects the virtual environment directly to the user's eyes.
  • the virtual environment assisting apparatus has the user wear a stereo-type position recognition unit, so the user's position can be recognized even in a space where no camera has been installed in advance.
  • the virtual environment assisting apparatus orients the gesture recognition unit toward the area below the user's head, so gesture input is possible without raising the hands, improving convenience.
  • FIG. 1 is a view showing a user wearing a virtual environment auxiliary apparatus according to an embodiment of the present invention.
  • FIG. 2 is a side view of a virtual environment auxiliary apparatus according to an embodiment of the present invention.
  • FIG. 3 is a plan view of a virtual environment assisting apparatus according to an embodiment of the present invention.
  • FIG. 4 is a front view of a virtual environment assisting apparatus according to an embodiment of the present invention.
  • FIG. 5 is a view showing an operation range after the user wears the virtual environment auxiliary device according to an embodiment of the present invention.
  • FIG. 6 is a view showing a user wearing a virtual environment auxiliary apparatus according to another embodiment of the present invention.
  • FIG. 7 is a diagram illustrating an operation of a display unit in a virtual environment auxiliary apparatus according to another embodiment of the present invention.
  • in the present specification, reference numerals are added to the elements of the drawings, and the same elements are given the same numerals wherever possible, even when they appear in different drawings. In the following description, detailed descriptions of well-known functions or constructions are omitted where they would obscure the invention with unnecessary detail.
  • the virtual environment described in the present invention is a concept encompassing virtual reality, augmented reality, and mixed reality, which blends the two.
  • preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.
  • FIG. 2 is a side view of a virtual environment assisting apparatus according to an embodiment of the present invention.
  • FIG. 3 is a plan view of a virtual environment assisting apparatus according to an embodiment of the present invention.
  • FIG. 4 is a front view of a virtual environment assisting apparatus according to an embodiment of the present invention.
  • FIG. 5 is a view showing the operating ranges after the virtual environment assisting apparatus according to an embodiment of the present invention is worn.
  • a virtual reality auxiliary apparatus 10 is an apparatus used when providing a virtual environment to a user U and includes a body part 11, a position recognition unit 12, a gesture recognition unit 13, a display unit 14, a switching unit 15, a sound unit 16, a communication terminal 17, and a microphone unit 18, as shown in FIGS. 1 to 5.
  • a virtual reality auxiliary apparatus 10 according to an embodiment of the present invention will now be described in detail with reference to FIGS. 1 to 5.
  • the body part 11 is worn around the neck N of the user U and may have the form of a 'U'-shaped necklace that wraps around the neck N of the user U and is open at the front.
  • the body part 11 may include a central portion 111 that abuts the back of the neck N of the user U and two end portions 112 that extend forward from the central portion 111 on the left and right. That is, the central portion 111 can be formed at a position centered between the two end portions 112.
  • because the virtual environment assisting apparatus 10 is made in the form of a necklace that can be hung on the neck N, the load applied to the head H of the user U can be reduced.
  • the position recognition unit 12 recognizes the position of the user U for reflection in the virtual environment and is formed as a stereo type provided on both the left and right sides of the body part 11.
  • the position recognizing unit 12 may be formed to face forward at both ends 112 of the body part 11 and may be formed as a pair.
  • the position recognition unit 12 may take the form of a camera.
  • the position recognition unit 12 may be formed of RGB cameras arranged as a stereo pair; by recognizing the range C1 shown in FIG. 5, it perceives the environment in front of the user U broadly and can recognize the position of the user U accurately.
  • the position recognition unit 12 obtains the external environment of the user U from the left and right sides in the form of images and transfers them to the communication terminal 17, which can analyze them and supply the result to the outside.
  • the communication terminal 17 may include an image analysis unit (not shown), an image correction unit (not shown), and a depth map generation unit.
  • the image analyzing unit analyzes the external environment of the user (U) obtained from the left and right sides in the form of an image.
  • based on the images obtained of the external environment of the user U, that is, within the range C1, the image analysis unit can analyze parameters such as roll, tilt, height, convergence, optical axis, zoom, focus, iris, depth, luminance, chrominance, and gamut differences.
  • the analysis results are transmitted to the image correction unit, which geometrically matches the two images and matches their colors based on those results.
  • the matched images are received by the depth map generation unit, which converts them into three-dimensional coordinates and generates a dense map or a semi-dense disparity map so that the position of the user U can be determined accurately.
  • the present invention forms the position recognition unit 12 as a stereo type, so the position of the user U can be recognized without a separate camera having been installed in the space in advance.
  • the position information of the user U recognized in this way can be transmitted by the communication terminal 17 to an external device (e.g., a large-screen display or a glasses-type head-mounted device) and output there.
  • the gesture recognition unit 13 recognizes the gesture of the user U for input to the virtual environment.
  • the gesture recognition unit 13 is formed on at least one of the two end portions 112 of the body part 11 and may be provided to face forward and downward at the ends 112 of the body part 11.
  • provided to face forward and downward at the two ends 112 of the body part 11, the gesture recognition unit 13 recognizes the range C2 shown in FIG. 5 and can recognize the gestures of the user U very effectively.
  • whereas recognition was previously impossible with the hands lowered, forming the gesture recognition unit 13 to face forward and downward from the front of the body part 11 allows gestures to be recognized even while the user U keeps his or her hands comfortably lowered.
  • the present invention therefore relieves fatigue and allows the virtual environment to be used very conveniently and comfortably, even during long sessions.
  • the gesture recognition unit 13 may be formed of an image sensor 131 and an LED light 132, and can also perform recognition outside the visible-light region, that is, in the ultraviolet or infrared region.
  • the gesture recognition unit 13 can recognize near-field gestures within approximately 1.2 m of the user U, and with a single unit the recognition range can be approximately 180 degrees.
  • gesture recognition units 13 may be provided at both ends 112 of the body part 11, widening the recognition range to approximately 180 degrees or more.
  • the gesture recognition unit 13 may include a camera capable of generating three-dimensional depth information of the movement of the hand of the user U over a period of time.
  • the gesture recognition unit 13 can recognize the images captured by that camera, and by capturing continuous intervals it can record the hand movements of the user U without omission.
  • the gesture recognition unit 13 transmits the captured hand motion information of the user U to the communication terminal 17, which relays it to the outside so that a virtual gesture image rendering the hand of the user U as a multi-view image can be generated.
  • the gesture information of the user U recognized in this way can be transmitted by the communication terminal 17 to an external device (e.g., a large-screen display or a glasses-type head-mounted device) and output there.
  • the display unit 14 is formed on at least one of the two ends 112 of the body part 11 and displays the virtual environment visibly to the user U.
  • the display unit 14 may be a projector that projects the virtual environment onto an external surface.
  • the switching units 15 adjust the angle so that the two ends 112 of the body part 11 can each be tilted up and down, and a pair of them may be formed between each end 112 and the central portion 111 defined at the center between the two ends 112 of the body part 11.
  • by adjusting the two ends 112 of the body part 11 to tilt up and down, the switching units 15 allow the recognition ranges of the position recognition unit 12 and the gesture recognition unit 13 and the projection range of the display unit 14, all installed at the ends 112, to be adjusted freely.
  • tilting in the direction A1 inclines the two ends 112 of the body part 11 downward, while tilting in the direction A2 inclines them upward.
  • the tilt angle of the switching unit 15 can be adjusted using a hydraulic system, an elastic mechanical system, or an electronic system.
  • the sound units 16 may be formed as a pair extending from the body part 11, connected to the ears of the user U, and able to transmit sound to the user U.
  • the sound unit 16 may take the form of wired earphones, but the present invention is not limited thereto.
  • the sound unit 16 may also take the form of wireless earphones using wireless communication provided in the body part 11.
  • the wireless communication method may be Wi-Fi or Bluetooth.
  • the sound unit 16 may be connected to the communication terminal 17 through a wired or wireless connection, and may receive sound from the outside and transmit the sound to the user U's ear.
  • the communication terminal 17 is formed at the central portion 111 of the body part 11 and can transmit the signals recognized by the position recognition unit 12 and the gesture recognition unit 13 to and from the outside.
  • the communication terminal 17 can be connected to the position recognition unit 12 and the gesture recognition unit 13 by wire or wirelessly to exchange information, and can also be connected to an external device (e.g., a head-mounted display (HMD)) by wire or wirelessly to exchange information.
  • the communication terminal 17 may further include a feedback information generating unit in addition to the image analyzing unit, the image correcting unit, and the depth map generating unit described above.
  • the feedback information generation unit can generate feedback information for a touch or click when the virtual hand of the user U touches or clicks multi-view 3D content.
  • the feedback information may mean information on a certain control command generated by the touch or click when the virtual hand of the user U touches or clicks the multi-view 3D content.
  • for example, when the virtual hand of the user U touches or clicks the play icon of a particular video implemented as multi-view 3D content, the information on the control command to play that video may be called a feedback signal.
  • the microphone section 18 is formed at both ends 112 on the body section 11 and can receive a voice signal of the user U.
  • the microphone unit 18 can be connected to the communication terminal 17 by wire or wirelessly and can transmit the voice signal of the user U to the communication terminal 17.
  • the embodiment of the present invention may further include a power supply unit B1 and an operation button B2.
  • the power supply unit B1 may be formed at one side of the body part 11 to store and supply the power for driving the virtual environment assisting apparatus 10 of the present invention.
  • the operation button B2 may be formed on one side of the body part 11 to switch the power on and off so that power supplied from the power supply unit B1 is received, and it can also control the various driving components of the virtual environment assisting apparatus 10.
  • for example, an operation to raise or lower the volume of the sound unit 16 may be implemented, although the button is not limited thereto.
  • FIG. 6 is a view showing a user wearing a virtual environment auxiliary apparatus according to another embodiment of the present invention.
  • FIG. 7 is a view showing the operation of a display unit in a virtual environment auxiliary apparatus according to another embodiment of the present invention.
  • the virtual reality auxiliary device 20 is an apparatus used when providing a virtual environment to a user U and includes a body part 21, a position recognition unit 22, a gesture recognition unit 23, a sound unit 26, and a display unit 29.
  • the virtual reality auxiliary device 20 according to another embodiment of the present invention will be described in detail with reference to FIGS. 6 and 7.
  • the body portion 21 may have a U-shaped headphone shape to be worn on the head H of the user U.
  • the body part 21 may include a central portion 211 that abuts the crown of the head when worn on the head H of the user U and two end portions 212 that extend downward from the central portion 211 on the left and right. That is, the central portion 211 can be formed at a position centered between the two end portions 212.
  • the position recognition units 22 may be provided at the front of each of the pair of sound units 26, facing forward.
  • to the extent not inconsistent with the above, the position recognition unit 22 is the same as the position recognition unit 12 described for the virtual environment assisting apparatus 10 of FIGS. 1 to 5, so further description is omitted.
  • the gesture recognition section 23 may be provided so as to face forward and downward in front of at least one of the pair of sound sections 26.
  • the gesture recognition unit 23 may be formed of an image sensor 231 and an LED light 232.
  • to the extent not inconsistent with the above, the gesture recognition unit 23 is the same as the gesture recognition unit 13 described for the virtual environment assisting apparatus 10 of FIGS. 1 to 5, so further description is omitted.
  • by providing the gesture recognition unit 23 facing the area below the head of the user U, the present invention, unlike conventional devices that strained the joints by requiring the user U to hold up his or her hands, relieves fatigue and allows the virtual environment to be used very conveniently and comfortably, even during long sessions.
  • the sound units 26 are formed as a pair covering the ears of the user U at the two ends 212 formed by the 'U' shape of the body part 21, and transmit sound to the user U.
  • the sound unit 26 may be connected to a communication terminal through a wired or wireless connection, and may receive sound from the outside and transmit the sound to the user U's ear.
  • the display unit 29 is connected to the pair of sound units 26 and has a 'U' shape to be worn on the head H of the user U.
  • the display unit 29 displays the virtual environment visibly to the user U and can be formed to rotate, with the sound units 26 as pivot points, from the top of the head H of the user U down to the eyes I of the user U.
  • the display unit 29 is formed to at least partially overlap the body part 21 and, as shown in (b) to (c) of FIG. 7, can be rotated out of the body part 21 and down to the eyes I of the user U.
  • the display unit 29 may be a screen that projects the virtual environment directly to the eyes of the user U.
  • the display unit 29 may be connected to the communication terminal by wire or wirelessly, receive the information from the position recognition unit 22, the gesture recognition unit 23, and the microphone unit, reflect it in the virtual environment, and then project the result directly to the eyes of the user U.
  • the virtual environment assisting apparatus 20 of the present invention can thus serve as a head-mounted device (HMD) at the same time, reducing equipment cost and allowing the user U to wear it easily.
  • the virtual reality auxiliary device 20 may further include a communication terminal (not shown), a microphone unit (not shown), a power supply unit (not shown), and an operation button (not shown).
  • the communication terminal is formed in the body portion 21 and can transmit and receive a signal recognized by the position recognition unit 22 and the gesture recognition unit 23.
  • to the extent not inconsistent with the above, the communication terminal is the same as the communication terminal 17 described for the virtual environment assisting apparatus 10 of FIGS. 1 to 5, so further description is omitted.
  • the microphone section is formed extending from both ends 212 of the body section 21 to the mouth of the user U to receive a voice signal of the user U.
  • the microphone unit can be connected to the communication terminal by wire or wirelessly, and can transmit the voice signal of the user U to the communication terminal.
  • the power supply unit and the operation button are the same as the power supply unit B1 and the operation button B2 described for the virtual environment assisting apparatus 10 of FIGS. 1 to 5, so further description is omitted.
  • 131: Image sensor, 132: LED light

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

The present invention relates to an auxiliary device for a virtual environment, used when a virtual environment is provided to a user, the auxiliary device comprising: a body part worn around the neck or head of the user; a position recognition unit for recognizing the user's position for reflection in the virtual environment; and a gesture recognition unit for recognizing the user's gesture for input into the virtual environment, wherein the position recognition unit is a stereo type disposed on both the left and right sides of the body part.

Description

Virtual environment auxiliary device
The present invention relates to a virtual environment auxiliary device.
Recently, with the rapid development of 3D image processing technology, services using virtual reality (VR) or augmented reality (AR) have been provided in various fields such as movies, games, and interior design. Mixed reality (MR), which emerged after augmented reality, composites real images with virtual images implemented from 3D modeling data, providing the user with imagery in which there is no boundary between the real and the virtual.
Meanwhile, in the field of building design, technologies are being developed that convert design data into 3D modeling data so that the structure of a building can be grasped more three-dimensionally.
Due to the characteristics of such multi-view displays, minimizing the crosstalk that occurs between viewpoints requires the user to watch from a fixed distance (about 1.5 m to 2 m), so various methods for interacting with such multi-view content have been proposed.
Existing ways for a user to interact with multi-view immersive content include using a stationary gesture sensor such as the Kinect sensor, and using a wearable Myo Armband electromyography sensor to recognize information about the movement of the user's muscles.
Accordingly, a variety of research and development is currently being carried out on devices that implement precise control of such multi-view content.
The present invention was conceived to solve the problems of the prior art described above, and an object of the present invention is to provide a virtual environment auxiliary device that enables a user's position to be recognized even in a space where no camera has been installed in advance.
A virtual environment auxiliary device according to one aspect of the present invention is a device used when providing a virtual environment to a user, and includes: a body part worn around the user's neck or head; a position recognition unit for recognizing the user's position for reflection in the virtual environment; and a gesture recognition unit for recognizing the user's gesture for input into the virtual environment, wherein the position recognition unit is a stereo type provided on both the left and right sides of the body part.
Specifically, the body part may have the form of a 'U'-shaped necklace that wraps around the user's neck and is open at the front, and the position recognition units may be provided at both ends of the body part, facing forward.
Specifically, the gesture recognition unit may be formed on at least one of the two ends of the body part and may be provided to face forward and downward at the ends of the body part.
Specifically, the device may further include a display unit formed on at least one of the two ends to display the virtual environment visibly to the user.
Specifically, the body part may further include a pair of switching units for adjusting the two ends so that each can be tilted up and down, the pair of switching units being formed between each end and a central portion defined at the center between the two ends of the body part.
Specifically, the device may further include: a pair of sound units extending from the body part, connected to the user's ears, and able to transmit sound to the user; a communication terminal formed at the central portion of the body part to transmit the signals recognized by the position recognition unit and the gesture recognition unit to and from the outside; and a microphone unit formed at both ends of the body part to receive the user's voice signal.
Specifically, the position recognition unit may be formed of cameras, the gesture recognition unit may be formed of an image sensor and an LED light, and the display unit may be a projector that projects the virtual environment onto an external surface.
Specifically, the body part may have the form of 'U'-shaped headphones to be worn on the user's head, with a pair of sound units formed at the two ends of the 'U' shape to cover the user's ears and transmit sound to the user, and the position recognition units may be provided at the front of each of the pair of sound units, facing forward.
Specifically, the gesture recognition unit may be provided to face forward and downward at the front of at least one of the pair of sound units.
Specifically, the device may further include a display unit connected to the pair of sound units, having a 'U' shape to be worn on the user's head and displaying the virtual environment visibly to the user, the display unit being formed to rotate, with the sound units as pivot points, from the top of the user's head down to the user's eyes.
Specifically, the device may further include: a communication terminal formed on the body part to transmit the signals recognized by the position recognition unit and the gesture recognition unit to and from the outside; and a microphone unit extending from the ends of the body part to the user's mouth to receive the user's voice signal.
Specifically, the position recognition unit may be formed of cameras, the gesture recognition unit may be formed of an image sensor and an LED light, and the display unit may be a screen that projects the virtual environment directly to the user's eyes.
The virtual environment auxiliary device according to the present invention has the user wear a stereo-type position recognition unit, so the user's position can be recognized even in a space where no camera has been installed in advance.
In addition, in the virtual environment auxiliary device according to the present invention, the gesture recognition unit is oriented toward the area below the user's head, so gesture input is possible without raising the hands, improving convenience.
FIG. 1 is a view showing a user wearing a virtual environment auxiliary device according to an embodiment of the present invention.
FIG. 2 is a side view of a virtual environment auxiliary device according to an embodiment of the present invention.
FIG. 3 is a plan view of a virtual environment auxiliary device according to an embodiment of the present invention.
FIG. 4 is a front view of a virtual environment auxiliary device according to an embodiment of the present invention.
FIG. 5 is a view showing the operating ranges after a user wears the virtual environment auxiliary device according to an embodiment of the present invention.
FIG. 6 is a view showing a user wearing a virtual environment auxiliary device according to another embodiment of the present invention.
FIG. 7 is a view showing the operation of the display unit in a virtual environment auxiliary device according to another embodiment of the present invention.
The objects, particular advantages, and novel features of the present invention will become more apparent from the following detailed description and preferred embodiments taken in conjunction with the accompanying drawings. In adding reference numerals to the elements of each drawing in this specification, it should be noted that the same elements are given the same numerals wherever possible, even when they appear in different drawings. In describing the present invention, detailed descriptions of related known technologies are omitted where they could unnecessarily obscure the subject matter of the invention.
It should be noted that the virtual environment described hereinafter is a concept encompassing virtual reality, augmented reality, and mixed reality, which blends the two. Preferred embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
FIG. 1 is a view of a user wearing a virtual environment auxiliary device according to an embodiment of the present invention, FIG. 2 is a side view of the virtual environment auxiliary device, FIG. 3 is a plan view of the virtual environment auxiliary device, FIG. 4 is a front view of the virtual environment auxiliary device, and FIG. 5 is a view showing the operating ranges after the virtual environment auxiliary device according to an embodiment of the present invention is worn.
As shown in FIGS. 1 to 5, a virtual reality auxiliary device 10 according to an embodiment of the present invention is a device used when providing a virtual environment to a user U, and includes a body part 11, a position recognition unit 12, a gesture recognition unit 13, a display unit 14, a switching unit 15, a sound unit 16, a communication terminal 17, and a microphone unit 18.
Hereinafter, the virtual reality auxiliary device 10 according to an embodiment of the present invention will be described in detail with reference to FIGS. 1 to 5.
The body part 11 is worn around the neck N of the user U and may have the form of a 'U'-shaped necklace that wraps around the neck N of the user U and is open at the front.
The body part 11 may include a central portion 111 that abuts the back of the neck N of the user U and two end portions 112 that extend forward from the central portion 111 on the left and right and terminate at the front. That is, the central portion 111 can be formed at a position centered between the two end portions 112.
In this way, by manufacturing the virtual environment auxiliary device 10 in the form of a necklace that can be hung on the neck N, the device reduces the load applied to the head H of the user U.
The position recognition unit 12 recognizes the position of the user U for reflection in the virtual environment and is formed as a stereo type provided on both the left and right sides of the body part 11.
Specifically, the position recognition units 12 may be provided as a pair at the two ends 112 of the body part 11, facing forward.
The position recognition unit 12 may take the form of a camera; for example, it may be formed of RGB cameras arranged as a stereo pair. By recognizing the range C1 shown in FIG. 5, it perceives the environment in front of the user U broadly and can recognize the position of the user U accurately.
The position recognition unit 12 obtains the external environment of the user U from the left and right sides in the form of images and transfers them to the communication terminal 17, which can analyze them and supply the result to the outside.
Here, the communication terminal 17 may include an image analysis unit (not shown), an image correction unit (not shown), and a depth map generation unit.
The image analysis unit obtains the external environment of the user U from the left and right sides in the form of images and analyzes them.
Based on the images obtained of the external environment of the user U, that is, within the range C1, the image analysis unit can analyze parameters such as roll, tilt, height, convergence, optical axis, zoom, focus, iris, depth, luminance, chrominance, and gamut differences.
The analysis results are then transmitted to the image correction unit, which geometrically matches the two images and matches their colors based on those results.
The matched images are then received by the depth map generation unit, which converts them into three-dimensional coordinates and generates a dense map or a semi-dense disparity map so that the position of the user U can be determined accurately.
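For illustration only, the stereo step described above can be sketched as follows, assuming rectified left/right frames from the paired RGB cameras and using OpenCV's semi-global block matcher; the baseline and focal-length values are assumptions for the example, not values given in this disclosure.

```python
import cv2
import numpy as np

def depth_from_stereo(left_bgr, right_bgr, baseline_m=0.15, focal_px=700.0):
    """Rectified left/right frames -> per-pixel depth in meters (semi-dense)."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    # Semi-global matching yields the kind of semi-dense disparity map
    # the depth map generation unit is described as producing.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = baseline_m * focal_px / disparity[valid]  # Z = f * B / d
    return depth, valid
```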
Through this series of steps, the present invention forms the position recognition unit 12 as a stereo type, making it possible to recognize the position of the user U without a separate camera having been installed in the space in advance.
The position information of the user U recognized in this way can be transmitted by the communication terminal 17 to an external device (for example, a large-screen display or a glasses-type head-mounted device) and output there.
The gesture recognition unit 13 recognizes the gestures of the user U for input into the virtual environment.
Specifically, the gesture recognition unit 13 is formed on at least one of the two end portions 112 of the body part 11 and may be provided to face forward and downward at the ends 112 of the body part 11.
Provided to face forward and downward at the two ends 112 of the body part 11, the gesture recognition unit 13 recognizes the range C2 shown in FIG. 5 and can recognize the gestures of the user U very effectively.
In other words, whereas recognition was previously impossible with the hands lowered, forming the gesture recognition unit 13 to face forward and downward from the front of the body part 11 allows gestures to be recognized even while the user U keeps his or her hands comfortably lowered.
Therefore, unlike conventional devices that strained the joints by requiring the user U to hold up his or her hands, the present invention relieves fatigue and allows the virtual environment to be used very conveniently and comfortably, even during long sessions.
The gesture recognition unit 13 may be formed of an image sensor 131 and an LED light 132. It can also perform recognition outside the visible-light region, that is, in the ultraviolet or infrared region.
The gesture recognition unit 13 can recognize near-field gestures within approximately 1.2 m of the user U, and with a single unit the recognition range can be approximately 180 degrees.
In the present invention, gesture recognition units 13 may be provided at both ends 112 of the body part 11, widening the recognition range to approximately 180 degrees or more.
The gesture recognition unit 13 may include a camera capable of generating three-dimensional depth information of the movement of the hand of the user U over a period of time and can recognize the images captured by that camera; by capturing continuous intervals, it can record the hand movements of the user U without omission.
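As a hedged sketch of that continuous-capture behavior, the loop below buffers a fixed window of depth frames and masks the near-field region; `read_depth_frame` is a hypothetical driver call standing in for the image-sensor/LED unit, and the frame rate and window length are assumed values (the disclosure says only "a period of time").

```python
import time
from collections import deque

NEAR_RANGE_M = 1.2   # near-field gesture range stated above
FPS = 30             # assumed sensor frame rate
WINDOW_S = 2.0       # assumed capture window ("a period of time")

def capture_gesture_window(read_depth_frame):
    """Buffer every frame in the window so no hand motion is missed."""
    frames = deque(maxlen=int(FPS * WINDOW_S))
    t_end = time.monotonic() + WINDOW_S
    while time.monotonic() < t_end:
        depth = read_depth_frame()                     # HxW depth image in meters
        hand_mask = (depth > 0) & (depth < NEAR_RANGE_M)
        frames.append((depth, hand_mask))              # consecutive frames, no gaps
    return list(frames)
```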
The gesture recognition unit 13 transmits the captured hand motion information of the user U to the communication terminal 17, which relays it to the outside so that a virtual gesture image rendering the hand of the user U as a multi-view image can be generated.
The gesture information of the user U recognized in this way can be transmitted by the communication terminal 17 to an external device (for example, a large-screen display or a glasses-type head-mounted device) and output there.
The display unit 14 is formed on at least one of the two ends 112 of the body part 11 and can display the virtual environment visibly to the user U.
Here, the display unit 14 may be a projector that projects the virtual environment onto an external surface.
The switching units 15 adjust the angle so that the two ends 112 of the body part 11 can each be tilted up and down, and a pair of them may be formed between each end 112 and the central portion 111 defined at the center between the two ends 112 of the body part 11.
By adjusting the two ends 112 of the body part 11 to tilt up and down, the switching units 15 allow the recognition ranges of the position recognition unit 12 and the gesture recognition unit 13 and the projection range of the display unit 14, all installed at the ends 112, to be adjusted freely.
For example, as shown in FIG. 2, tilting in the direction A1 inclines the two ends 112 of the body part 11 downward, while tilting in the direction A2 inclines them upward.
The angle of the switching unit 15 can be adjusted so that it tilts using a hydraulic system, an elastic mechanical system, or an electronic system.
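For the electronic variant, a minimal sketch of the angle adjustment might look like the following; the tilt limits and the `servo.write_angle` driver call are assumptions for illustration, since the disclosure does not specify an actuator interface.

```python
TILT_MIN_DEG, TILT_MAX_DEG = -30.0, 30.0  # assumed limits for A1 (down) / A2 (up)

def set_end_tilt(servo, angle_deg):
    """Tilt one end portion down (negative) or up (positive), shifting the
    recognition/projection ranges of the units mounted on that end."""
    clamped = max(TILT_MIN_DEG, min(TILT_MAX_DEG, angle_deg))
    servo.write_angle(clamped)   # hypothetical actuator driver
    return clamped
```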
The sound units 16 may be formed as a pair extending from the body part 11, connected to the ears of the user U, and able to transmit sound to the user U.
The sound unit 16 may take the form of wired earphones, but is not limited thereto; it may also take the form of wireless earphones using wireless communication provided in the body part 11. In this case, the wireless communication method may be Wi-Fi or Bluetooth.
The sound unit 16 may be connected to the communication terminal 17 by wire or wirelessly and can receive sound from the outside and deliver it to the ears of the user U.
The communication terminal 17 is formed at the central portion 111 of the body part 11 and can transmit the signals recognized by the position recognition unit 12 and the gesture recognition unit 13 to and from the outside.
The communication terminal 17 can be connected to the position recognition unit 12 and the gesture recognition unit 13 by wire or wirelessly to exchange information, and can also be connected to an external device (for example, a head-mounted display (HMD)) by wire or wirelessly to exchange information.
The communication terminal 17 may further include a feedback information generation unit in addition to the image analysis unit, image correction unit, and depth map generation unit described above.
The feedback information generation unit can generate feedback information for a touch or click when the virtual hand of the user U touches or clicks multi-view 3D content. Here, the feedback information may mean information on a certain control command generated by the touch or click when the virtual hand of the user U touches or clicks the multi-view 3D content.
For example, when the virtual hand of the user U touches or clicks the play icon of a particular video implemented as multi-view 3D content, the information on the control command to play that video may be called a feedback signal.
Through such feedback signals, the multi-view 3D content can be controlled, and the controlled multi-view 3D content can be provided to the user U.
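A minimal sketch of that feedback step is given below, assuming a simple spherical hit test between the tracked fingertip and icons placed in the 3D content; the data model and command names are illustrative, not defined in the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Icon3D:
    name: str
    center: tuple   # (x, y, z) position in meters
    radius: float   # hit-test radius
    command: str    # control command, e.g. "PLAY_VIDEO"

def feedback_for_touch(fingertip_xyz, icons):
    """Return feedback information for a touch, or None if nothing was hit."""
    x, y, z = fingertip_xyz
    for icon in icons:
        cx, cy, cz = icon.center
        if ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5 <= icon.radius:
            return {"event": "touch", "target": icon.name, "command": icon.command}
    return None
```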
마이크부(18)는, 몸체부(11) 상의 양 단부(112)에 형성되어 사용자(U)의 음성 신호를 수신할 수 있다. The microphone section 18 is formed at both ends 112 on the body section 11 and can receive a voice signal of the user U.
마이크부(18)는 통신 단자(17)와 유선 또는 무선으로 연결되어 사용자(U)의 음성 신호를 통신 단자(17)로 송신할 수 있다.The microphone unit 18 can be connected to the communication terminal 17 by wire or wirelessly and can transmit the voice signal of the user U to the communication terminal 17. [
또한, 본 발명의 실시예에서는, 전원부(B1)와 조작버튼(B2)을 더 포함할 수 있다. In addition, the embodiment of the present invention may further include a power supply unit B1 and an operation button B2.
전원부(B1)는 몸체부(11)의 일측에 형성되어 본 발명의 가상환경 보조장치(10)를 구동할 수 있는 동력을 저장하여 공급할 수 있다. The power supply unit B1 may be formed at one side of the body part 11 to store and supply the power for driving the virtual environment assisting apparatus 10 of the present invention.
조작버튼(B2)은, 몸체부(11)의 일측에 형성되어 전원부(B1)로부터 공급되는 동력을 전달받도록 전원을 온-오프(On-Off)시킬 수 있으며, 또한, 가상환경 보조장치(10)의 다양한 구동장치들을 구동제어할 수 있다. 일레로 사운드부(16)의 소리 볼륨을 높이거나 낮추는 조작을 구현할 수도 있으며 이에 한정되지 않는다. The operation button B2 can be provided on one side of the body part 11 to turn on and off the power supply to receive the power supplied from the power supply part B1, Can be driven and controlled. The operation of raising or lowering the volume of the sound of the sounder 16 may be implemented, but is not limited thereto.
FIG. 6 is a view showing a virtual environment auxiliary device according to another embodiment of the present invention as worn, and FIG. 7 is an operational view showing the operation of the display unit in the virtual environment auxiliary device according to that embodiment.
As shown in FIGS. 6 and 7, the virtual reality auxiliary device 20 according to another embodiment of the present invention is a device used when providing a virtual environment to a user U, and includes a body portion 21, a position recognition unit 22, a gesture recognition unit 23, a sound unit 26, and a display unit 29.
Hereinafter, the virtual reality auxiliary device 20 according to this embodiment will be described in detail with reference to FIGS. 6 and 7.
The body portion 21 may have a 'U'-shaped headphone form so as to be worn on the head H of the user U.
The body portion 21 may include a central portion 211, which rests against the crown of the head when worn on the head H of the user U, and two end portions 212, which extend downward to the left and right from the central portion 211 and are located at its extremities. That is, the central portion 211 may be formed at a position defined about the center between the two end portions 212.
The position recognition units 22 may be provided in front of each of the pair of sound units 26 so as to face forward.
The position recognition unit 22 is, to the extent not inconsistent with the above, identical to the position recognition unit 12 described for the virtual environment auxiliary device 10 according to the embodiment of the present invention shown in FIGS. 1 to 5, so further description is omitted.
The gesture recognition unit 23 may be provided in front of at least one of the pair of sound units 26 so as to face forward and downward.
The gesture recognition unit 23 may be formed of an image sensor 231 and an LED light 232.
The gesture recognition unit 23 is, to the extent not inconsistent with the above, identical to the gesture recognition unit 13 described for the virtual environment auxiliary device 10 according to the embodiment of the present invention shown in FIGS. 1 to 5, so further description is omitted.
In the present invention, the gesture recognition unit 23 is oriented downward below the head of the user U. Unlike conventional devices, in which long sessions in a virtual environment strained the joints because the user U had to hold the hands up, the present invention relieves fatigue and makes the virtual environment very convenient and comfortable to use.
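The patent states only that the gesture recognition unit pairs an image sensor with an LED light, not which detection algorithm runs on the frames. One plausible sketch, given that the LED makes a nearby hand the brightest region in the downward-facing view, is a threshold-and-contour pass that reports the hand's image position; the threshold and minimum-area values are assumptions (OpenCV 4.x findContours signature).

    import cv2
    import numpy as np

    def detect_hand(frame_gray: np.ndarray, min_area: float = 2000.0):
        """Return the (x, y) image centroid of the LED-lit hand, or None."""
        _, mask = cv2.threshold(frame_gray, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        blobs = [c for c in contours if cv2.contourArea(c) >= min_area]
        if not blobs:
            return None
        m = cv2.moments(max(blobs, key=cv2.contourArea))
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # centroid of largest blob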
The sound units 26 are formed as a pair at the two end portions 212 defined by the 'U' shape of the body portion 21, covering the ears of the user U, and deliver sound to the user U.
The sound unit 26 may be connected to the communication terminal by wire or wirelessly, receiving sound from the outside and delivering it to the ear of the user U.
The display unit 29 is formed on the pair of sound units 26 with a 'U' shape so as to be worn on the head H of the user U; it visually displays the virtual environment to the user U and is formed to be rotatable, with the sound units 26 as rotation reference points, from the top of the user's (U) head H down to the user's (U) eyes I.
That is, referring to FIG. 7, as in (a), the display unit 29 is formed to at least partially overlap the body portion 21; as the state changes from (b) to (c), the display unit 29 rotates away from the body portion 21 until it covers the eyes I of the user U.
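The stowed-to-deployed transition in FIG. 7 can be summarized as a mode decision on the rotation angle about the sound units. The angle thresholds below are assumed values chosen only to mark the three states (a), (b), and (c).

    def display_mode(rotation_deg: float) -> str:
        """Map the screen's rotation angle to a display state, per FIG. 7."""
        STOWED_MAX_DEG = 10.0    # (a): resting over the body portion
        DEPLOYED_MIN_DEG = 80.0  # (c): rotated down over the user's eyes
        if rotation_deg <= STOWED_MAX_DEG:
            return "stowed"
        if rotation_deg >= DEPLOYED_MIN_DEG:
            return "active"        # drive the screen with the virtual environment
        return "transitioning"     # (b): mid-rotation, keep the screen blank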
Here, the display unit 29 may be a screen that projects the virtual environment directly to the eyes of the user U; it may be connected to the communication terminal by wire or wirelessly, receive information through the communication terminal from the position recognition unit 22, the gesture recognition unit 23, and the microphone unit, reflect that information in the virtual environment, and then present the result directly to the user's (U) eyes.
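The per-frame data flow implied here, where position, gesture, and voice information arrive through the communication terminal and are folded into the rendered scene, can be sketched as a simple router. All class and field names are assumptions for illustration, matching the message format assumed in the earlier relay sketch.

    class VirtualEnvironment:
        """Scene state updated from readings relayed by the communication terminal."""

        def __init__(self) -> None:
            self.viewpoint = (0.0, 0.0, 0.0)
            self.pending_events = []

        def update(self, reading: dict) -> None:
            if reading["source"] == "position":
                p = reading["payload"]
                self.viewpoint = (p["x"], p["y"], p["z"])  # move the user's viewpoint
            elif reading["source"] == "gesture":
                self.pending_events.append(("input", reading["payload"]))  # touch/click
            elif reading["source"] == "voice":
                self.pending_events.append(("voice", reading["payload"]))  # mic command

    # Each rendered frame then draws the scene from `viewpoint` and consumes
    # `pending_events` before presenting the image to the user's eyes.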
In this way, the virtual environment auxiliary device 20 of the present invention can also serve as a head mount display (HMD), which reduces construction cost, and because the user U can wear it easily, its usability is increased.
The virtual reality auxiliary device 20 according to another embodiment of the present invention may further include a communication terminal (not shown), a microphone unit (not shown), a power supply unit (not shown), and an operation button (not shown).
The communication terminal is formed on the body portion 21 and can transmit and receive the signals recognized by the position recognition unit 22 and the gesture recognition unit 23.
The communication terminal is, to the extent not inconsistent with the above, identical to the communication terminal 17 described for the virtual environment auxiliary device 10 according to the embodiment of the present invention shown in FIGS. 1 to 5, so further description is omitted.
The microphone unit extends from the two end portions 212 of the body portion 21 to the mouth of the user U and can receive the voice signals of the user U.
The microphone unit can be connected to the communication terminal by wire or wirelessly and transmit the voice signal of the user U to the communication terminal.
The power supply unit and the operation button are identical to the power supply unit B1 and the operation button B2 described for the virtual environment auxiliary device 10 according to the embodiment of the present invention shown in FIGS. 1 to 5, so further description is omitted.
Although the present invention has been described above in detail through specific embodiments, these serve only to explain the invention concretely; the present invention is not limited thereto, and it is evident that modifications and improvements can be made by those of ordinary skill in the art within the technical spirit of the invention.
All simple variations or modifications of the present invention fall within its scope, and the specific scope of protection of the invention will be made clear by the appended claims.
[Description of Symbols]
10, 20: virtual environment auxiliary device    11: body portion
111: central portion    112: end portion
12: position recognition unit    13: gesture recognition unit
131: image sensor    132: LED light
14: display unit (projector)    15: switching unit
16: sound unit    17: communication terminal
18: microphone unit
21: body portion    211: central portion
212: end portion    22: position recognition unit
23: gesture recognition unit    231: image sensor
232: LED light    26: sound unit
29: display unit (screen)
B1: power supply unit    B2: operation button
U: user    H: head
N: neck    I: eyes

Claims (12)

  1. A virtual environment auxiliary device used when providing a virtual environment to a user, comprising:
    a body portion worn around the user's neck or head;
    a position recognition unit which recognizes the user's position so that it can be reflected in the virtual environment; and
    a gesture recognition unit which recognizes the user's gestures as input to the virtual environment,
    wherein the position recognition unit is of a stereo type, provided on both the left and right sides of the body portion.
  2. The virtual environment auxiliary device of claim 1, wherein the body portion has a 'U'-shaped necklace form which wraps around the user's neck and is open at the front, and
    the position recognition unit is provided at both end portions of the body portion so as to face forward.
  3. The virtual environment auxiliary device of claim 2, wherein the gesture recognition unit is formed on at least one of the two end portions, and
    the gesture recognition unit is provided at the end portions of the body portion so as to face forward and downward.
  4. The virtual environment auxiliary device of claim 3, further comprising:
    a display unit formed on at least one of the two end portions, which visually displays the virtual environment to the user.
  5. The virtual environment auxiliary device of claim 4, wherein the body portion further comprises a pair of switching portions which allow the two end portions to be tilted up and down, respectively, and
    the pair of switching portions are each formed between the two end portions and a central portion defined about the center between the two end portions on the body portion.
  6. The virtual environment auxiliary device of claim 5, further comprising:
    a pair of sound units extending from the body portion and connected to the user's ears, which deliver sound to the user;
    a communication terminal formed at the central portion of the body portion, which transmits the signals recognized by the position recognition unit and the gesture recognition unit to the outside and receives signals from the outside; and
    a microphone unit formed at both end portions of the body portion, which receives the user's voice signals.
  7. The virtual environment auxiliary device of claim 6, wherein the position recognition unit is formed as a camera,
    the gesture recognition unit is formed as an image sensor and an LED light, and
    the display unit is a projector which projects the virtual environment externally.
  8. The virtual environment auxiliary device of claim 1, wherein the body portion has a 'U'-shaped headphone form so as to be worn on the user's head,
    a pair of sound units which deliver sound to the user are formed at the two end portions defined by the 'U' shape so as to cover the user's ears, and
    the position recognition units are provided in front of each of the pair of sound units so as to face forward.
  9. The virtual environment auxiliary device of claim 8, wherein the gesture recognition unit is provided in front of at least one of the pair of sound units so as to face forward and downward.
  10. The virtual environment auxiliary device of claim 9, further comprising:
    a display unit formed on the pair of sound units with a 'U' shape so as to be worn on the user's head, which visually displays the virtual environment to the user,
    wherein the display unit is formed to be rotatable from the top of the user's head down to the user's eyes, with the sound units as rotation reference points.
  11. The virtual environment auxiliary device of claim 10, further comprising:
    a communication terminal formed on the body portion, which transmits the signals recognized by the position recognition unit and the gesture recognition unit to the outside and receives signals from the outside; and
    a microphone unit extending from both end portions of the body portion to the user's mouth, which receives the user's voice signals.
  12. The virtual environment auxiliary device of claim 11, wherein the position recognition unit is formed as a camera,
    the gesture recognition unit is formed as an image sensor and an LED light, and
    the display unit is a screen which projects the virtual environment directly to the user's eyes.
PCT/KR2018/016766 2018-01-10 2018-12-27 Auxiliary device for virtual environment WO2019139289A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2018-0003349 2018-01-10
KR1020180003349A KR102039728B1 (en) 2018-01-10 2018-01-10 Virtual environments auxiliary device

Publications (1)

Publication Number Publication Date
WO2019139289A1 true WO2019139289A1 (en) 2019-07-18

Family

ID=67219743

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/016766 WO2019139289A1 (en) 2018-01-10 2018-12-27 Auxiliary device for virtual environment

Country Status (2)

Country Link
KR (1) KR102039728B1 (en)
WO (1) WO2019139289A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011215920A (en) * 2010-03-31 2011-10-27 Namco Bandai Games Inc Program, information storage medium and image generation system
KR20140132278A (en) * 2013-05-07 2014-11-17 배영식 Head mounted display
KR20150041453A (en) * 2013-10-08 2015-04-16 엘지전자 주식회사 Wearable glass-type image display device and control method thereof
KR101700767B1 (en) * 2015-06-02 2017-01-31 엘지전자 주식회사 Head mounted display
KR20170016192A (en) * 2015-08-03 2017-02-13 엘지전자 주식회사 Wareable device

Also Published As

Publication number Publication date
KR102039728B1 (en) 2019-11-04
KR20190085340A (en) 2019-07-18

Similar Documents

Publication Publication Date Title
WO2016126110A1 (en) Electrically stimulating head-mounted display device for reducing virtual reality motion sickness
WO2017061677A1 (en) Head mount display device
WO2016003078A1 (en) Glasses-type mobile terminal
WO2015076531A1 (en) Head-mounted display apparatus
WO2015122566A1 (en) Head mounted display device for displaying augmented reality image capture guide and control method for the same
WO2022080548A1 (en) Augmented reality interactive sports device using lidar sensors
EP3072009A1 (en) Head-mounted display apparatus
WO2018052231A1 (en) Electronic device including flexible display
WO2018088730A1 (en) Display apparatus and control method thereof
WO2022220658A1 (en) Mixed reality industrial helmet linked with digital twin and virtual image
WO2022050668A1 (en) Method for detecting hand motion of wearable augmented reality device by using depth image, and wearable augmented reality device capable of detecting hand motion by using depth image
WO2019139289A1 (en) Auxiliary device for virtual environment
WO2020197134A1 (en) Optical device for augmented reality using multiple augmented reality images
WO2019074228A2 (en) Head-mounted display for reducing virtual-reality motion sickness and operating method thereof
WO2022050742A1 (en) Method for detecting hand motion of wearable augmented reality device by using depth image and wearable augmented reality device capable of detecting hand motion by using depth image
WO2022014952A1 (en) Augmented reality display device
WO2022149829A1 (en) Wearable electronic device, and input structure using motion sensor
EP2625561A1 (en) Glasses
WO2021045386A1 (en) Helper system using cradle
WO2021029448A1 (en) Electronic device
WO2022231224A1 (en) Augmented reality glasses providing panoramic multi-screens and panoramic multi-screen provision method for augmented reality glasses
WO2022196869A1 (en) Head mounted display device, operating method for device, and storage medium
WO2023210970A1 (en) Augmented-reality texture display method using augmented reality-dedicated writing tool
WO2018212437A1 (en) Calibration method for matching of augmented reality objects and head mounted display for performing same
WO2022260272A1 (en) Head-mounted display device adjusting interpupillary distance by moving binocular lenses simultaneously in rack-and-pinion method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18900291

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18900291

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04/02/2021)
