KR101860680B1 - Method and apparatus for implementing 3d augmented presentation - Google Patents
- Publication number: KR101860680B1
- Application number: KR1020170111990A
- Authority
- KR
- South Korea
- Prior art keywords
- image
- space
- screen
- presenter
- presentation
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/2224—Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Controls And Circuits For Display Device (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
The following embodiments relate to techniques for implementing presentations.
Although the active role and intervention of the presenter is a key factor in effectively expressing and communicating information visualization, it has not been actively addressed in information visualization research. Information visualization can convey information effectively by visual means alone, but additional information or interactive intervention can make an existing visualization more powerful. In particular, a presenter can actively intervene in the information and complement the existing visualization: by directly engaging with part of the information, the presenter can supplement it, express it more realistically, and improve the observer's understanding and immersion. The presenter can also support the process of information transmission through explanation, gestures, and expressions, interact instantly with observers, and provide atmosphere and contextual information that aid the communication process.
In this regard, there have been several attempts to break the boundary between the visualization and the presenter, in order to provide the observer with a more immersive and effective presentation than the traditional approach in which the presenter simply looks at and explains the information. These attempts can be divided into visual integration problems and direct interaction problems. However, existing research has not addressed the presenter and the visualization in a single integrated visualization system, and has been limited in its ability to integrate the presenter and visual information in a spatial and immersive way. As a result, the role of the presenter remained that of a traditional presenter and could not be extended to various application scenarios. It is therefore necessary to study techniques for implementing presentations in which the presenter intervenes.
A method for implementing a 3D augmented presentation in accordance with an embodiment includes the steps of: detecting a depth position of a presenter in a 3D (3-dimensional) visualization space between a front screen and a rear screen; recognizing, based on the depth position, a first space of the 3D visualization space in which the front screen is responsible for displaying an image and a second space in which the rear screen is responsible for displaying an image; generating a first image to be reproduced in the first space and a second image to be reproduced in the second space, based on a 3D stereoscopic image to be reproduced in the 3D visualization space; displaying the generated first image using the front screen and the generated second image using the rear screen; and implementing the augmented presentation in which the presenter intervenes, based on the 3D stereoscopic image in which the displayed first image and the displayed second image are integrated in the 3D visualization space.
According to one embodiment, the first space includes a space between the presenter and the front screen, and the second space may comprise a space between the presenter and the back screen.
According to an embodiment, the step of generating the first image and the second image may include: recognizing an overlapping space in which the first space overlaps with the second space; Determining a first ratio that the front screen contributes to and a second ratio that the rear screen contributes to in a 3D stereoscopic image to be reproduced in the overlapping space; Generating an image for display in the overlapping space from the front screen based on the first ratio; And generating an image for display in the overlapping space from the back screen based on the second ratio.
According to one embodiment, determining the first ratio and the second ratio comprises: setting the first ratio, by which the front screen contributes, to be smaller as the depth position in the overlapping space moves away from the front screen; and setting the second ratio, by which the rear screen contributes, to be smaller as the depth position in the overlapping space moves away from the rear screen.
According to one embodiment, the step of determining the first ratio and the second ratio comprises: recognizing the input of the presenter through a user interface for moving a 3D object being reproduced in the overlapping space; And adjusting a first ratio and a second ratio corresponding to depth positions of the 3D object moving in the overlapping space based on the input.
According to an embodiment, a method of implementing a 3D augmented presentation may further include: adjusting the first space and the second space in response to a change in the depth position of the presenter; and adjusting the first image and the second image in response to the adjustment of the first space and the second space.
According to one embodiment, the front screen is a bottom screen installed below a half mirror film, the half mirror film is inclined at a predetermined angle toward the presenter with respect to the bottom screen and transmits part of the light emitted from the bottom screen, the first image is emitted from the bottom screen and then reflected by the half mirror film to be displayed to the observer, and the second image is emitted from the rear screen and then transmitted through the half mirror film to be displayed to the observer.
According to one embodiment, the first image may be projected on the bottom screen and reflected or displayed from the bottom screen, and the second image may be projected on the rear screen and reflected or displayed from the rear screen.
According to one embodiment, a total reflection mirror is provided on the ceiling above the half mirror film; the portion of the first image transmitted through the half mirror film is reflected by the total reflection mirror and then reflected by the half mirror film to be displayed to the presenter, and the second image is reflected by the half mirror film, reflected by the total reflection mirror, and reflected again by the half mirror film to be displayed to the presenter.
According to one embodiment, the step of implementing the augmented presentation in which the presenter intervenes comprises: identifying one of a plurality of modes corresponding to the degree to which the presenter intervenes in the 3D stereoscopic image; and implementing the augmented presentation based on the identified mode, wherein the modes include a storyteller mode in which the presenter presents without being involved in the 3D stereoscopic image, a controller mode in which the presenter interacts with the 3D stereoscopic image, and an information augmenter mode in which the presenter enhances the information of the 3D stereoscopic image.
According to one embodiment, implementing the augmented presentation comprises: recognizing a body part of the presenter; processing a 3D object in the 3D stereoscopic image to match the body part based on the recognition result; and providing augmented information in which the body part and the 3D object are integrated.
According to an exemplary embodiment, a 3D augmented presentation device includes a processor that detects a depth position of a presenter; recognizes, based on the depth position, a first space of the 3D visualization space in which the front screen is responsible for image display and a second space in which the rear screen is responsible for image display; generates a first image to be reproduced in the first space and a second image to be reproduced in the second space, based on a 3D stereoscopic image to be reproduced in the 3D visualization space; displays the generated first image using the front screen and the generated second image using the rear screen; and implements the augmented presentation in which the presenter intervenes, based on the 3D stereoscopic image in which the displayed first image and the displayed second image are integrated in the 3D visualization space.
FIG. 1 is a view for explaining a 3D augmented presentation implementation system according to an embodiment.
FIG. 2 is a flowchart illustrating a method for implementing a 3D augmented presentation according to an embodiment.
FIG. 3 is a view for explaining a method for implementing a 3D augmented presentation according to an embodiment.
FIG. 4 is a view for explaining a 3D augmented presentation system according to an embodiment.
FIG. 5 is a view for explaining a method for implementing a 3D augmented presentation according to an embodiment.
FIG. 6 is an illustration of the components of an apparatus for implementing a 3D augmented presentation according to an embodiment.
Specific structural or functional descriptions of embodiments are set forth for illustration purposes only and may be embodied with various changes and modifications. Accordingly, the embodiments are not intended to be limited to the specific forms disclosed, and the scope of the disclosure includes changes, equivalents, or alternatives included in the technical idea.
The terms first or second, etc. may be used to describe various elements, but such terms should be interpreted solely for the purpose of distinguishing one element from another. For example, the first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component.
It is to be understood that when an element is referred to as being "connected" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present.
The singular expressions include plural expressions unless the context clearly dictates otherwise. In this specification, terms such as "comprises" or "having" are used to specify the presence of the described features, numbers, steps, operations, elements, parts, or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, parts, or combinations thereof.
Unless otherwise defined, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art. Terms such as those defined in commonly used dictionaries are to be interpreted as having a meaning consistent with their meaning in the context of the relevant art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. Like reference symbols in the drawings denote like elements.
FIG. 1 is a view for explaining a 3D augmented presentation implementation system according to an embodiment.
A 3D (3-dimensional) augmented presentation system implements an augmented presentation in which the presenter 104 intervenes. The 3D augmented presentation allows the presenter 104 to directly interact with the visualization.
According to one embodiment, the presenter 104 directly faces the information represented in the 3D visualization space.
According to one embodiment, depending on the extent to which the presenter 104 intervenes in or engages with the visualization or 3D stereoscopic image, the role of the presenter 104 may range from the simplest role of a storyteller, through a controller, to an information augmenter that intervenes as part of the visualization itself. The 3D augmented presentation device may identify the mode corresponding to the degree to which the presenter 104 intervenes in the 3D stereoscopic image and implement the augmented presentation based on the identified mode. Here, the modes include a storyteller mode in which the presenter 104 presents without being involved in the 3D stereoscopic image, a controller mode in which the presenter 104 interacts with the 3D stereoscopic image, and an information augmenter mode in which the presenter 104 enhances the information of the 3D stereoscopic image. Accordingly, the observer 109 is provided not only with visualization information but also with additional information and context related to the visualization, and receives visual information displayed together with the 3D physical space outside the 3D visualization space.
The 3D augmented presentation system forms the 3D visualization space between the front screen and the rear screen.
In a 3D augmented presentation, the presenter 104 and the 3D visual information are physically integrated within the 3D visualization space.
FIG. 2 is a flowchart illustrating a method for implementing a 3D augmented presentation according to an embodiment.
Referring to FIG. 2, the 3D augmented presentation implementing apparatus may detect the depth position of the presenter 104 in the 3D visualization space between the front screen and the rear screen.
Based on the depth position, the apparatus recognizes a first space of the 3D visualization space in which the front screen is responsible for image display and a second space in which the rear screen is responsible for image display.
The apparatus generates a first image to be reproduced in the first space and a second image to be reproduced in the second space, based on the 3D stereoscopic image to be reproduced in the 3D visualization space.
The apparatus can display the generated first image using the front screen and the generated second image using the rear screen.
The apparatus may implement an augmented presentation in which the presenter 104 intervenes, based on the 3D stereoscopic image in which the displayed first image and the displayed second image are integrated in the 3D visualization space.
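As a rough illustration of the flow in FIG. 2 — detect the presenter's depth, split the visualization space, and assign content to each screen — the following Python sketch partitions scene objects by depth. It is not code from the patent; the function names, the normalized depth convention (0 = front screen, 1 = rear screen), and the sample scene are all assumptions.

```python
# Illustrative sketch of depth-based space partitioning: content in front
# of the presenter is handled by the front (half-mirror) screen, content
# behind the presenter by the rear screen.

def partition_content(objects, presenter_depth):
    """Split scene objects into front-screen and rear-screen render lists.

    objects: list of (name, depth) tuples; depth 0.0 = front screen,
             depth 1.0 = rear screen (normalized 3D visualization space).
    presenter_depth: normalized depth of the presenter between the screens.
    """
    front_layer = []   # first space: between the presenter and the front screen
    rear_layer = []    # second space: between the presenter and the rear screen
    for name, depth in objects:
        if depth < presenter_depth:
            front_layer.append(name)   # rendered by the front screen
        else:
            rear_layer.append(name)    # rendered by the rear screen
    return front_layer, rear_layer

scene = [("bar_chart", 0.2), ("globe", 0.5), ("background_map", 0.9)]
front, rear = partition_content(scene, presenter_depth=0.4)
# front -> ["bar_chart"]; rear -> ["globe", "background_map"]
```

Because the split follows the presenter's tracked depth, content that should be occluded by the presenter's body naturally ends up on the screen behind them, which is why no complicated occlusion rendering is needed.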
FIG. 3 is a view for explaining a method for implementing a 3D augmented presentation according to an embodiment.
In a 3D augmented presentation, the 3D visualization space is the space in which the 3D visual information and the presenter are physically integrated.
Referring to FIG. 3 (a), a front screen and a rear screen are installed in parallel to form the 3D visualization space, and the presenter can be located in the space between the two screens. According to one embodiment, the front screen may be implemented by a bottom screen combined with a half mirror film, and the rear screen may be a projection screen or a display screen. Standing outside the 3D visualization space, the observer can simultaneously observe the 3D information displayed on both screens and the intervening presenter.
Referring to FIG. 3 (b), the 3D augmented presentation device can reproduce a realistic stereoscopic image in the air through the half mirror film, and the rear screen located behind the presenter expands the display space so that, combined with the front screen, continuous and spatial information representation is possible. The implementing apparatus can enlarge the display area by increasing the number of rear screens, and the space between the screens can be utilized to provide a sense of physical space.
The 3D augmented presentation device can divide the space based on the depth position of the presenter in the 3D visualization space and then generate images, or control the screens, so that the front and rear screens each take charge of the separated spaces; this makes it possible to express natural and accurate occlusion without complicated rendering processing. The device can project a stereoscopic image on each of the two screens so that the two image layers are spatially superimposed, and the spatial feeling of the visualization forms naturally, without any visible boundary, in the 3D visualization space between the screens. The device can also apply stereoscopic disparity to the visual information displayed on the two screens and display visual information in various depth regions, and the presenter can intervene anywhere in the space between the screens to engage with the visualization.
Referring to FIG. 3 (c), the 3D augmented presentation apparatus includes a front screen 311 and a rear screen 312, which form the 3D visualization space.
According to one embodiment, the 3D augmented presentation implementer adjusts the first space and the second space in response to a change in the depth position of the presenter, and adjusts the first image and the second image accordingly.
The 3D augmented presentation implementation may determine a first ratio that the front screen 311 contributes and a second ratio that the rear screen 312 contributes to a 3D stereoscopic image to be reproduced in the overlapping space.
The 3D augmented presentation implementer determines the first ratio to be smaller as the depth position in the overlapping space moves away from the front screen 311, and the second ratio to be smaller as the depth position moves away from the rear screen 312.
For example, the 3D augmented presentation implementing apparatus may set the first ratio and the second ratio so that content located closer to the front screen 311 is rendered predominantly by the front screen, and content located closer to the rear screen 312 is rendered predominantly by the rear screen.
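The depth-dependent contribution ratios can be sketched as a linear cross-fade between the two screens. This is an illustrative assumption — the text specifies only that each ratio shrinks with distance from its screen, not the exact falloff:

```python
# Illustrative sketch: linear contribution ratios for content in the
# overlapping space. front_z/rear_z are assumed normalized screen depths.

def contribution_ratios(depth, front_z=0.0, rear_z=1.0):
    """Return (first_ratio, second_ratio) for content at `depth` inside
    the overlapping space; the two ratios sum to 1."""
    span = rear_z - front_z
    second_ratio = (depth - front_z) / span   # grows toward the rear screen
    first_ratio = 1.0 - second_ratio          # grows toward the front screen
    return first_ratio, second_ratio

# Content midway through the overlapping space is drawn at half intensity
# on each screen, so the two image layers fuse without a visible seam.
print(contribution_ratios(0.5))  # (0.5, 0.5)
print(contribution_ratios(0.1))  # mostly on the front screen
```

Because the two ratios always sum to 1, content in the overlapping space keeps a constant overall intensity while smoothly migrating between the screens as its depth changes.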
FIG. 4 is a view for explaining a 3D augmented presentation system according to an embodiment.
Referring to FIG. 4A, the front screen may be a bottom screen 401 combined with a half mirror film 402.
The first image for the front screen is emitted from the bottom screen 401 and then reflected by the half mirror film 402 to be displayed to the observer.
According to one embodiment, a projection screen or a display screen may be used as the rear screen.
According to one embodiment, the 3D augmented presentation system may include a projector for projecting an image onto the bottom screen 401. The half mirror film 402 may be a film that transmits approximately 50% of light, and may be installed inclined at 45 degrees in the direction of the observer. The projector image is projected onto the bottom screen 401 and then reflected by the half mirror film 402 toward the observer.
The 3D augmented presentation system may employ a total reflection mirror installed on the ceiling above the half mirror film 402. When the total reflection mirror is installed, the presenter can also observe the images displayed in the 3D visualization space. The 3D augmented presentation system can also be constructed by overlapping the half mirror film 402 and a general projection screen; since the image projected onto the general white projection screen is observed through the half mirror film 402, the two image layers can be displayed together.
FIG. 5 is a view for explaining a 3D enhanced presentation implementing method according to an embodiment.
The primary responsibility of the presenter is to intervene in the communication process so that the visualization is communicated to the observer. In a 3D augmented presentation, the role of the presenter can be defined as a storyteller, a controller, or an augmenter, depending on the degree of involvement and participation in the visualization, as follows.
- Storyteller: Referring to FIG. 5 (a), the presenter plays the simplest role, that of a speaker in the visualization space. The presenter does not directly engage in manipulating the visualization, but looks at the visualization in the surrounding space, or approaches where specific information is located, and then provides an explanation to the observer. For example, the presenter can keep the primary visual information and the observer in the same line of sight in front of him, attending to both the visual information and the observer's response. At the same time, the presenter can create an atmosphere suitable for the visualization process by using additional information (a related three-dimensional model, a background image, etc.) located behind him. The observer can shift attention following the presenter's gaze and actively participate and immerse in the visualization while viewing the rich visual information.
- Controller: Referring to FIG. 5 (b), the presenter acts as a controller, intervening in and interacting with the visualization somewhat more actively than the storyteller. Rather than embodying the information itself, the presenter uses gestures or physical tools to manipulate, in a realistic manner, complex visualizations floating around him. For example, the presenter can stretch out a hand and grab visual information directly, or move visual information in space through gestures. The presenter can show various aspects of the information by manipulating the properties of visual information in real time, such as placing the main information on the palm of his hand, rotating it, or adjusting its scale. In addition, the presenter can move additional information placed in the 3D visualization space to the back to emphasize key information, place related information at the top of the 3D visualization space, or change the shape of a graph so that the observer can quickly recognize the information. Physical tools manipulated by the presenter also make it possible to control information in areas that are difficult for the presenter to reach directly.
- Information augmenter: Referring to FIG. 5 (c), the presenter's body parts, or objects the presenter holds, are directly involved in the information visualization. The presenter's involvement in the 3D visualization space is the highest here: the presenter fully intervenes as part of the information itself and thereby helps present the information more effectively. The characteristics of the presenter's shape and size add physical attributes to the information, which can enhance the existing visualization more realistically. For example, using the presenter's body as a direct interface, the geometric relationship between virtual visual information and the presenter can be expressed more realistically. A physical cue, such as the presenter's height or arm span, can assist the interpretation of abstract visual information. It is also possible to supplement an existing visualization by directly incorporating actual objects manipulated by the presenter. For example, after assigning specific digital information to a physical object manipulated by the presenter, the object can be combined with other virtual information in space and used to interact with and complement the visualization.
Although the roles and characteristics are defined in three forms according to the extent to which the presenter engages and participates in the visualization, different presenter roles can be used in combination, and various techniques can be applied to identify the presenter's role.
The 3D augmented presentation device may identify one of a storyteller mode in which the presenter presents without being involved in the 3D stereoscopic image, a controller mode in which the presenter interacts with the 3D stereoscopic image, and an information augmenter mode in which the presenter enhances the information of the 3D stereoscopic image, and may implement the augmented presentation based on the identified mode.
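The three-mode dispatch described above can be sketched as a simple enumeration plus an identification step. The threshold values and the intervention metric are invented for illustration; the text does not specify how the degree of intervention is measured:

```python
# Illustrative sketch (assumed names and thresholds): mapping the measured
# degree of presenter intervention to one of the three presentation modes.

from enum import Enum

class PresenterMode(Enum):
    STORYTELLER = "storyteller"   # presents without touching the visualization
    CONTROLLER = "controller"     # manipulates the visualization via gestures/tools
    AUGMENTER = "augmenter"       # body/objects become part of the visualization

def identify_mode(intervention_level):
    """Map a degree of intervention (0..1, an assumed metric) to a mode."""
    if intervention_level < 0.33:
        return PresenterMode.STORYTELLER
    if intervention_level < 0.66:
        return PresenterMode.CONTROLLER
    return PresenterMode.AUGMENTER

print(identify_mode(0.1).value)   # storyteller
print(identify_mode(0.8).value)   # augmenter
```

In practice the modes could also be selected explicitly by the presenter or combined, as the role descriptions above note.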
Referring to FIG. 5 (c), the 3D augmented presentation apparatus recognizes a body part of the presenter and processes a 3D object in the 3D stereoscopic image to match the body part based on the recognition result. The implementing apparatus can generate augmented information in which the recognized body part and the processed 3D object are integrated, and display the generated augmented information in the 3D visualization space. According to one embodiment, the apparatus recognizes the size, volume, or length of the presenter's body and incorporates the recognized information into the 3D stereoscopic image to generate 3D augmented information. For example, the apparatus may use the size of the presenter's hand to calculate the scale at which a 3D object is displayed on the hand.
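A minimal sketch of the hand-matching example, under the assumption (not stated in the source) that a body tracker reports metric widths and palm positions:

```python
# Hypothetical sketch: deriving the display scale of a 3D object from the
# recognized size of the presenter's hand, and anchoring the object to the
# palm. All names, units, and the lift offset are assumptions.

def fit_to_hand(object_width, hand_width):
    """Scale factor that makes the 3D object span the presenter's palm
    (widths in meters, as reported by an assumed body-tracking sensor)."""
    return hand_width / object_width

def place_on_palm(palm_position, lift=0.05):
    """Anchor the object slightly above the tracked palm position."""
    x, y, z = palm_position
    return (x, y + lift, z)

scale = fit_to_hand(object_width=0.50, hand_width=0.20)  # 0.4: shrink to 40%
pos = place_on_palm((1.0, 1.2, 0.5))                     # (1.0, 1.25, 0.5)
```

The same pattern extends to other body cues mentioned above, such as scaling abstract information against the presenter's height or arm span.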
FIG. 6 is an illustration of the components of an apparatus for implementing a 3D augmented presentation according to an embodiment.
Referring to FIG. 6, a 3D augmented presentation implementing apparatus includes a processor. The processor detects the depth position of the presenter, recognizes the first space and the second space based on the depth position, generates the first image and the second image, displays them using the front screen and the rear screen, and implements the augmented presentation in which the presenter intervenes. The processor may perform the operations described above with reference to FIGS. 1 to 5.
The embodiments described above may be implemented in hardware components, software components, and/or a combination of hardware and software components. For example, the devices, methods, and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may execute an operating system (OS) and one or more software applications running on the operating system. The processing device may also access, store, manipulate, process, and generate data in response to execution of the software. For ease of understanding, the processing device may be described as being used singly, but those skilled in the art will recognize that the processing device may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may comprise a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or may independently or collectively instruct the processing device. The software and/or data may be embodied permanently or temporarily in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, to be interpreted by or to provide instructions or data to the processing device. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.
The method according to an embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and configured for the embodiments, or may be known and available to those skilled in computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
Although the embodiments have been described with reference to the drawings, various technical modifications and variations may be applied by those skilled in the art. For example, appropriate results may be achieved even if the described techniques are performed in a different order than the described methods, and/or components of the described systems, structures, devices, or circuits are combined or coupled in a different form than described, or are replaced or substituted by other components or equivalents.
Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.
Claims (12)
Detecting a depth position of a presenter in a 3D (3-dimensional) visualization space between a front screen and a rear screen;
Recognizing, based on the depth position, a first space of the 3D visualization space in which the front screen is responsible for displaying an image and a second space in which the rear screen is responsible for displaying an image;
Generating a first image to be reproduced in the first space and a second image to be reproduced in the second space based on a 3D stereoscopic image to be reproduced in the 3D visualization space;
Displaying the generated first image using the front screen and displaying the generated second image using the rear screen; And
Implementing an augmented presentation in which the presenter intervenes, based on a 3D stereoscopic image in which the displayed first image and the displayed second image are integrated within the 3D visualization space,
A method for implementing a 3D augmented presentation.
Said first space comprising a space between said presenter and said front screen,
Said second space comprising a space between said presenter and said back screen,
A method for implementing a 3D augmented presentation.
Wherein the generating the first image and the second image comprises:
Recognizing an overlapping space in which the first space overlaps with the second space;
Determining a first ratio that the front screen contributes to and a second ratio that the rear screen contributes to in a 3D stereoscopic image to be reproduced in the overlapping space;
Generating an image for display in the overlapping space from the front screen based on the first ratio; And
Generating an image for display in the overlapping space from the back screen based on the second ratio
A method for implementing a 3D augmented presentation.
Wherein determining the first ratio and the second ratio comprises:
Setting the first ratio, by which the front screen contributes, to be smaller as the depth position in the overlapping space moves away from the front screen; And
Setting the second ratio, by which the rear screen contributes, to be smaller as the depth position in the overlapping space moves away from the rear screen,
A method for implementing a 3D augmented presentation.
Wherein determining the first ratio and the second ratio comprises:
Recognizing the presenter's input through a user interface for moving a 3D object being reproduced in the overlapping space; And
Adjusting a first ratio and a second ratio corresponding to a depth position of the 3D object moving in the overlapping space based on the input,
A method for implementing a 3D augmented presentation.
Adjusting the first space and the second space in response to a change in the depth position of the presenter; And
Adjusting the first image and the second image in response to the adjustment of the first space and the second space,
A method for implementing a 3D augmented presentation.
The front screen is a bottom screen installed at the bottom of the half mirror film,
Wherein the half mirror film is installed at a predetermined angle in the direction of the presenter with respect to the bottom screen and transmits a part of the light emitted from the bottom screen,
Wherein the first image is emitted from the bottom screen and then reflected by the half mirror film to be displayed to an observer,
Wherein the second image is emitted from the rear screen and then transmitted through the half mirror film to be displayed to the observer,
A method for implementing a 3D augmented presentation.
The first image is projected on the bottom screen and reflected or displayed from the bottom screen,
Wherein the second image is projected on the rear screen and reflected or displayed from the rear screen,
A method for implementing a 3D augmented presentation.
A total reflection mirror is provided on the ceiling above the half mirror film,
Wherein the portion of the first image transmitted through the half mirror film is reflected by the total reflection mirror and then reflected by the half mirror film to be displayed to the presenter,
Wherein the second image is reflected by the half mirror film, reflected by the total reflection mirror, and reflected again by the half mirror film to be displayed to the presenter,
A method for implementing a 3D augmented presentation.
Wherein the step of implementing the augmented presentation in which the presenter intervenes comprises:
identifying one of a plurality of modes corresponding to the degree to which the presenter intervenes in the 3D stereoscopic image; and
implementing the augmented presentation based on the identified mode,
wherein the modes include:
a storyteller mode in which the presenter does not participate in the 3D stereoscopic image, a controller mode in which the presenter interacts with the 3D stereoscopic image, and an information augmenter mode in which the presenter augments the information of the 3D stereoscopic image.
A method for implementing a 3D augmented presentation.
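The three claimed modes can be sketched as a small dispatch. The enum names follow the claim text, but the selection heuristic (presence in the image, whether the presenter is manipulating content) is an assumption added for illustration.

```python
# Hedged sketch of the three presenter modes and a mode-identification
# step based on the degree of intervention in the 3D stereoscopic image.

from enum import Enum, auto

class Mode(Enum):
    STORYTELLER = auto()     # presenter does not participate in the image
    CONTROLLER = auto()      # presenter interacts with the image
    INFO_AUGMENTER = auto()  # presenter augments information of the image

def identify_mode(presenter_in_image: bool, manipulating: bool) -> Mode:
    """Pick a mode from how much the presenter intervenes in the
    3D stereoscopic image (illustrative heuristic only)."""
    if not presenter_in_image:
        return Mode.STORYTELLER
    if manipulating:
        return Mode.CONTROLLER
    return Mode.INFO_AUGMENTER
```

The augmented presentation would then be rendered differently per identified mode, e.g. suppressing presenter-anchored content in storyteller mode.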
Wherein the step of implementing the augmented presentation comprises:
recognizing a body part of the presenter;
processing a 3D object in the 3D stereoscopic image to match the body part based on the recognition result; and
providing augmentation information in which the body part and the 3D object are integrated.
A method for implementing a 3D augmented presentation.
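The recognize-match-integrate steps above can be sketched as anchoring a 3D object to a tracked joint. The joint dictionary and offset are assumptions standing in for a real skeleton tracker (e.g. a depth-camera SDK); none of these names come from the patent.

```python
# Illustrative sketch: place a 3D object at a recognized body part so
# the body part and object appear integrated in the stereoscopic image.

def attach_object_to_body_part(joints: dict, part: str, offset=(0.0, 0.0, 0.0)):
    """Match a 3D object to the recognized body part and return the
    integrated pose, or None if the part was not recognized."""
    if part not in joints:
        return None  # body part not recognized this frame
    x, y, z = joints[part]
    ox, oy, oz = offset
    return (x + ox, y + oy, z + oz)

# Example frame from a hypothetical tracker:
frame_joints = {"right_hand": (0.5, 1.0, 0.75), "head": (0.0, 1.75, 1.0)}
pose = attach_object_to_body_part(frame_joints, "right_hand", offset=(0.0, 0.25, 0.0))
# pose == (0.5, 1.25, 0.75): the 3D object floats just above the hand
```

Re-running this per frame keeps the object locked to the presenter as they move through the visualization space.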
A processor configured to:
recognize, based on the depth position, a first space in which a front screen serves as the image display and a second space in which a rear screen serves as the image display;
generate a first image to be reproduced in the first space and a second image to be reproduced in the second space, based on a 3D stereoscopic image to be reproduced in the 3D visualization space;
display the generated first image using the front screen and display the generated second image using the rear screen; and
implement the augmented presentation in which the presenter intervenes, based on the 3D stereoscopic image in which the displayed first image and the displayed second image are integrated in the 3D visualization space.
An apparatus for implementing a 3D augmented presentation.
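The processor pipeline in the apparatus claim can be sketched end to end: a depth position splits the visualization space, the 3D content is divided into two images, and each image goes to its screen. The object records and the depth-split rule below are assumptions for illustration only.

```python
# Sketch of the claimed processor pipeline: recognize the first/second
# spaces from a depth position, split the 3D content into two images,
# and route each image to its screen.

def implement_augmented_presentation(objects, presenter_depth):
    """objects: list of (name, depth) pairs in the 3D visualization space.
    Content in front of the presenter's depth goes to the front screen
    (first space); the rest goes to the rear screen (second space)."""
    first_image = [o for o in objects if o[1] < presenter_depth]    # first space
    second_image = [o for o in objects if o[1] >= presenter_depth]  # second space
    # Displayed together, the two images integrate into one 3D
    # stereoscopic image around the presenter.
    return {"front_screen": first_image, "rear_screen": second_image}

scene = [("logo", 0.2), ("chart", 0.6), ("background", 0.9)]
frames = implement_augmented_presentation(scene, presenter_depth=0.5)
# front screen shows "logo"; rear screen shows "chart" and "background"
```

Because the split is recomputed from the presenter's depth, stepping forward or back redistributes content between the two screens, which is what lets the presenter appear embedded inside the stereoscopic image.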
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020170111990A KR101860680B1 (en) | 2017-09-01 | 2017-09-01 | Method and apparatus for implementing 3d augmented presentation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020170111990A KR101860680B1 (en) | 2017-09-01 | 2017-09-01 | Method and apparatus for implementing 3d augmented presentation |
Publications (1)
Publication Number | Publication Date |
---|---|
KR101860680B1 (en) | 2018-06-29 |
Family
ID=62780705
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020170111990A KR101860680B1 (en) | 2017-09-01 | 2017-09-01 | Method and apparatus for implementing 3d augmented presentation |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101860680B1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020139719A1 (en) * | 2018-12-28 | 2020-07-02 | Universal City Studios Llc | Augmented reality system for an amusement ride |
KR102456134B1 (en) | 2022-07-20 | 2022-10-18 | 이아이피커뮤니케이션 주식회사 | Method, device and system for creating 3d content of presenter participation type |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10509535A (en) * | 1995-09-20 | 1998-09-14 | マース・ウーヴェ | A device that displays moving images on the background of the stage |
KR20030061569A (en) * | 2002-01-15 | 2003-07-22 | 주식회사 아이젠텍 | 3-Dimensional Display System |
KR20130003145A (en) * | 2011-06-30 | 2013-01-09 | 주식회사 텐스퀘어 | Projection system using transparent foil |
- 2017-09-01: KR application KR1020170111990A filed; patent KR101860680B1 active (IP Right Grant)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10509535A (en) * | 1995-09-20 | 1998-09-14 | マース・ウーヴェ | A device that displays moving images on the background of the stage |
KR20030061569A (en) * | 2002-01-15 | 2003-07-22 | 주식회사 아이젠텍 | 3-Dimensional Display System |
KR20130003145A (en) * | 2011-06-30 | 2013-01-09 | 주식회사 텐스퀘어 | Projection system using transparent foil |
Non-Patent Citations (1)
Title |
---|
Journal of the HCI Society of Korea, Vol.8 No.2, Nov. 2013, pp. 21-27 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020139719A1 (en) * | 2018-12-28 | 2020-07-02 | Universal City Studios Llc | Augmented reality system for an amusement ride |
US10818090B2 (en) | 2018-12-28 | 2020-10-27 | Universal City Studios Llc | Augmented reality system for an amusement ride |
CN113227884A (en) * | 2018-12-28 | 2021-08-06 | 环球城市电影有限责任公司 | Augmented reality system for amusement ride |
KR102456134B1 (en) | 2022-07-20 | 2022-10-18 | 이아이피커뮤니케이션 주식회사 | Method, device and system for creating 3d content of presenter participation type |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108369457B (en) | Reality mixer for mixed reality | |
US20160163063A1 (en) | Mixed-reality visualization and method | |
CN116310218A (en) | Surface modeling system and method | |
US20190371072A1 (en) | Static occluder | |
US20120274745A1 (en) | Three-dimensional imager and projection device | |
US10019130B2 (en) | Zero parallax drawing within a three dimensional display | |
US11132590B2 (en) | Augmented camera for improved spatial localization and spatial orientation determination | |
US9681122B2 (en) | Modifying displayed images in the coupled zone of a stereoscopic display based on user comfort | |
US20170102791A1 (en) | Virtual Plane in a Stylus Based Stereoscopic Display System | |
WO2014108799A2 (en) | Apparatus and methods of real time presenting 3d visual effects with stereopsis more realistically and substract reality with external display(s) | |
US20240169489A1 (en) | Virtual, augmented, and mixed reality systems and methods | |
KR101860680B1 (en) | Method and apparatus for implementing 3d augmented presentation | |
US11057612B1 (en) | Generating composite stereoscopic images usually visually-demarked regions of surfaces | |
WO2018084087A1 (en) | Image display system, image display device, control method therefor, and program | |
US20230251710A1 (en) | Virtual, augmented, and mixed reality systems and methods | |
US11818325B2 (en) | Blended mode three dimensional display systems and methods | |
US10964056B1 (en) | Dense-based object tracking using multiple reference images | |
KR101430187B1 (en) | Controlling machine for multi display and method for providing contents | |
US11682162B1 (en) | Nested stereoscopic projections | |
Syed et al. | Digital sand model using virtual reality workbench | |
KR101800612B1 (en) | Apparatus for vibration generating in augmented reality environment using three-dimensional model and method thereof | |
JPWO2015156128A1 (en) | Display control apparatus, display control method, and program | |
Hough | Towards achieving convincing live interaction in a mixed reality environment for television studios | |
KR20160081602A (en) | Kinetic pixel display in which physical engine and actuator are linked |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
GRNT | Written decision to grant |