KR101754976B1 - Contents convert method for layered hologram and apparatus - Google Patents

Contents convert method for layered hologram and apparatus

Info

Publication number
KR101754976B1
KR101754976B1
Authority
KR
South Korea
Prior art keywords
image
hologram
display panel
background image
reproduced
Prior art date
Application number
KR1020150077107A
Other languages
Korean (ko)
Other versions
KR20160141446A (en)
Inventor
오병기
Original Assignee
주식회사 쓰리디팩토리
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 쓰리디팩토리 filed Critical 주식회사 쓰리디팩토리
Priority to KR1020150077107A priority Critical patent/KR101754976B1/en
Priority to PCT/KR2015/009492 priority patent/WO2016195167A1/en
Publication of KR20160141446A publication Critical patent/KR20160141446A/en
Application granted granted Critical
Publication of KR101754976B1 publication Critical patent/KR101754976B1/en

Classifications

    • H04N13/0044
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H1/00 Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H1/26 Processes or apparatus specially adapted to produce multiple sub-holograms or to obtain images from them, e.g. multicolour technique
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H1/00 Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H1/26 Processes or apparatus specially adapted to produce multiple sub-holograms or to obtain images from them, e.g. multicolour technique
    • G03H1/268 Holographic stereogram
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/0018
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H2250/00 Laminate comprising a hologram layer

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Holography (AREA)

Abstract

According to an embodiment of the present invention, there is provided a content conversion method of a content conversion apparatus for a layered hologram, which generates a layered hologram from a single piece of 2D image content, the method comprising the steps of: extracting a still image from the 2D image content according to a predetermined frame rate; separating, from the extracted still image, a target image and a background image each including at least one object; performing mapping on each of the separated target image and background image using a specific map; and generating a multi-view image using depth and stereoscopic values for each of the mapped target image and background image.

Description

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a content conversion method, apparatus, and program for a layered hologram.

Three-dimensional (3D) image technology is a realistic image medium that, unlike existing two-dimensional (2D) images, raises the quality of visual information to a level close to what people actually see and feel, and it is expected to lead the next generation of visual culture.

Such a 3D stereoscopic image may be obtained by shooting directly with a plurality of cameras, or by converting a 2D planar image into a 3D stereoscopic image.

When a 3D stereoscopic image is generated from a 2D planar image, depth information is assigned to the objects in the 2D planar image so that the 2D planar image can be converted into a 3D stereoscopic image.

Meanwhile, as the need to display 3D stereoscopic images more realistically and consistently in three-dimensional space has grown, autostereoscopic 3D (AS3D) displays have come into use, and technology for converting a plurality of 3D images into AS3D images is being employed.

However, because a multi-view AS3D image integrates and displays 3D images in three-dimensional space as a multi-viewpoint image, the stereoscopic effect can be weak at one viewpoint and relatively unstable at others, which causes viewers to experience visual fatigue or a ghosting phenomenon that leads to dizziness.

Accordingly, it is an object of the present invention to provide a technique for producing content that displays a multi-view 3D stereoscopic image with improved stereoscopic effect from 2D image content.

In order to achieve the above-mentioned object, an embodiment of the present invention proposes a content production method for providing a hologram having improved stereoscopic effect.

According to an embodiment of the present invention, there is provided a content conversion method of a content conversion apparatus for a layered hologram, which generates a layered hologram from a single piece of 2D image content, the method comprising the steps of: extracting a still image from the 2D image content according to a predetermined frame rate; separating, from the extracted still image, a target image and a background image each including at least one object; performing mapping on each of the separated target image and background image using a specific map; and generating a multi-view image using depth and stereoscopic values for each of the mapped target image and background image.

According to another embodiment of the present invention, there is provided a content conversion apparatus for a layered hologram, which generates a layered hologram from a single piece of 2D image content, the apparatus including: an extraction module for extracting a still image from the 2D image content according to a predetermined frame rate and separating, from the extracted still image, an object image and a background image including at least one object; a mapping module for performing mapping on each of the target image and the background image using a specific map; and a viewpoint conversion module for generating a multi-view image for each of the mapped target image and background image.

According to the embodiments of the present invention, one piece of 2D image content is converted into multi-view 3D image content, and the object image and the background image can be reproduced on two physically separated displays, thereby realizing a stereoscopic image without dizziness or ghosting.

FIG. 1 is a block diagram showing the main configuration of a content conversion apparatus for a layered hologram according to the present invention.
FIG. 2 is a flowchart of a content conversion method of a content conversion apparatus for a layered hologram according to an embodiment of the present invention.
FIGS. 3A and 3B are exemplary diagrams for explaining generation of an object image and a background image according to an embodiment of the present invention.
FIG. 4 is a flowchart illustrating mapping according to an embodiment of the present invention.
FIGS. 5A to 5E are diagrams for explaining mapping of an object image and a background image, respectively, according to an embodiment of the present invention.
FIGS. 6A and 6B are exemplary diagrams illustrating a method of generating a multi-view image according to an embodiment of the present invention.
FIG. 7 is an exemplary diagram illustrating formats of object image content and background image content generated according to an embodiment of the present invention.
FIG. 8 is an exemplary diagram illustrating a method of synchronizing an object image and a background image into one file according to an embodiment of the present invention.
FIG. 9 is an exemplary diagram illustrating a setup method for displaying an object image and a background image according to an embodiment of the present invention.
FIG. 10 is a view illustrating an example in which a layered hologram embodying system according to an embodiment of the present invention is installed.
FIG. 11 is a view illustrating an example in which a layered hologram embodying system according to another embodiment of the present invention is installed.

It is noted that the technical terms used herein are used only to describe specific embodiments and are not intended to limit the invention. Unless defined otherwise, the technical terms used herein should be interpreted in the sense generally understood by a person of ordinary skill in the art to which the present invention belongs, and should not be construed in an excessively broad or excessively narrow sense. Further, when a technical term used herein is an erroneous term that fails to accurately express the spirit of the present invention, it should be understood as replaced by a technical term that can be correctly understood by a person skilled in the art. In addition, the general terms used herein should be interpreted according to their dictionary definitions or in context, and should not be construed in an excessively narrow sense.

Also, singular forms used herein include plural referents unless the context clearly dictates otherwise. In the present application, terms such as 'comprising' or 'including' should not be construed as necessarily including all of the elements or steps described in the specification; some of the elements or steps may not be included, or additional elements or steps may further be included.

Further, the suffixes 'module' and 'part' used for components in this specification are given or used interchangeably merely for ease of drafting the specification, and do not by themselves carry distinct meanings or roles.

Furthermore, terms including ordinals such as first, second, etc. used in this specification can be used to describe various elements, but the elements should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, the first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings, wherein like reference numerals refer to like or similar elements throughout the several views, and redundant description thereof will be omitted.

In the following description, well-known functions or constructions are not described in detail, since they would obscure the invention in unnecessary detail. It should also be noted that the accompanying drawings are provided only to facilitate understanding of the present invention, and the technical idea of the present invention should not be construed as being limited by the accompanying drawings.

Prior to the description, it is noted that the terms '3D', 'three-dimensional', and 'stereoscopic', which are used interchangeably herein, have the same meaning.

In addition, 'display panel' is used herein as a generic term for a device on which an image that can be viewed by a person is displayed.

FIG. 1 shows in detail the main configuration of a content conversion apparatus for a layered hologram according to the present invention.

As shown in the figure, the content conversion apparatus for a layered hologram according to the present invention includes an extraction module 110, a mapping module 120, a viewpoint conversion module 130, and a synchronization module 140.

These constituent parts represent the content conversion apparatus for a layered hologram divided by function according to an embodiment of the present invention; one constituent part may be divided into a plurality of constituent parts, or a plurality of constituent parts may be integrated into one, and such embodiments are also included in the scope of the present invention.

The extraction module 110 generates a target image and a background image, which are still images, from the moving image of the 2D image content. To do so, the extraction module 110 extracts still images from the moving image of the 2D image content at a predetermined frame rate. The frame rate refers to the number of frames processed per second: for example, TV typically uses 30 frames per second, and film uses 24 frames per second. The frame rate may be selected, changed, and stored by the user.
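As a rough illustration of this extraction step, the following sketch pulls still frames out of a video at a chosen frame rate using OpenCV; the every-n-th-frame sampling scheme and the function name are assumptions made for illustration, not part of the patented method.

```python
import cv2

def extract_stills(video_path, target_fps):
    """Extract still frames from 2D video content at a predetermined frame rate.

    Sketch only: keeps every n-th frame so that roughly `target_fps`
    frames per second of source material are retained.
    """
    cap = cv2.VideoCapture(video_path)
    source_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(1, round(source_fps / target_fps))  # keep every n-th frame

    stills, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            stills.append(frame)
        index += 1
    cap.release()
    return stills

# e.g. extract_stills("content.mp4", 24) for film-style sampling
```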

The 2D image content obtained by the extraction module 110 may be a single-viewpoint image, that is, an image of an object and a background captured from a single position by a photographing apparatus.

The extraction module 110 separates, from each extracted still image, a target image containing the one or more objects; the remaining part left after the target image is separated becomes the background image.

The mapping module 120 converts the 2D image content into a 3D image. More specifically, the mapping module 120 performs mapping, using a specific map, on each of the object image and the background image separated by the extraction module 110. The specific map may be at least one of a depth map, a displacement map, a texture map, or a combination thereof.
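For illustration only, the sketch below turns an 8-bit depth map into a per-pixel displacement field, one possible "specific map" in the sense used above; the linear scaling and the 30 mm range are assumed values, not taken from the patent.

```python
import numpy as np

def depth_to_displacement(depth_map, max_offset_mm=30.0):
    """Convert an 8-bit depth map (0 = nearest) into a displacement field.

    Nearer pixels receive the largest displacement toward the viewer;
    the linear mapping is an illustrative assumption.
    """
    d = depth_map.astype(np.float32) / 255.0
    return (1.0 - d) * max_offset_mm
```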

In addition, the mapping module 120 converts the single-viewpoint 2D images into 3D images using various conversion techniques, including a 3D image converter or a 3D image conversion program.

The viewpoint conversion module 130 generates a multi-view image for each of the target image and the background image that have been converted into 3D images. To do so, the viewpoint conversion module 130 renders a multi-view AS3D image through projective reconstruction in three-dimensional space.

The viewpoint conversion module 130 therefore places the plurality of 3D images in a single three-dimensional space and converts them into multi-view AS3D images.

The synchronization module 140 synchronizes the multi-view AS3D object image and the multi-view AS3D background image that were separated from the same still image of the original 2D image content.

The content conversion method of the content conversion apparatus for a layered hologram is described below with reference to FIGS. 2 to 8.

FIG. 2 is a flowchart of a content conversion method of a content conversion apparatus for a layered hologram according to an embodiment of the present invention.

FIGS. 3 to 6 are diagrams for explaining a method of extracting a still image according to an embodiment of the present invention.

FIG. 7 is an exemplary diagram illustrating formats of object image content and background image content generated according to an embodiment of the present invention.

FIG. 8 is an exemplary diagram illustrating a method of synchronizing an object image and a background image into one file according to an embodiment of the present invention.

Referring to FIG. 2, in step S210, still images are extracted, at a predetermined frame rate, from the 2D image content acquired by the content conversion apparatus for a layered hologram.

FIG. 3A is a diagram illustrating a state in which a plurality of still images have been extracted. Each extracted still image is 2D content.

For each still image thus extracted, the following operation is performed.

Next, in step 220, the content conversion apparatus for a layered hologram separates the still image extracted in step 210 into an object image and a background image by extracting at least one object as the target image and treating the unextracted remainder as the background image.

One example of a technique used for such separation is rotoscoping technology.

As shown in FIG. 3B, the outline of a desired object, such as a person, is traced in a CG program over the actually photographed image, and the person image and the background image are thereby separated.

To extract and track a specific region across frames, an optical-flow method, a kernel-based mean-shift method that uses the similarity of the object distribution, or a contour tracking method may be used. In addition, by making full use of depth values, the region can be tracked smoothly even when occlusion occurs.
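As a hedged illustration of automated object/background separation (the patent itself relies on rotoscoping combined with the tracking methods above), the sketch below uses OpenCV's GrabCut seeded with a bounding box; the helper name and the box-seeded workflow are assumptions for illustration.

```python
import cv2
import numpy as np

def split_target_and_background(frame, rect):
    """Separate a target (object) image from the residual background image.

    `rect` = (x, y, w, h) is a rough bounding box around the object.
    GrabCut stands in for the rotoscoping / tracking described above.
    """
    mask = np.zeros(frame.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    target = frame * fg[:, :, None]             # object image
    background = frame * (1 - fg)[:, :, None]   # residual image with an empty region
    return target, background, fg
```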

Since the depth information of each object separated from the 2D plane still describes only a simple planar shape, a method of expressing the actual object is needed. Steps 230 and 240 serve this purpose.

In step 230, mapping is performed for each of the target image and the background image. In this step, the 2D image content in a simple plane form is converted into the 3D image content.

The mapping is described below with reference to FIG. 4.

FIG. 4 is a flowchart illustrating mapping according to an embodiment of the present invention.

First, in step 231, it is determined whether the image to be processed is a background image. Since the background image is a residual image obtained by separating the target image from one image, an empty area 601 as shown in FIG. 5 (a) is formed by separation of the target image.

In step 232, as shown in FIG. 5 (b), the empty area 601 is filled with pixels by referring to the pixels around the empty area 601 and to the information of the previous/next frames of the current image.

If it is determined in step 231 that the working image is not a background image, the process proceeds directly to step 233 without going through step 232.
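The hole filling of step 232 might look roughly like the following single-frame sketch using OpenCV inpainting; the patent also consults the previous/next frames, which is omitted here for brevity.

```python
import cv2

def fill_empty_region(background, hole_mask):
    """Fill the empty area left where the target image was cut out (step 232).

    `hole_mask` is a uint8 mask that is non-zero where the target used to be.
    Only pixels surrounding the hole are used; temporal information from
    neighbouring frames is not considered in this simplified sketch.
    """
    return cv2.inpaint(background, hole_mask, 5, cv2.INPAINT_TELEA)
```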

In step 233, when the working image, which is either the target image or the background image, contains a plurality of objects, the depth of each object is extracted, and the objects are relocated with reference to their depth values (step 234).

The depth value of each pixel represents the 3D distance between objects in the image and is expressed as a value between 0 and 255; the closer an object is, the closer its depth value is to zero. Therefore, an object with a small depth value is placed in front of an object with a large depth value.

For example, as shown in FIG. 5C, the object T1 having the smallest depth value is disposed at the front, the object T2 having a middle depth value is disposed in the middle, and the object T3 having the largest depth value is disposed at the rear.

As shown in FIG. 5 (d), when viewed from the side of the display, the object T1 having the smallest depth value is disposed in front of the display panel D, and the object T3 having the largest depth value is disposed behind the display panel D.
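A minimal sketch of the depth-based rearrangement described above, assuming each separated object carries a mean depth value in [0, 255] with 0 nearest to the viewer; the data layout and the example depth numbers are purely illustrative.

```python
def order_by_depth(objects):
    """Rearrange objects front-to-back from their depth values (steps 233-234).

    `objects` is a list of dicts with a per-object mean depth in [0, 255],
    where 0 is nearest to the viewer. The smallest depth value ends up in front.
    """
    return sorted(objects, key=lambda obj: obj["depth"])

# Example with the three objects of FIG. 5C (depths invented for illustration):
layers = [{"name": "T2", "depth": 120}, {"name": "T3", "depth": 230}, {"name": "T1", "depth": 15}]
print([o["name"] for o in order_by_depth(layers)])  # ['T1', 'T2', 'T3'] -> front, middle, rear
```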

In step 234, the rearranged objects are given a three-dimensional effect, forming the background image and the target image as 3D images. FIG. 5 (e) illustrates an object image to which a three-dimensional effect has been imparted.

Without this step, a cardboard effect occurs, in which the object looks like a flat paper cutout.

Referring again to FIG. 2, in step 240, each of the 3D object image and the 3D background image is converted into a multi-view AS3D image.

A plurality of viewpoint images are generated for each of the 3D object image and the 3D background image. As shown in FIGS. 6A and 6B, in order to generate a multi-view image from a single-viewpoint image, three-dimensional image information such as a disparity map, motion compensation information, and object segmentation information is extracted from the 3D object image and the 3D background image, and N virtual viewpoint images are generated therefrom.
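A simplified depth-image-based rendering sketch of this step: N virtual viewpoints are synthesized by shifting pixels horizontally in proportion to how near they are. The linear disparity model, the 20-pixel maximum disparity, and the hole handling are assumptions, not values taken from the patent.

```python
import numpy as np

def synthesize_views(image, depth, n_views=9, max_disparity=20):
    """Generate N virtual viewpoint images from one image plus its depth map.

    Forward-warps each row; disparity grows as depth values approach 0
    (nearer objects move more between views). Disocclusion holes are left
    unfilled in this sketch. Assumes n_views >= 2.
    """
    h, w = depth.shape
    disparity = (1.0 - depth.astype(np.float32) / 255.0) * max_disparity
    xs = np.arange(w)
    views = []
    for k in range(n_views):
        # spread the virtual cameras symmetrically around the original position
        offset = (k - (n_views - 1) / 2.0) / ((n_views - 1) / 2.0)
        view = np.zeros_like(image)
        for y in range(h):
            shifted = np.clip(xs + (offset * disparity[y]).astype(int), 0, w - 1)
            view[y, shifted] = image[y, xs]
        views.append(view)
    return views
```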

In step 250, the AS3D object image converted into the multi-view image is synchronized with the AS3D background image.

A raster file of the target image and a raster file of the background image are generated so that the generated AS3D target image and AS3D background image are each reproduced on a separate screen. For example, as shown in FIG. 7, assuming that the object image and the background image are each generated as nine-view images, as in the embodiment of the present invention, each generated raster file is composed of nine tiles, and each tile image has a resolution of 1280 x 720.

To synchronize the generated raster files, as shown in FIG. 8, each viewpoint image of the target image content and the corresponding viewpoint image of the background image content may be combined into a pair of tiles, producing nine new tiles. For example, the first-viewpoint image of the target image content and the first-viewpoint image of the background image content are grouped together as the content for the first viewpoint. Each pair of viewpoint images may then be processed for synchronization; for example, the target image content and the background image content may carry time information for each arbitrary unit, and synchronization of that time information can be performed at reproduction time.

By pairing the target image content and the background image content, the generated image can have a total resolution of 3840 x 4320 or 7680 x 2160.
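The tiling and pairing described above could be sketched as follows, assuming nine views of 1280 x 720 each are packed into a 3 x 3 raster (3840 x 2160) and the target and background rasters are then stacked vertically (3840 x 4320) or side by side (7680 x 2160); per-view time-code synchronization is omitted from this sketch.

```python
import numpy as np
import cv2

def make_raster(views, cols=3, rows=3, tile_size=(1280, 720)):
    """Pack nine viewpoint images into one 3x3 tiled raster (3840x2160)."""
    tiles = [cv2.resize(v, tile_size) for v in views]
    grid = [np.hstack(tiles[r * cols:(r + 1) * cols]) for r in range(rows)]
    return np.vstack(grid)

def pair_for_playback(target_views, background_views, stack="vertical"):
    """Combine the target raster and the background raster into one frame:
    3840x4320 when stacked vertically, 7680x2160 when placed side by side."""
    t, b = make_raster(target_views), make_raster(background_views)
    return np.vstack([t, b]) if stack == "vertical" else np.hstack([t, b])
```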

FIG. 9 is an exemplary diagram illustrating a setup method for displaying an object image and a background image according to an embodiment of the present invention.

As shown in FIGS. 9 (a) and 9 (b), display settings can be configured through a display screen of a PC or the like connected to the playback apparatus. In order to reproduce the target image and the background image on their respective display panels, the two displays are set to extended mode.

In the embodiment of the present invention, because luminance and resolution both decrease as the number of viewpoints increases, a nine-view display is adopted to obtain a stable image when reproducing content for the stereoscopic image array, although the invention is not limited thereto. When a nine-view display is adopted, it is desirable to use a high-resolution LCD panel of 4K (3840x2160) or higher in order to mitigate the loss of brightness and resolution.
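A back-of-the-envelope check of the resolution trade-off mentioned above, assuming the 4K panel's pixels are split evenly across the nine views (real lenticular or barrier layouts distribute sub-pixels in a slanted pattern, so this is only an approximation):

```python
# Rough per-view pixel budget on a nine-view autostereoscopic 4K panel.
panel_w, panel_h, n_views = 3840, 2160, 9
pixels_per_view = (panel_w * panel_h) // n_views
print(pixels_per_view)  # 921600 pixels per view, i.e. roughly a 1280x720 image
```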

FIG. 10 is a view illustrating an example in which a layered hologram embodying system according to an embodiment of the present invention is installed.

As shown in FIG. 10, of the two separated display panels, the first display panel 210 is installed at the rear and displays the background image, while the second display panel is mounted on the ceiling or on the floor and outputs the target image. The target image is floated by a two-way mirror installed at a 45-degree angle relative to the second display panel, thereby forming the hologram. As a result, the target image is displayed in a layered manner, at a different depth, in front of the background image output by the first display panel.
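A small plane-mirror calculation, for orientation only: with the two-way mirror at 45 degrees, the floated target image appears about as far behind the mirror plane as the second display panel sits from the mirror. The distance below is invented for illustration and does not come from the patent.

```python
# Plane-mirror geometry of the 45-degree two-way mirror (Pepper's-ghost style).
panel_to_mirror_cm = 40.0                       # second display panel -> mirror (assumed)
floating_image_depth_cm = panel_to_mirror_cm    # virtual image appears this far behind the mirror
print(f"target image floats about {floating_image_depth_cm:.0f} cm behind the mirror plane")
```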

In general, the depth range that can be represented on a glasses-type 3D display panel consists of the maximum pop-out distance and the maximum depth-in distance. The maximum pop-out distance and the maximum depth-in distance are each about 0.5 times the display panel height H, so the maximum depth perceivable on one display is roughly the height of the display panel.

As shown in FIG. 10, when the background image and the target image are output on separate display panels, a deeper sense of depth can be expressed, because the physical distance between the first display panel and the second display panel is added to this depth range.
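An illustrative depth-budget calculation combining the rule of thumb above with the physical separation between the two image planes; all numbers are assumed, not taken from the patent.

```python
# Approximate perceivable depth of the layered setup (illustrative values).
H1, H2 = 0.53, 0.53    # heights of the two panels in metres (roughly 42-inch class)
separation = 0.60      # physical distance between the two image planes in metres
total_depth = H1 + separation + H2   # ~0.5*H pop-out + 0.5*H depth-in per panel, plus the gap
print(f"approximate perceivable depth: {total_depth:.2f} m")
```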

FIG. 11 is a view illustrating an example in which a layered hologram embodying system according to another embodiment of the present invention is installed.

The first display panel 410 and the second display panel 420 are arranged on one plane, and a two-way mirror is provided for each of the first display panel 410 and the second display panel 420.

The background image is reproduced on the first display panel 410, which is disposed farther to the rear with respect to the viewer, and the target image is reproduced on the second display panel 420, which is disposed farther to the front.

The background image output from the first display panel 410 is floated by the first two-way mirror 430 to generate a first hologram, and the object image output from the second display panel 420 is floated by the second two-way mirror 440 to generate a second hologram.

Accordingly, the first hologram and the second hologram are displayed in a stacked manner with different depths.

That is, the physical distance between the first display panel 410 and the second display panel 420 alone provides an additional sense of depth between the first hologram and the second hologram.

It will be appreciated that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be loaded onto a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, so that the instructions executed by the processor of the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block(s). These computer program instructions may also be stored in a computer-usable or computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, so that the instructions stored in the computer-usable or computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in the flowchart block(s). The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus, producing a computer-implemented process so that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block(s).

In addition, each block may represent a module, segment, or portion of code that includes one or more executable instructions for executing the specified logical function (s). It should also be noted that in some alternative implementations, the functions mentioned in the blocks may occur out of order. For example, two blocks shown in succession may actually be executed substantially concurrently, or the blocks may sometimes be performed in reverse order according to the corresponding function.

The term 'part' used in the present embodiment refers to software or to a hardware component such as an FPGA or an ASIC, and a 'part' performs certain roles. However, a 'part' is not limited to software or hardware. A 'part' may be configured to reside on an addressable storage medium and may be configured to run on one or more processors. Thus, by way of example, a 'part' may include components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided within the components and 'parts' may be combined into a smaller number of components and 'parts' or further separated into additional components and 'parts'. In addition, the components and 'parts' may be implemented so as to run on one or more CPUs within a device or a secure multimedia card.

It will be understood by those skilled in the art that the present specification may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. It is therefore to be understood that the above-described embodiments are illustrative in all aspects and not restrictive. The scope of the present specification is defined by the appended claims rather than by the foregoing detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be interpreted as falling within the scope of the present specification.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments, and those embodiments are not intended to limit the scope of the specification. It will be apparent to those skilled in the art that other modifications based on the technical idea of the present invention are possible in addition to the embodiments disclosed herein.

110: Extraction module
120: Mapping module
130: Viewpoint conversion module
140: Synchronization module

Claims (10)

1. A content conversion method of a content conversion apparatus for a layered hologram that generates a layered hologram from one piece of 2D image content, the method comprising the steps of:
extracting a still image from the 2D image content according to a predetermined frame rate;
separating, from the extracted still image, a target image and a background image including the one or more objects;
performing mapping on each of the separated target image and background image using a specific map;
extracting a depth and a stereoscopic value for each of the mapped target image and background image to generate a multi-view image; and
combining the generated multi-view image of the target image and the generated multi-view image of the background image into one file to generate raster files, and generating a reproduced image by synchronizing the generated raster file for the target image with the raster file for the background image,
wherein the background image of the reproduced image is reproduced on a first display panel and the target image of the reproduced image is reproduced on a second display panel, the first display panel being installed on the rear side of the second display panel, and the target image reproduced on the second display panel is floated, by a two-way mirror obliquely arranged on the second display panel in front of the first display panel, in front of the background image reproduced on the first display panel to form a hologram, whereby the hologram is formed of a plurality of layers.

2. The method according to claim 1, wherein the step of performing mapping on each of the separated target image and background image using the specific map comprises:
extracting a depth value of each pixel for at least one object included in the target image and the background image; and
rearranging an object having a smaller extracted depth value in front of an object having a larger depth value.

3. The method according to claim 2, further comprising, before extracting the depth value of each pixel for the at least one object included in the target image:
filling the blank area of the background image from which the target image has been separated with pixels, by referring to the pixels around the boundary of the blank area.

4. (Deleted)

5. The method according to claim 1, wherein the specific map is at least one of a depth map, a displacement map, a texture map, and a combination thereof.

6. (Deleted)

7. A content conversion apparatus for a layered hologram that generates a layered hologram from one piece of 2D image content, the apparatus comprising:
an extraction module for extracting a still image from the 2D image content according to a predetermined frame rate and separating, from the extracted still image, an object image and a background image including at least one object;
a mapping module for mapping the target image and the background image using a specific map;
a viewpoint conversion module for generating a multi-view image for each of the mapped target image and background image; and
a synchronization module for combining the generated multi-view image of the target image and the generated multi-view image of the background image into one file to generate raster files, and for generating a reproduced image by synchronizing the generated raster file for the target image with the raster file for the background image,
wherein the background image of the reproduced image is reproduced on a first display panel and the target image of the reproduced image is reproduced on a second display panel, the first display panel being installed on the rear side of the second display panel, and the target image reproduced on the second display panel is floated, by a two-way mirror obliquely arranged on the second display panel in front of the first display panel, in front of the background image reproduced on the first display panel to form a hologram.

8. (Deleted)

9. The apparatus of claim 7, wherein the extraction module fills pixels in the blank area of the background image from which the target image has been separated, by referring to the pixels around the border of the blank area, before the mapping is performed.

10. The apparatus of claim 7, wherein the mapping module extracts a depth value of each pixel for at least one object included in the image and arranges an object having a smaller depth value in front of an object having a larger depth value.
KR1020150077107A 2015-06-01 2015-06-01 Contents convert method for layered hologram and apparatu KR101754976B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020150077107A KR101754976B1 (en) 2015-06-01 2015-06-01 Contents convert method for layered hologram and apparatu
PCT/KR2015/009492 WO2016195167A1 (en) 2015-06-01 2015-09-09 Content conversion method, content conversion apparatus, and program for multi-layered hologram

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150077107A KR101754976B1 (en) 2015-06-01 2015-06-01 Contents convert method for layered hologram and apparatu

Publications (2)

Publication Number Publication Date
KR20160141446A KR20160141446A (en) 2016-12-09
KR101754976B1 (en) 2017-07-06

Family

ID=57441403

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150077107A KR101754976B1 (en) 2015-06-01 2015-06-01 Contents convert method for layered hologram and apparatu

Country Status (2)

Country Link
KR (1) KR101754976B1 (en)
WO (1) WO2016195167A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107071392A (en) * 2016-12-23 2017-08-18 网易(杭州)网络有限公司 Image processing method and device
KR20180116708A (en) * 2017-04-17 2018-10-25 주식회사 쓰리디팩토리 Method and apparatus for providing contents for layered hologram
KR20240093270A (en) * 2022-12-15 2024-06-24 한국전자기술연구원 A method for generating hologram contents using a video recording function of a user terminal and a method for reading a holographic security code generated by the method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101451236B1 (en) * 2014-03-03 2014-10-15 주식회사 비즈아크 Method for converting three dimensional image and apparatus thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100533328B1 (en) * 2003-06-27 2005-12-05 한국과학기술연구원 Method of rendering a 3D image from a 2D image
KR20100036683A (en) * 2008-09-30 2010-04-08 삼성전자주식회사 Method and apparatus for output image
KR101356544B1 (en) * 2012-03-29 2014-02-19 한국과학기술원 Method and apparatus for generating 3d stereoscopic image
KR20140003741A (en) * 2012-06-26 2014-01-10 세종대학교산학협력단 Method for manufacturing 3d contents and apparatus for thereof

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101451236B1 (en) * 2014-03-03 2014-10-15 주식회사 비즈아크 Method for converting three dimensional image and apparatus thereof

Also Published As

Publication number Publication date
KR20160141446A (en) 2016-12-09
WO2016195167A1 (en) 2016-12-08

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E90F Notification of reason for final refusal
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right