KR101867212B1 - Apparatus and method for synchronication of image frames captured from color and depth camera - Google Patents
- Publication number
- KR101867212B1 (application KR1020170004361A)
- Authority
- KR
- South Korea
- Prior art keywords
- motion
- image
- depth
- dimensional
- dimensional image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/285—Analysis of motion using a sequence of stereo image pairs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/246—Calibration of cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/257—Colour aspects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/296—Synchronisation thereof; Control thereof
Abstract
Description
The present invention relates to techniques for synchronizing images captured with a color camera and a depth camera, and in particular to a technical idea for temporal and spatial synchronization between a color image and a depth image by predicting the motion of an object in the video.
As technology develops, content using 3D technologies such as 3D movies, virtual reality, and augmented reality is increasing. Producing such content typically consumes a great deal of manpower, time, and cost, and several 3D information acquisition devices have emerged to reduce this. Among them, RGBD cameras, which can acquire color and three-dimensional information in real time, are actively used for depth video capture.
However, even with the latest RGBD cameras, the color and depth videos acquired are not time-synchronized, owing to technical limitations. This temporal asynchrony in turn produces a spatial misalignment between the two images, feeding distorted information to the many vision algorithms that analyze both together and making it difficult to apply existing 3D image restoration and enhancement algorithms.
To correct the viewpoint difference between the two videos, prior approaches have used an external device that aligns the shooting times of the two capture devices, or have acquired video by moving the camera around a still object and exploiting the continuity of the sequence.
This approach requires additional equipment, which limits the usage environment, and it cannot be applied to images that have already been shot. Restricting the shooting method means it cannot be used in the common scenario of shooting a moving object. In addition, since depth information and color information have very different characteristics, it is very difficult to find feature values common to both images.
Color/depth cameras currently being commercialized achieve spatial synchronization of the acquired images through mathematical calibration. However, for video of moving objects, spatial synchronization cannot be guaranteed because the capture times of the two cameras are not synchronized.
General color/depth video techniques deal with a single image pair and presuppose spatial synchronization between images through calibration. These techniques therefore cannot be applied, or apply poorly, to video in which temporal and spatial synchronization has not been achieved.
The present invention aims at performing temporal/spatial synchronization by approaching the temporal error between color/depth videos, previously constrained by existing technology and mechanical limits, as a problem of predicting the motion of moving objects in the two videos.
In the present invention, only the images obtained from a commercial RGBD camera are used, without a separate measuring device, and the temporal and spatial synchronization problem is solved based on the similarity of object motion information rather than on feature matching based on visual similarity between the two images.
The object of the present invention is to provide a technique for assisting accurate shooting while meeting the purpose of a commercialization device capable of time-space synchronization without additional equipment.
The present invention aims at overcoming the practical limitations of many image processing / vision algorithms that assume depth image matching in advance.
The present invention aims at effectively improving the quality of a database of existing vision algorithms.
The present invention aims at significantly improving the performance of the final algorithm.
An object of the present invention is to expand the application field using commercial RGBD cameras.
An image synchronization apparatus according to an embodiment includes a three-dimensional motion collection unit for separating a target object from a depth image and acquiring a three-dimensional motion of the separated target object; a subframe generation unit for generating at least one subframe between depth images based on the acquired three-dimensional motion; a motion estimator for estimating at least one two-dimensional image motion by projecting the generated at least one subframe; a matching processor for determining whether the estimated at least one two-dimensional image motion matches the two-dimensional image motion acquired from a color image corresponding to the depth image; and a correction processor for regenerating a depth image frame based on the determined matching.
The subframe generation unit may generate the at least one subframe so that it has a depth value lying between those of adjacent depth images, taking the depth values of the adjacent depth images into consideration.
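The patent does not specify how the subframe generation unit interpolates between adjacent depth frames; a minimal sketch, assuming simple linear blending (NumPy; `generate_subframes` and `n_sub` are illustrative names, not from the patent):

```python
import numpy as np

def generate_subframes(depth_a, depth_b, n_sub=3):
    """Generate n_sub intermediate depth frames whose values lie
    between two adjacent depth frames (linear blend assumed)."""
    subframes = []
    for k in range(1, n_sub + 1):
        t = k / (n_sub + 1)          # interpolation weight in (0, 1)
        subframes.append((1.0 - t) * depth_a + t * depth_b)
    return subframes

# Two adjacent 2x2 depth frames
a = np.zeros((2, 2))
b = np.full((2, 2), 4.0)
subs = generate_subframes(a, b, n_sub=3)
# Each subframe's values lie strictly between those of a and b
```

Each generated subframe is a candidate depth frame at an intermediate time, which is what the later matching step searches over.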
The matching processing unit according to an embodiment calculates the similarity between the estimated at least one two-dimensional image motion and the two-dimensional image motion acquired from the color image, determines, from the result, a two-dimensional image motion whose similarity is equal to or higher than a threshold value among the estimated motions, and determines the selected motion as corresponding to the two-dimensional image motion obtained from the color image.
The correction processing unit may regenerate a depth image frame for a two-dimensional image motion acquired from the color image using a depth image frame corresponding to the determined two-dimensional image motion.
The correction processing unit may correct the synchronization error of the depth image with respect to the color images using the relationship between the depth image frame corresponding to the determined two-dimensional image motion and the depth image frames adjacent to it.
According to an exemplary embodiment, an image synchronization method includes separating a target object from a depth image; acquiring a three-dimensional motion of the separated target object; generating at least one subframe between depth images based on the acquired three-dimensional motion; estimating at least one two-dimensional image motion by projecting the generated at least one subframe; determining whether the estimated at least one two-dimensional image motion matches the two-dimensional image motion acquired from a color image corresponding to the depth image; and regenerating a depth image frame based on the determined matching.
Generating the at least one subframe according to an exemplary embodiment may include generating the at least one subframe so that it has a depth value lying between those of adjacent depth images, in consideration of the depth values of the adjacent depth images.
Determining whether the motions match according to an embodiment includes calculating the similarity between the estimated at least one two-dimensional image motion and the two-dimensional image motion acquired from the color image; determining, from the result, a two-dimensional image motion whose similarity is equal to or higher than a threshold value among the estimated motions; and determining the selected motion as corresponding to the two-dimensional image motion obtained from the color image.
Regenerating the depth image frame according to an exemplary embodiment may include regenerating a depth image frame for the two-dimensional image motion obtained from the color image, using the depth image frame corresponding to the determined two-dimensional image motion.
Regenerating the depth image frame according to an exemplary embodiment may include correcting the synchronization error of the depth image with respect to the color images, using the relationship between the depth image frame corresponding to the determined two-dimensional image motion and the depth image frames adjacent to it.
According to one embodiment, temporal/spatial synchronization can be performed by approaching the temporal error between color/depth videos, previously constrained by existing technology and mechanical limits, as a problem of predicting the motion of moving objects in the two videos.
According to an exemplary embodiment, the temporal and spatial synchronization problem can be solved using only the images obtained from a commercial RGBD camera, without a separate measuring device, based on the similarity of object motion information rather than on feature matching based on visual similarity between the two images.
According to one embodiment, it is possible to provide a technique for assisting accurate shooting while meeting the purpose of a commercialization device capable of time-space synchronization without additional equipment.
According to an exemplary embodiment, it is possible to overcome practical limitations of a number of image processing / vision algorithms that assume a depth image matching in advance.
According to one embodiment, the quality of the database of existing vision algorithms can be effectively improved.
According to one embodiment, the performance of the final algorithm can be greatly improved.
According to one embodiment, the application field using a commercially available RGBD camera can be greatly expanded.
FIG. 1 is a view for explaining an image pickup apparatus according to an embodiment of the present invention.
FIG. 2 is a view for explaining an embodiment for generating a detailed frame based on three-dimensional motion.
FIG. 3 shows an image for generating a detailed frame while slightly moving from the first image using the acquired motion information.
FIG. 4 is a diagram for explaining a process of acquiring a correspondence point between color and depth through motion relevance according to an embodiment.
FIG. 5 is a diagram for explaining a process of finding a corresponding point according to an embodiment.
FIG. 6 is a view for explaining a method of synchronizing shot images according to an embodiment.
The specific structural or functional descriptions of embodiments of the present invention disclosed herein are presented only for the purpose of describing embodiments according to the concepts of the present invention; the embodiments may be implemented in various forms and are not limited to those described herein.
Embodiments according to the concepts of the present invention are capable of various modifications and may take various forms, so the embodiments are illustrated in the drawings and described in detail herein. However, this is not intended to limit the embodiments to the specific disclosed forms; the invention includes all changes, equivalents, and alternatives falling within its spirit and scope.
The terms first, second, and the like may be used to describe various elements, but the elements should not be limited by these terms. The terms serve only to distinguish one element from another; for example, without departing from the scope of rights according to the concepts of the present invention, a first element may be referred to as a second element, and similarly, a second element may be referred to as a first element.
When an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that no intervening elements are present. Other expressions describing relationships between components, such as "between" versus "immediately between" or "adjacent to" versus "directly adjacent to", should be interpreted in the same way.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this specification, the terms "comprises" or "having" specify the presence of stated features, numbers, steps, operations, elements, parts, or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, parts, or combinations thereof.
Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries are to be interpreted as having a meaning consistent with their meaning in the context of the relevant art and, unless explicitly defined herein, are not to be interpreted in an idealized or overly formal sense.
Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. However, the scope of the patent application is not limited or limited by these embodiments. Like reference symbols in the drawings denote like elements.
FIG. 1 is a diagram for explaining an image synchronization apparatus according to an embodiment.
The photographed
The photographed
In detail, the
First, the three-dimensional motion collection unit may separate a target object from a depth image and acquire a three-dimensional motion of the separated target object.
The depth image may include a plurality of target objects. The target object may mean an object to be separated to obtain three-dimensional motion among the target objects.
Also, the three-dimensional motion can be interpreted as a positional change with respect to the depth value according to the motion of the target object.
Next, the detailed frame generation unit may generate at least one detailed frame between the depth images based on the acquired three-dimensional motion.
The three-dimensional motion may include a depth value corresponding to a particular color pixel. However, the sampling interval of the depth values is coarser than the interval of the pixels.
The detailed
That is, the
The ICP (Iterative Closest Point) algorithm registers the current data to an existing data set. In other words, it finds associations using the depth values closest to the depth values included in the depth image, finds the remaining depth values from those associations, and thereby fills the intervals between depth values more densely.
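As a concrete illustration of the association-and-alignment idea described above, here is a minimal single-iteration sketch of point-to-point ICP: a nearest-neighbour association step followed by the SVD (Kabsch) solution for the rigid motion. The patent names ICP but not a particular variant, so these details are assumptions:

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: associate each source point with
    its nearest destination point, then solve for the rigid motion (R, t)
    that best aligns the associated pairs (Kabsch / SVD solution)."""
    # 1. Nearest-neighbour association
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    nn = dst[d2.argmin(axis=1)]
    # 2. Best rigid motion for the associated pairs
    mu_s, mu_d = src.mean(0), nn.mean(0)
    H = (src - mu_s).T @ (nn - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# A point cloud and a translated copy of it
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
dst = src + np.array([0.2, 0.1, 0.3])
R, t = icp_step(src, dst)
# For a pure translation, R is the identity and t recovers the shift
```

In practice the association and alignment steps are repeated until the motion converges; a k-d tree replaces the brute-force distance matrix for real point clouds.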
The
The matching
In general, a third depth value located between the first depth value and the second depth value may be estimated, and the estimated third depth value may be determined as the depth value corresponding to another color value. More specifically, a third color value, intermediate between the first color value corresponding to the first depth value and the second color value corresponding to the second depth value, may be matched to the determined depth value. That is, the third depth value can be determined and matched under the assumption that the first depth value and the first color value, and the second depth value and the second color value, already match exactly.
Alternatively, the matching
For example, the matching
For example, the matching
For example, the matching
The matching
Thus, the matching
As a result, the
On the other hand, the
That is, the present invention does not assume that the first depth value and the first color value, or the second depth value and the second color value, already match exactly. Instead, it first matches the third depth value and the third color value, and then matches the first depth value with the first color value, or the second depth value with the second color value, using the correction value generated in that matching, thereby correcting the synchronization error.
2 is a view for explaining an embodiment for generating a detailed frame based on three-dimensional motion.
The first and second depth images can be represented by points on the three-dimensional space, which correspond to the
[Equation 1]

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$

Equation 1 expresses the three-dimensional motion of the target object as a matrix operation. Like an equation of the form Y = aX, it maps a point X to a primed point X' = aX, so the primed quantities are the result of applying Equation 1. Here R denotes rotation and T denotes translation, and the subscripts indicate the matrix sizes: R is 3×3, T is 3×1, and the zero matrix is 1×3.
That is, since the current point is a three-dimensional shape represented by the coordinates x, y, z, multiplying by the matrix that applies the rotation and translation moves its three-dimensional position, and the moved position can be expressed as x', y', z'.
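The rotation-plus-translation operation of Equation 1 can be sketched in code; the 4×4 homogeneous form follows the matrix sizes described above (NumPy; function names are illustrative):

```python
import numpy as np

def make_transform(R, t):
    """Build the 4x4 homogeneous matrix of Equation 1 from a 3x3
    rotation R and a 3x1 translation t."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M                      # bottom row stays [0 0 0 1]

def apply_transform(M, points):
    """Move (x, y, z) points to (x', y', z') with X' = R X + T."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (M @ homog.T).T[:, :3]

# 90-degree rotation about z plus a translation
R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t = np.array([1., 2., 3.])
M = make_transform(R, t)
moved = apply_transform(M, np.array([[1., 0., 0.]]))
# [1, 0, 0] rotates to [0, 1, 0], then translates to [1, 3, 3]
```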
On the other hand, at 210, the blue dot can be interpreted as a converted point cloud.
As shown at 220, it can be seen that depth values are shifted and aligned between two different depth values in three-dimensional motion. That is, using the ICP algorithm, points in the image at 210 can be moved to points in the image at 220 to align.
FIG. 3 shows an image for generating a detailed frame while slightly moving from the first image using the acquired motion information.
FIG. 4 is a diagram for explaining a process of acquiring a correspondence point between color and depth through motion relevance according to an embodiment.
According to an embodiment (400), a color image can be matched with a two-dimensional motion for a depth image using a motion clue.
Specifically,
The color values of the
FIG. 5 is a diagram 500 illustrating a process of finding a corresponding point according to an embodiment.
The left image and the right image can be interpreted as a first color image and a second color image, respectively. In the case of the
A
More specifically, the first and second depth images can be represented as points in three-dimensional space, which correspond to the left points in FIG. 2. When these points are appropriately moved through the optimization algorithm using Equation 1, they almost overlap, as shown in the right image of FIG. 2; if the ideal motion is found, they overlap completely.
In other words, by finding the three-dimensional motion, one can determine where each point moves and which point it matches.
When this information is projected onto a two-dimensional plane, the positions of the corresponding points are known, and it is possible to see how these points move.
In this case, the point where the first and third dimensional points before the movement are projected is represented by a
In other words, the first and second color values at the projected points matched through the three-dimensional information should have the same value; since they do not in the first input image, the frame in which slightly shifting the three-dimensional values makes them identical can be found.
6 is a view for explaining a method of synchronizing shot images according to an embodiment.
The method of synchronizing shot images according to an exemplary embodiment may capture a target object with a color/depth camera (step 601) and thereby acquire a depth image (step 602) and a color image (step 603).
Meanwhile, the captured image synchronization method according to an exemplary embodiment may separate the target object from the acquired depth image (step 604). The photographed objects are all the objects captured by the color/depth camera, and the target object can be interpreted as the one among them for which motion information is acquired.
Next, the captured image synchronization method according to one embodiment may acquire a three-dimensional motion of the separated target object (step 605). In addition, the captured image synchronization method according to an exemplary embodiment may generate at least one detailed frame between the depth images based on the acquired three-dimensional motion (step 606).
To generate the at least one detailed frame, at least one detailed frame having a depth value lying between those of adjacent depth images may be generated in consideration of the depth values of the adjacent depth images.
The shot image synchronization method may estimate at least one two-dimensional image motion by projecting the at least one generated detailed frame (step 607). That is, since a generated detailed frame is three-dimensional information with depth values, projecting its depth values yields a two-dimensional form that can be compared against the two-dimensional color image.
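The projection in step 607 can be illustrated with a standard pinhole camera model; the patent does not specify a projection model or intrinsic parameters, so `fx`, `fy`, `cx`, `cy` below are illustrative assumptions:

```python
import numpy as np

def project_to_2d(points, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Project (x, y, z) points onto the image plane with a pinhole
    camera model: u = fx * x / z + cx, v = fy * y / z + cy."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)

pts = np.array([[0.0, 0.0, 1.0],    # on the optical axis
                [0.1, 0.0, 1.0]])
uv = project_to_2d(pts)
# The on-axis point projects to the principal point (cx, cy)
```

Projecting each detailed frame this way produces the two-dimensional point positions whose frame-to-frame displacement is the "two-dimensional image motion" compared against the color video.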
According to an embodiment of the present invention, it is possible to determine whether the estimated at least one two-dimensional image motion matches the two-dimensional image motion acquired from the color image corresponding to the depth image. To this end, the shot image synchronization method can search for the frame whose projected three-dimensional motion is most similar to the two-dimensional motion of the color image among the depth images (step 608).
To determine whether the motions match, the similarity between the estimated at least one two-dimensional image motion and the two-dimensional image motion acquired from the color image can be calculated. From the result, a two-dimensional image motion whose similarity is equal to or higher than a threshold value is determined among the estimated motions, and the determined two-dimensional image motion is treated as corresponding to the two-dimensional image motion acquired from the color image.
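The similarity measure and threshold are not specified in the patent; as one plausible sketch, the candidate 2D motions (one per projected subframe) can be scored against the color-image motion by mean displacement error mapped into (0, 1]:

```python
import numpy as np

def motion_similarity(motion_a, motion_b):
    """Similarity between two 2D motion fields (N x 2 displacement
    vectors), mapped into (0, 1]: 1 means identical motions."""
    err = np.linalg.norm(motion_a - motion_b, axis=1).mean()
    return 1.0 / (1.0 + err)

def best_matching_motion(candidates, color_motion, threshold=0.5):
    """Pick the candidate 2D motion (one per projected subframe) most
    similar to the color-image motion, if it clears the threshold."""
    scores = [motion_similarity(c, color_motion) for c in candidates]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

color = np.array([[1.0, 0.0], [0.0, 1.0]])
cands = [color + 2.0, color.copy(), color + 0.4]
idx = best_matching_motion(cands, color)
# The exact copy (index 1) has similarity 1.0 and is selected
```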
In addition, the captured image synchronization method according to an exemplary embodiment may regenerate the depth image frame based on the determined matching (step 609).
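The flow of steps 601 through 609 can be sketched end to end. Every function body here is a toy stand-in (linear subframe interpolation, a mean-offset "motion", mean-error similarity), since the patent prescribes the steps rather than their implementations:

```python
import numpy as np

def synchronize(depth_prev, depth_next, color_motion, n_sub=3, thr=0.5):
    """Toy end-to-end flow: generate subframes between two depth frames,
    derive a 2D motion per subframe, match against the color motion,
    and regenerate the depth frame from the best match."""
    # Step 606: subframes between adjacent depth frames
    subs = [(1 - k / (n_sub + 1)) * depth_prev + (k / (n_sub + 1)) * depth_next
            for k in range(1, n_sub + 1)]
    # Step 607 (stand-in): each subframe's "2D motion" is its mean offset
    motions = [np.full(2, float(s.mean() - depth_prev.mean())) for s in subs]
    # Step 608: pick the subframe whose motion is most similar
    scores = [1.0 / (1.0 + np.abs(m - color_motion).mean()) for m in motions]
    best = int(np.argmax(scores))
    # Step 609: regenerate the depth frame from the best subframe
    return subs[best] if scores[best] >= thr else None

prev = np.zeros((2, 2))
nxt = np.full((2, 2), 4.0)
frame = synchronize(prev, nxt, color_motion=np.array([2.0, 2.0]))
# The middle subframe (mean depth 2.0) matches the color motion best
```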
As a result, according to the present invention, temporal/spatial synchronization can be performed by approaching the temporal error between color/depth videos, previously constrained by existing technology and mechanical limits, as a problem of predicting the motion of moving objects in the two videos. The temporal and spatial synchronization problem can be solved using only the images obtained from a commercial RGBD camera, without a separate measuring device, based on the similarity of object motion information rather than on feature matching based on visual similarity between the two images. The invention can also assist accurate shooting while meeting the purpose of a commercial device capable of temporal and spatial synchronization without additional equipment, overcome the practical limitations of the many image processing/vision algorithms that assume depth images are matched in advance, and effectively improve the quality of the databases used by existing vision algorithms. In addition, it can greatly improve the performance of the final algorithm and greatly expand the fields of application of commercial RGBD cameras.
The apparatus described above may be implemented as hardware components, software components, and/or a combination of the two. For example, the apparatus and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may run an operating system (OS) and one or more software applications on that operating system, and may access, store, manipulate, process, and generate data in response to the execution of the software. For ease of understanding, the processing device is sometimes described as a single unit, but those skilled in the art will recognize that it may comprise a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may comprise a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or command it independently or collectively. The software and/or data may be embodied, permanently or temporarily, in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to it. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.
The method according to an embodiment may be implemented in the form of program instructions that can be executed by various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and constructed for the embodiments, or may be known and available to those skilled in computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
While the present invention has been particularly shown and described with reference to exemplary embodiments, the invention is not limited to the disclosed embodiments. For example, appropriate results may be achieved even if the described techniques are performed in a different order than the described methods, and/or the components of the described systems, structures, devices, and circuits are combined in a different manner or are replaced or supplemented by other components or their equivalents.
Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.
100: image synchronization apparatus
110: three-dimensional motion collection unit
120: detailed frame generation unit
130: motion estimation unit
140: matching processing unit
150: correction processing unit
Claims (10)
A detailed frame generation unit for generating at least one detailed frame between the depth images based on the obtained three-dimensional motion;
A motion estimator for estimating at least one two-dimensional image motion by projecting the generated at least one detailed frame;
A matching processor for determining whether to match at least one of the at least one two-dimensional image motion estimated and the two-dimensional image motion acquired from the color image corresponding to the depth image; And
And a correction processor for regenerating the depth image frame based on the determined matching,
The matching processing unit,
Calculating a degree of similarity according to the degree of coincidence of the motion values constituting the estimated at least one two-dimensional image motion and the color points constituting the two-dimensional image motion acquired from the color image,
Determining a two-dimensional image motion having a degree of similarity not less than a threshold value among the estimated at least one two-dimensional image motion as a result of calculating the degree of similarity,
And determines the determined two-dimensional image motion to correspond to a two-dimensional image motion obtained from the color image
Image synchronization device.
Wherein the subframe generator comprises:
And generates the at least one or more detailed frames having a depth value existing between the adjacent depth images in consideration of depth values of adjacent depth images.
Wherein,
And reconstructing a depth image frame for a two-dimensional image motion acquired from the color image using a depth image frame corresponding to the determined two-dimensional image motion.
Wherein,
Using the relationship between the depth image frames adjacent to the determined depth image frame corresponding to the determined two-dimensional image motion, the depth image frame corresponding to the determined two-dimensional image motion, and the adjacent depth image frames, And corrects the synchronization error of the depth image with respect to the depth image.
Obtaining three-dimensional motion of the separated target object;
Generating at least one sub-frame between the depth images based on the acquired three-dimensional motion;
Estimating at least one two-dimensional image motion by projecting the generated at least one detailed frame;
Determining whether the estimated at least one two-dimensional image motion matches a two-dimensional image motion acquired from a color image corresponding to the depth image; And
And reproducing the depth image frame based on the determined matching,
The method of claim 1,
Calculating a degree of similarity according to the degree of matching between the motion values constituting the estimated at least one two-dimensional image motion and the color points constituting the two-dimensional image motion acquired from the color image;
Determining a two-dimensional image motion having a degree of similarity higher than a threshold value among the estimated at least one two-dimensional image motion as a result of calculating the degree of similarity; And
Determining that the determined two-dimensional image motion corresponds to a two-dimensional image motion obtained from the color image
A captured image synchronization method.
Wherein the generating of the at least one sub-frame comprises:
Generating the at least one sub-frame, whose depth values lie between those of the adjacent depth images, in consideration of the depth values of the adjacent depth images.
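Sub-frame generation as claimed — intermediate depth frames whose values lie between those of two adjacent captured depth frames — can be sketched as per-pixel linear interpolation. The patent does not prescribe an interpolation scheme, so this linear form and the name `interpolate_subframes` are assumptions for illustration.

```python
import numpy as np

def interpolate_subframes(depth_a: np.ndarray, depth_b: np.ndarray, num: int):
    """Generate `num` intermediate depth frames between two adjacent
    depth images. Per-pixel linear interpolation guarantees every
    generated value lies between the corresponding values of
    depth_a and depth_b, as the claim requires."""
    frames = []
    for i in range(1, num + 1):
        t = i / (num + 1)  # interpolation weight, strictly inside (0, 1)
        frames.append((1.0 - t) * depth_a + t * depth_b)
    return frames
```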
Wherein the step of regenerating the depth image frame comprises:
Regenerating a depth image frame for the two-dimensional image motion obtained from the color image, using the depth image frame corresponding to the determined two-dimensional image motion.
Wherein the step of regenerating the depth image frame comprises:
Correcting the synchronization error of the depth image with respect to the color image, using the depth image frame corresponding to the determined two-dimensional image motion, the depth image frames adjacent to it, and the relationship between that frame and its adjacent frames.
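The correction step — refining the matched depth frame using its temporal neighbours — can be sketched as a weighted blend driven by the residual temporal offset between the color frame and the matched depth frame. The blending scheme, the name `correct_sync_error`, and the `offset` convention are all assumptions; the patent only specifies that the matched frame, its adjacent frames, and their relationship are used.

```python
import numpy as np

def correct_sync_error(prev_frame, matched_frame, next_frame, offset: float):
    """Blend the matched depth frame with a temporal neighbour to
    compensate a residual synchronization error. `offset` in [-1, 1]
    is the fractional frame offset of the color frame relative to the
    matched depth frame: negative blends toward the previous depth
    frame, positive toward the next."""
    if offset >= 0.0:
        return (1.0 - offset) * matched_frame + offset * next_frame
    return (1.0 + offset) * matched_frame + (-offset) * prev_frame
```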
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020170004361A KR101867212B1 (en) | 2017-01-11 | 2017-01-11 | Apparatus and method for synchronication of image frames captured from color and depth camera |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020170004361A KR101867212B1 (en) | 2017-01-11 | 2017-01-11 | Apparatus and method for synchronication of image frames captured from color and depth camera |
Publications (1)
Publication Number | Publication Date |
---|---|
KR101867212B1 true KR101867212B1 (en) | 2018-06-12 |
Family
ID=62622177
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020170004361A KR101867212B1 (en) | 2017-01-11 | 2017-01-11 | Apparatus and method for synchronication of image frames captured from color and depth camera |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101867212B1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR0122440B1 (en) | 1994-12-14 | 1997-11-20 | 양승택 | The audio board which voice capture and play are concurrcutly operating |
KR20110127899A (en) * | 2010-05-20 | 2011-11-28 | 삼성전자주식회사 | Temporal interpolation of three dimension depth image method and apparatus |
KR20130032083A (en) | 2011-09-22 | 2013-04-01 | 엘지디스플레이 주식회사 | Liquid crystal display device |
- 2017-01-11: KR application KR1020170004361A, patent KR101867212B1 (en), status: active, IP Right Grant
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020139533A1 (en) * | 2018-12-26 | 2020-07-02 | Snap Inc. | Creation and user interactions with three-dimensional wallpaper on computing devices |
US11240481B2 (en) | 2018-12-26 | 2022-02-01 | Snap Inc. | Creation and user interactions with three-dimensional wallpaper on computing devices |
US11843758B2 (en) | 2018-12-26 | 2023-12-12 | Snap Inc. | Creation and user interactions with three-dimensional wallpaper on computing devices |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10750150B2 (en) | Methods for automatic registration of 3D image data | |
US10818029B2 (en) | Multi-directional structured image array capture on a 2D graph | |
US10846913B2 (en) | System and method for infinite synthetic image generation from multi-directional structured image array | |
US10789765B2 (en) | Three-dimensional reconstruction method | |
US10262426B2 (en) | System and method for infinite smoothing of image sequences | |
Tuytelaars et al. | Synchronizing video sequences | |
CN106991650B (en) | Image deblurring method and device | |
KR102135770B1 (en) | Method and apparatus for reconstructing 3d face with stereo camera | |
CN110009672A (en) | Promote ToF depth image processing method, 3D rendering imaging method and electronic equipment | |
KR101544021B1 (en) | Apparatus and method for generating 3d map | |
KR20130084849A (en) | Method and apparatus for camera tracking | |
US11922658B2 (en) | Pose tracking method, pose tracking device and electronic device | |
WO2018216341A1 (en) | Information processing device, information processing method, and program | |
CN105335959B (en) | Imaging device quick focusing method and its equipment | |
US10346949B1 (en) | Image registration | |
CN111882655A (en) | Method, apparatus, system, computer device and storage medium for three-dimensional reconstruction | |
US9208549B2 (en) | Method and apparatus for color transfer between images | |
KR101867212B1 (en) | Apparatus and method for synchronication of image frames captured from color and depth camera | |
JP2005141655A (en) | Three-dimensional modeling apparatus and three-dimensional modeling method | |
KR20110133677A (en) | Method and apparatus for processing 3d image | |
KR20150040194A (en) | Apparatus and method for displaying hologram using pupil track based on hybrid camera | |
CN111091621A (en) | Binocular vision synchronous positioning and composition method, device, equipment and storage medium | |
KR20210133472A (en) | Method of merging images and data processing device performing the same | |
Nadar et al. | Sensor simulation for monocular depth estimation using deep neural networks | |
US20230281862A1 (en) | Sampling based self-supervised depth and pose estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |