KR101867212B1 - Apparatus and method for synchronication of image frames captured from color and depth camera - Google Patents

Apparatus and method for synchronication of image frames captured from color and depth camera

Info

Publication number
KR101867212B1
Authority
KR
South Korea
Prior art keywords
motion
image
depth
dimensional
dimensional image
Prior art date
Application number
KR1020170004361A
Other languages
Korean (ko)
Inventor
심현정
방두현
Original Assignee
연세대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 연세대학교 산학협력단 filed Critical 연세대학교 산학협력단
Priority to KR1020170004361A priority Critical patent/KR101867212B1/en
Application granted granted Critical
Publication of KR101867212B1 publication Critical patent/KR101867212B1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/285Analysis of motion using a sequence of stereo image pairs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/246Calibration of cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/257Colour aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/296Synchronisation thereof; Control thereof

Abstract

The present invention relates to a technique for synchronization of photographed images for a color camera and a depth camera. According to an embodiment of the present invention, an image synchronization apparatus comprises a three-dimensional motion collection unit to separate a target object from depth images and acquire a three-dimensional motion of the separated target object; a detail frame generation unit to generate one or more detail frames between the depth images based on the acquired three-dimensional motion; a motion estimation unit to project one or more generated detail frames to estimate one or more two-dimensional image motions; a matching processing unit to determine whether the one or more estimated two-dimensional image motions and a two-dimensional image motion acquired from a color image corresponding to the depth images match; and a correction processing unit to regenerate a depth image frame based on whether the image motions match.

Description

Field of the Invention [0001] The present invention relates to an apparatus and method for synchronizing the images captured by a color camera and a depth camera.

More specifically, the present invention relates to a technique for synchronizing the images captured by a color camera and a depth camera, and to a technical idea for achieving temporal and spatial synchronization between a color image and a depth image by predicting the motion of an object in the video.

As technology develops, content using 3D technologies such as 3D movies, virtual reality, and augmented reality is increasing. Producing such content typically consumes a great deal of manpower, time, and cost. To reduce this burden, several 3D information acquisition devices have emerged. Among these, RGBD cameras, which can acquire color and three-dimensional information in real time, are actively used for depth video capture.

However, even with the latest RGBD cameras, there is no time synchronization between the acquired color and depth video due to technical limitations. This temporal asynchrony in turn causes a spatial mismatch between the two images, which feeds distorted information to the many vision algorithms that analyze both images together and makes it difficult to apply existing 3D image restoration and enhancement algorithms.

Conventional approaches to aligning the two video streams include using an external device that synchronizes the shooting times of the two cameras, or restricting the capture setup, for example by moving and repeatedly capturing a still object.

The first approach has the problem that additional equipment is required, which limits the usable environment, and it cannot be applied to images that have already been shot. Restricting the shooting method has the limitation that it cannot handle the general scenario of capturing a moving object. In addition, since depth information and color information have very different characteristics, it is very difficult to find feature values common to both images.

Commercially available color/depth cameras currently rely on mathematical calibration to achieve spatial synchronization of the acquired images. However, for video of moving objects, the asynchrony of the two cameras' capture times breaks this spatial synchronization.

General color/depth video technologies deal with a single image and assume spatial synchronization between images obtained through calibration. However, these techniques have the disadvantage that they cannot be applied, or perform poorly, on video in which temporal and spatial synchronization has not been achieved.

Korean Patent Application No. 10-2013-0032083, "3D Depth Image Time Interpolation Method and Apparatus"; Korean Patent Application No. 10-2008-0122440, "Method and Apparatus for Correcting Depth Images".

The present invention aims at performing temporal/spatial synchronization by approaching the temporal error between color/depth video images, which existing techniques and mechanical constraints have left unresolved, as a problem of predicting the motion of moving objects in the two video images.

In the present invention, only the images obtained from a commercial RGBD camera are used, without a separate measuring device, and the temporal and spatial synchronization problem is solved based on the similarity of object motion information rather than on feature matching based on visual similarity between the two images.

An object of the present invention is to provide a technique that assists accurate shooting while meeting the needs of a commercial device, enabling temporal and spatial synchronization without additional equipment.

The present invention aims at overcoming the practical limitations of the many image processing / vision algorithms that assume the depth image is matched in advance.

The present invention aims at effectively improving the quality of a database of existing vision algorithms.

The present invention aims at significantly improving the performance of the final algorithm.

An object of the present invention is to expand the range of applications for commercial RGBD cameras.

An image synchronization apparatus according to an embodiment includes a three-dimensional motion collection unit for separating a target object from depth images and obtaining a three-dimensional motion of the separated target object; a detail frame generation unit for generating at least one detail frame between the depth images based on the acquired three-dimensional motion; a motion estimation unit for estimating at least one two-dimensional image motion by projecting the generated detail frames; a matching processing unit for determining whether the estimated two-dimensional image motions match a two-dimensional image motion acquired from a color image corresponding to the depth images; and a correction processing unit for regenerating a depth image frame based on the determined matching.

The detail frame generation unit may generate the at least one detail frame so that it has depth values lying between those of adjacent depth images, taking the depth values of the adjacent depth images into consideration.

The matching processing unit according to an embodiment calculates the similarity between each estimated two-dimensional image motion and the two-dimensional image motion acquired from the color image, determines, as a result of the calculation, a two-dimensional image motion whose similarity is equal to or greater than a threshold value among the estimated two-dimensional image motions, and determines that it corresponds to the two-dimensional image motion obtained from the color image.

The correction processing unit may regenerate a depth image frame for the two-dimensional image motion acquired from the color image, using the depth image frame corresponding to the determined two-dimensional image motion.

The correction processing unit may correct the synchronization error of the depth images with respect to the color images using the depth image frame corresponding to the determined two-dimensional image motion, the depth image frames adjacent to it, and the relationship between these adjacent frames.

According to an exemplary embodiment, an image synchronization method includes separating a target object from depth images; acquiring a three-dimensional motion of the separated target object; generating at least one detail frame between the depth images based on the acquired three-dimensional motion; estimating at least one two-dimensional image motion by projecting the generated detail frames; determining whether the estimated two-dimensional image motions match a two-dimensional image motion acquired from a color image corresponding to the depth images; and regenerating a depth image frame based on the determined matching.

The generating of the at least one detail frame according to an exemplary embodiment may include generating detail frames having depth values lying between those of adjacent depth images, in consideration of the depth values of the adjacent depth images.

The determining whether the motions match according to an embodiment may include calculating the similarity between each estimated two-dimensional image motion and the two-dimensional image motion acquired from the color image; determining, from the calculated similarities, a two-dimensional image motion whose similarity is equal to or greater than a threshold value among the estimated two-dimensional image motions; and determining that the selected two-dimensional image motion corresponds to the one obtained from the color image.

The regenerating of the depth image frame according to an exemplary embodiment may include regenerating a depth image frame for the two-dimensional image motion obtained from the color image, using the depth image frame corresponding to the determined two-dimensional image motion.

The regenerating of the depth image frame according to an exemplary embodiment of the present invention may further include correcting the synchronization error of the depth images with respect to the color images, using the depth image frame corresponding to the determined two-dimensional image motion, the depth image frames adjacent to it, and the relationship between these adjacent frames.

According to one embodiment, temporal/spatial synchronization can be performed by approaching the temporal error between color/depth video images, which existing techniques and mechanical constraints have left unresolved, as a problem of predicting the motion of moving objects in the two video images.

According to an exemplary embodiment, only the images obtained from a commercial RGBD camera are used, without a separate measuring device, and the temporal and spatial synchronization problem can be solved based on the similarity of object motion information rather than on feature matching based on visual similarity between the two images.

According to one embodiment, it is possible to provide a technique that assists accurate shooting while meeting the needs of commercial devices, enabling temporal and spatial synchronization without additional equipment.

According to an exemplary embodiment, it is possible to overcome the practical limitations of the many image processing / vision algorithms that assume the depth image is matched in advance.

According to one embodiment, the quality of the database of existing vision algorithms can be effectively improved.

According to one embodiment, the performance of the final algorithm can be greatly improved.

According to one embodiment, the range of applications for commercially available RGBD cameras can be greatly expanded.

FIG. 1 is a view for explaining an image synchronization apparatus according to an embodiment of the present invention.
FIG. 2 is a view for explaining an embodiment of generating a detail frame based on three-dimensional motion.
FIG. 3 shows images in which detail frames are generated by moving slightly from the first image using the acquired motion information.
FIG. 4 is a diagram for explaining a process of acquiring correspondence points between color and depth through motion relevance according to an embodiment.
FIG. 5 is a diagram for explaining a process of finding corresponding points according to an embodiment.
FIG. 6 is a view for explaining a method of synchronizing captured images according to an embodiment.

It is to be understood that the specific structural or functional descriptions of the embodiments disclosed herein are presented only for the purpose of describing embodiments according to the concepts of the present invention; embodiments may be implemented in various forms and are not limited to those described herein.

Embodiments in accordance with the concepts of the present invention are capable of various modifications and may take various forms, and are therefore illustrated in the drawings and described in detail herein. However, this is not intended to limit the embodiments to the specific disclosed forms; rather, they include all changes, equivalents, and alternatives falling within the spirit and scope of the present invention.

The terms first, second, and the like may be used to describe various elements, but the elements should not be limited by these terms. The terms are used only to distinguish one element from another; for example, without departing from the scope of rights according to the concepts of the present invention, a first element may be referred to as a second element, and similarly, a second element may be referred to as a first element.

When an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that there are no intervening elements. Other expressions describing the relationship between elements, for example "between" versus "immediately between," or "adjacent to" versus "directly adjacent to," should be interpreted in the same way.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this specification, terms such as "comprises" or "having" are intended to specify the presence of stated features, numbers, steps, operations, elements, or parts, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, parts, or combinations thereof.

Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries are to be interpreted as having meanings consistent with their meaning in the context of the relevant art and, unless explicitly defined herein, are not to be interpreted in an idealized or overly formal sense.

Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. However, the scope of the patent application is not limited or restricted by these embodiments. Like reference numerals in the drawings denote like elements.

FIG. 1 is a diagram for explaining an apparatus 100 for synchronizing captured images according to an embodiment.

The captured image synchronization apparatus 100 according to an embodiment can perform temporal/spatial synchronization by approaching the temporal error between color/depth video images, which existing techniques and mechanical constraints have left unresolved, as a problem of predicting the motion of moving objects in the two video images. In addition, it can solve the temporal and spatial synchronization problem based on the similarity of object motion information, rather than feature matching based on visual similarity between the two images, using only the images obtained from a commercial RGBD camera without a separate measuring device.

The captured image synchronization apparatus 100 focuses on the fact that, although there is a temporal/spatial synchronization error between the color and depth images, the motion of the target object is the same in both; it can therefore perform temporal/spatial synchronization by estimating motion information from the images. In particular, the apparatus 100 can estimate the trajectory of an object in three-dimensional space by predicting the object's motion through analysis of the depth images, and can obtain two-dimensional motion by projecting the estimated three-dimensional trajectory onto the image plane. The apparatus 100 can then identify the frame that minimizes the synchronization difference between the obtained two-dimensional motion and the two-dimensional motion obtained through the color image, thereby estimating an image whose temporal/spatial synchronization matches. Finally, the apparatus 100 can correct the existing distorted information by regenerating a depth image whose temporal/spatial synchronization agrees with the color image using the obtained information.

In detail, the apparatus 100 for synchronizing captured images according to an embodiment includes a three-dimensional motion collection unit 110, a detail frame generation unit 120, a motion estimation unit 130, a matching processing unit 140, and a correction processing unit 150.

First, the three-dimensional motion collection unit 110 according to an embodiment can separate the target object from the depth image and obtain the three-dimensional motion of the separated target object.

The depth image may contain a plurality of captured objects. The target object means the object that is separated out from among them in order to obtain its three-dimensional motion.

Also, the three-dimensional motion can be interpreted as the change in position of the depth values according to the motion of the target object.

Next, the detail frame generation unit 120 according to an embodiment may generate at least one detail frame between the depth images based on the acquired three-dimensional motion.

The three-dimensional motion may include a depth value corresponding to a particular color pixel. However, the sampling interval of the depth values is coarser than that of the pixels.

The detail frame generation unit 120 may estimate depth values within these intervals and generate detail frames using the estimated depth values.

That is, the detail frame generation unit 120 may generate at least one detail frame having depth values lying between those of adjacent depth images, taking the depth values of the adjacent depth images into consideration.
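For illustration, a minimal sketch of such detail-frame generation follows, assuming the rigid motion between two adjacent depth frames has already been estimated (for example with the ICP algorithm described next). Intermediate frames are produced by interpolating the rotation and linearly scaling the translation; all function and variable names are illustrative rather than taken from the patent.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def generate_detail_frames(points, R, t, n_detail):
    """Generate n_detail intermediate point clouds ("detail frames")
    between two adjacent depth frames.

    points   : (N, 3) point cloud of the first depth frame
    R, t     : rigid motion (3x3 rotation, 3-vector translation)
               estimated between the two adjacent frames
    n_detail : number of intermediate frames to generate
    """
    keyframes = Rotation.from_matrix(np.stack([np.eye(3), R]))
    slerp = Slerp([0.0, 1.0], keyframes)                 # rotation interpolator
    frames = []
    for a in np.linspace(0.0, 1.0, n_detail + 2)[1:-1]:  # interior steps only
        R_a = slerp(a).as_matrix()                       # fractional rotation
        t_a = a * t                                      # linearly scaled translation
        frames.append(points @ R_a.T + t_a)
    return frames
```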

The ICP (Iterative Closest Point) algorithm is a method of registering the current data to an existing data set. In other words, it is an algorithm that finds associations by matching each depth value in one depth image to the closest depth value in the other, estimates the motion that aligns them, and thereby fills the intervals between depth values more densely.
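The following is a compact sketch of one common ICP formulation, assuming point clouds given as (N, 3) numpy arrays: each iteration associates every source point with its nearest destination point and then solves for the rigid motion in closed form (the Kabsch/SVD solution). This is a generic textbook version, not the patent's specific implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_rigid(src, dst):
    """Closed-form rigid motion (R, t) minimizing ||R @ src + t - dst||."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=20):
    """Align src to dst; returns the accumulated rotation and translation."""
    R_acc, t_acc = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    for _ in range(iters):
        matched = dst[tree.query(src)[1]]  # nearest-neighbour associations
        R, t = fit_rigid(src, matched)
        src = src @ R.T + t
        R_acc, t_acc = R @ R_acc, R @ t_acc + t
    return R_acc, t_acc
```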

The motion estimation unit 130 may estimate at least one two-dimensional image motion by projecting the generated detail frames. A detail frame is three-dimensional data, and the motion estimation unit 130 can estimate the two-dimensional image motion by projecting this three-dimensional data.
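As a sketch of this projection step: a detail frame, being a set of 3-D points, can be projected through a pinhole camera model to obtain pixel positions, and the per-point displacement between consecutive projected detail frames gives a two-dimensional image motion. The intrinsic matrix K is an assumption here; the patent does not specify the camera model.

```python
import numpy as np

def project(points, K):
    """Project (N, 3) camera-space points to (N, 2) pixels with intrinsics K."""
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide by depth

def image_motion(frame_a, frame_b, K):
    """2-D image motion: per-point pixel displacement between detail frames."""
    return project(frame_b, K) - project(frame_a, K)
```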

The matching processing unit 140 according to an exemplary embodiment may determine whether the estimated two-dimensional image motions match the two-dimensional image motion acquired from the color image corresponding to the depth image.

In a conventional approach, a third depth value located between a first depth value and a second depth value may be estimated, and the estimated third depth value determined as the depth value corresponding to another color value. More specifically, a third color value, intermediate between the first color value corresponding to the first depth value and the second color value corresponding to the second depth value, may be matched to the estimated depth value. That is, the third depth value is determined and matched on the assumption that the first depth value and first color value, and the second depth value and second color value, already match exactly.

In contrast, the matching processing unit 140 according to the present invention does not assume that the first depth value and first color value, or the second depth value and second color value, are already precisely matched. Instead, the third depth value and third color value are matched first, and the correction value produced by that matching is then used to match the first color value with the generated third depth value, and the second color value with a fourth depth value.

For example, the matching processing unit 140 may calculate the similarity between each estimated two-dimensional image motion and the two-dimensional image motion obtained from the color image, and search for the most similar frame among the projected two-dimensional image motions based on the calculated similarity. A projected two-dimensional image motion is the motion obtained by projecting the three-dimensional motion of the object tracked in three-dimensional space onto the two-dimensional image.

For example, the matching processing unit 140 may determine, from the calculated similarities, a two-dimensional image motion whose similarity is equal to or greater than a threshold value among the estimated two-dimensional image motions. The threshold value can be set by the administrator during system production, or determined empirically for maximum performance improvement. In addition, the matching processing unit 140 can select the two-dimensional image motion with the highest similarity as the corresponding one; the threshold in this case is interpreted as a relative rather than an absolute value.

The matching processing unit 140 may then determine that the selected two-dimensional image motion corresponds to the two-dimensional image motion obtained from the color image. That is, the matching processing unit 140 can select the specific two-dimensional image motion with the highest similarity among the estimated two-dimensional image motions.

Thus, the matching processing unit 140 can match the detail frame's depth values to the most similar color image, as sketched below.
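A minimal sketch of this similarity-based matching follows, assuming the estimated motion fields and the color-derived motion field have been resampled onto a common set of points. The patent does not prescribe a similarity measure, so the negative mean endpoint distance used here, like the names, is only an illustrative choice.

```python
import numpy as np

def motion_similarity(est, ref):
    """Similarity of two (N, 2) motion fields; higher means more similar."""
    return -np.mean(np.linalg.norm(est - ref, axis=1))

def best_matching_motion(estimated_motions, color_motion, threshold):
    """Pick the estimated 2-D motion most similar to the color-image motion,
    accepting it only if its similarity reaches the threshold."""
    scores = [motion_similarity(m, color_motion) for m in estimated_motions]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```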

As a result, the correction processing unit 150 can regenerate the depth image frame based on the determined matching.

The correction processing unit 150 can regenerate a depth image frame for the two-dimensional image motion acquired from the color image, using the depth image frame corresponding to the determined two-dimensional image motion. Further, the correction processing unit 150 can correct the synchronization error of the depth images with respect to the color images using the depth image frame corresponding to the determined two-dimensional image motion, the depth image frames adjacent to it, and the relationship between these adjacent frames.

That is, the present invention does not assume that the first depth value and first color value, or the second depth value and second color value, already match exactly; it matches the third depth value and third color value first, and then corrects the synchronization error by matching the first depth value and first color value, or the second depth value and second color value, using the correction value generated in that matching.
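Purely as an illustrative stand-in for this regeneration step: once the detail frame matching the color frame's timing has been identified, a corrected depth frame at the color timestamp can be approximated from the matched frame's neighbours. The per-pixel linear blend below is an assumption, not the patent's method, which uses the relationship between the matched and adjacent depth image frames.

```python
import numpy as np

def regenerate_depth(depth_prev, depth_next, alpha):
    """Approximate the depth frame at the color frame's time offset
    alpha in [0, 1] between two adjacent depth frames (naive blend)."""
    return (1.0 - alpha) * depth_prev + alpha * depth_next
```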

FIG. 2 is a view for explaining an embodiment of generating a detail frame based on three-dimensional motion.

The first and second depth images can be represented as points in three-dimensional space, corresponding to the left-hand points 210 of FIG. 2. When these points are appropriately moved through an optimization algorithm using Equation 1, they almost overlap, as shown in the right-hand image of FIG. 2; if the ideal motion is found, they overlap completely.

[Equation 1]

$$
\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix}
=
\begin{bmatrix} R_{3\times3} & T_{3\times1} \\ 0_{1\times3} & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
$$

Equation 1 is expressed as a matrix operation, and its result, the primed vector, can be interpreted as the three-dimensional motion of the target object; that is, the primed quantity is what Equation 1 produces. Written in the form of an equation Y = aX, Equation 1 takes the form X' = AX, where A corresponds to the matrix in the middle. In Equation 1, R denotes rotation and T denotes translation.

The subscripts indicate the matrix sizes: R is 3 rows by 3 columns, T is 3 rows by 1 column, and 0 is the 1-row-by-3-column zero matrix.

That is, since each current point is represented in three dimensions by the coordinates x, y, z, multiplying by the matrix that applies the rotation and translation values moves the three-dimensional position, and the moved position can be expressed as x', y', z'.
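The following snippet illustrates Equation 1 in code: a rotation R and translation T assembled into a 4x4 homogeneous matrix move a point (x, y, z) to (x', y', z'). The specific R and T values are made up for the example.

```python
import numpy as np

def homogeneous(R, T):
    """Assemble the 4x4 matrix of Equation 1 from R (3x3) and T (3,)."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = T
    return M

R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])           # example: 90-degree rotation about z
T = np.array([0.1, 0.0, 0.0])           # example translation
p = np.array([1.0, 2.0, 3.0, 1.0])      # point (x, y, z) in homogeneous form
p_prime = homogeneous(R, T) @ p         # moved point (x', y', z', 1)
```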

Meanwhile, in the image at 210, the blue dots can be interpreted as the transformed point cloud.

Reference numeral 220 denotes the three-dimensional motion after the detail frames generated from the data at reference numeral 210 have been reflected, based on the three-dimensional motion.

As shown at 220, the depth values are shifted and aligned between the two different sets of depth values in the three-dimensional motion. That is, using the ICP algorithm, the points in the image at 210 can be moved to the points in the image at 220 and aligned.

FIG. 3 shows images in which detail frames are generated by moving slightly from the first image using the acquired motion information.

Reference numeral 300 denotes the three-dimensional motion in which the detail frames are gradually reflected. That is, as shown in FIG. 3, detail frames are generated by moving slightly, step by step, from the first image using the obtained motion information.

FIG. 4 is an exemplary view 400 illustrating a process of acquiring correspondence points between color and depth through motion relevance according to an exemplary embodiment.

In the embodiment 400, a color image can be matched with the two-dimensional motion of the depth image using a motion cue.

Specifically, reference numeral 410 denotes a color image, and reference numeral 420 denotes color points corresponding to the color image 410. That is, the pixel 411 of the color image 410 corresponds to the color point 421.

Reference numeral 430 denotes a depth image, and reference numeral 440 denotes a three-dimensional motion corresponding to the depth image 430. The depth value 431 of the depth image 430 may correspond to the three-dimensional motion value 441.

The color values of the color image 410 and the depth values of the depth image 430 do not coincide exactly, and an error may occur. Accordingly, the present invention can correct the temporal/spatial synchronization error between a color image and a depth image. That is, even with a calibrated color camera and depth camera, temporal/spatial synchronization is broken by the asynchrony of the capture times; the present invention solves the spatial asynchrony caused by this temporal error through object motion matching.

FIG. 5 is a diagram 500 illustrating a process of finding a corresponding point according to an embodiment.

The left and right images can be interpreted as a first color image and a second color image, respectively. For the red circles 511 displayed on the object 510 of the left image and the green crosses 521 displayed on the object 520 of the right image, the matching relationship between the red circles 511 and the green crosses 521 can be visualized.

The red circles 511 and green crosses 521 can be obtained through the process of FIG. 2.

More specifically, the first and second depth images can be represented as points in three-dimensional space, corresponding to the left-hand points in FIG. 2. When these points are appropriately moved through the optimization algorithm using Equation 1, they almost overlap, as shown in the right-hand image of FIG. 2; if the ideal motion is found, they overlap completely.

In other words, by finding the three-dimensional movement, we can determine where each point moves and which point it matches.

When this information is projected onto a two-dimensional plane, the positions of the corresponding points become known, and it is possible to see how these points move.

In this case, the projections of the first three-dimensional points, before the movement, are represented by the red circles 511, and the second three-dimensional points by the green crosses 521. Once the 3D motion is found, the matching result between the points is known, and its representation can be interpreted as the yellow lines.

In other words, the first and second color values of the projected points matched through the three-dimensional information should have the same value; when they do not in the first input image, the frame in which they do can be found by slightly varying the three-dimensional value.

FIG. 6 is a view for explaining a method of synchronizing captured images according to an embodiment.

The method of synchronizing captured images according to an exemplary embodiment may acquire a depth image (step 602) and a color image (step 603) by capturing a target object with the color/depth cameras (step 601).

Meanwhile, the captured image synchronization method according to an exemplary embodiment may separate the target object from the acquired depth image (step 604). The captured objects are all the objects recorded by the color/depth cameras, and the target object can be interpreted as the one among them for which motion information is to be acquired.

Next, the captured image synchronization method according to one embodiment may acquire the three-dimensional motion of the separated target object (step 605). In addition, the method may generate at least one detail frame between the depth images based on the acquired three-dimensional motion (step 606).

To generate the at least one detail frame, detail frames having depth values lying between those of adjacent depth images may be generated in consideration of the depth values of the adjacent depth images.

The captured image synchronization method may estimate at least one two-dimensional image motion by projecting the generated detail frames (step 607). That is, since a generated detail frame is three-dimensional information with depth values, its depth values can be projected to obtain a two-dimensional form that can be compared against the two-dimensional color image.

According to an embodiment of the present invention, it is possible to determine whether the estimated two-dimensional image motions match the two-dimensional image motion acquired from the color image corresponding to the depth image. To this end, the method can search for the frame in which the three-dimensional motion projected from the depth image is most similar to the two-dimensional motion of the color image (step 608).

To determine whether they match, the similarity between each estimated two-dimensional image motion and the two-dimensional image motion acquired from the color image can be calculated. From the calculated similarities, a two-dimensional image motion whose similarity is equal to or greater than a threshold value is determined among the estimated motions, and the determined two-dimensional image motion is taken to correspond to the two-dimensional motion of the color image.

In addition, the captured image synchronization method according to an exemplary embodiment may regenerate the depth image frame based on the determined matching (step 609).

As a result, according to the present invention, temporal/spatial synchronization can be performed by approaching the temporal error between color/depth video images, which existing techniques and mechanical constraints have left unresolved, as a problem of predicting the motion of moving objects in the two video images. In addition, the temporal and spatial synchronization problem can be solved based on the similarity of object motion information rather than feature matching based on visual similarity between the two images, using only the images obtained from a commercial RGBD camera without a separate measuring device. The invention can thus assist accurate shooting while meeting the needs of commercial devices capable of temporal and spatial synchronization without additional equipment, overcome the practical limitations of the many image processing / vision algorithms that assume the depth image is matched in advance, and effectively improve the quality of the databases used by existing vision algorithms. It can also greatly improve the performance of the final algorithm and greatly expand the range of applications for commercial RGBD cameras.

The apparatus described above may be implemented as hardware components, software components, and/or a combination of hardware and software components. For example, the apparatus and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may execute an operating system (OS) and one or more software applications running on the operating system, and may also access, store, manipulate, process, and generate data in response to the execution of the software. For ease of understanding, the processing device may be described as being used singly, but those skilled in the art will recognize that it may include a plurality of processing elements and/or multiple types of processing elements. For example, the processing device may comprise a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.

The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or command the processing device independently or collectively. The software and/or data may be embodied permanently or temporarily in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to it. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.

The method according to an embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and configured for the embodiments, or may be known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, various modifications and variations can be made from the above description by those skilled in the art. For example, suitable results may be achieved even if the described techniques are performed in a different order than the described method, and/or if components of the described systems, structures, devices, and circuits are combined or assembled in a different form than described, or replaced or substituted by other components or equivalents.

Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.

100: image synchronization apparatus 110: three-dimensional motion collection unit
120: detail frame generation unit 130: motion estimation unit
140: matching processing unit 150: correction processing unit

Claims (10)

A three-dimensional motion collection unit for separating a target object from depth images and acquiring a three-dimensional motion of the separated target object;
a detail frame generation unit for generating at least one detail frame between the depth images based on the acquired three-dimensional motion;
a motion estimation unit for estimating at least one two-dimensional image motion by projecting the generated at least one detail frame;
a matching processing unit for determining whether the estimated at least one two-dimensional image motion matches a two-dimensional image motion acquired from a color image corresponding to the depth images; and
a correction processing unit for regenerating a depth image frame based on the determined matching,
wherein the matching processing unit
calculates a degree of similarity according to the degree of coincidence between the motion values constituting the estimated at least one two-dimensional image motion and the color points constituting the two-dimensional image motion acquired from the color image,
determines, as a result of calculating the degree of similarity, a two-dimensional image motion having a degree of similarity equal to or greater than a threshold value among the estimated at least one two-dimensional image motion, and
determines that the determined two-dimensional image motion corresponds to the two-dimensional image motion obtained from the color image,
An image synchronization apparatus.
The apparatus according to claim 1,
wherein the detail frame generation unit
generates the at least one detail frame having depth values lying between those of adjacent depth images, in consideration of the depth values of the adjacent depth images.
delete
The apparatus according to claim 1,
wherein the correction processing unit
regenerates a depth image frame for the two-dimensional image motion acquired from the color image, using the depth image frame corresponding to the determined two-dimensional image motion.
The apparatus according to claim 4,
wherein the correction processing unit
corrects the synchronization error of the depth images with respect to the color images, using the depth image frame corresponding to the determined two-dimensional image motion, the depth image frames adjacent to that frame, and the relationship between the adjacent depth image frames.
Separating a target object from depth images;
acquiring a three-dimensional motion of the separated target object;
generating at least one detail frame between the depth images based on the acquired three-dimensional motion;
estimating at least one two-dimensional image motion by projecting the generated at least one detail frame;
determining whether the estimated at least one two-dimensional image motion matches a two-dimensional image motion acquired from a color image corresponding to the depth images; and
regenerating a depth image frame based on the determined matching,
wherein the determining whether the motions match comprises:
calculating a degree of similarity according to the degree of coincidence between the motion values constituting the estimated at least one two-dimensional image motion and the color points constituting the two-dimensional image motion acquired from the color image;
determining, as a result of calculating the degree of similarity, a two-dimensional image motion having a degree of similarity equal to or greater than a threshold value among the estimated at least one two-dimensional image motion; and
determining that the determined two-dimensional image motion corresponds to the two-dimensional image motion obtained from the color image,
An image synchronization method.
The method according to claim 6,
wherein the generating of the at least one detail frame comprises
generating the at least one detail frame having depth values lying between those of adjacent depth images, in consideration of the depth values of the adjacent depth images.
delete
The method according to claim 6,
wherein the regenerating of the depth image frame comprises
regenerating a depth image frame for the two-dimensional image motion obtained from the color image, using the depth image frame corresponding to the determined two-dimensional image motion.
The method according to claim 9,
wherein the regenerating of the depth image frame comprises
correcting the synchronization error of the depth images with respect to the color images, using the depth image frame corresponding to the determined two-dimensional image motion, the depth image frames adjacent to that frame, and the relationship between the adjacent depth image frames.
KR1020170004361A 2017-01-11 2017-01-11 Apparatus and method for synchronication of image frames captured from color and depth camera KR101867212B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020170004361A KR101867212B1 (en) 2017-01-11 2017-01-11 Apparatus and method for synchronication of image frames captured from color and depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020170004361A KR101867212B1 (en) 2017-01-11 2017-01-11 Apparatus and method for synchronication of image frames captured from color and depth camera

Publications (1)

Publication Number Publication Date
KR101867212B1 true KR101867212B1 (en) 2018-06-12

Family

ID=62622177

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020170004361A KR101867212B1 (en) 2017-01-11 2017-01-11 Apparatus and method for synchronication of image frames captured from color and depth camera

Country Status (1)

Country Link
KR (1) KR101867212B1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR0122440B1 (en) 1994-12-14 1997-11-20 양승택 The audio board which voice capture and play are concurrcutly operating
KR20110127899A (en) * 2010-05-20 2011-11-28 삼성전자주식회사 Temporal interpolation of three dimension depth image method and apparatus
KR20130032083A (en) 2011-09-22 2013-04-01 엘지디스플레이 주식회사 Liquid crystal display device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020139533A1 (en) * 2018-12-26 2020-07-02 Snap Inc. Creation and user interactions with three-dimensional wallpaper on computing devices
US11240481B2 (en) 2018-12-26 2022-02-01 Snap Inc. Creation and user interactions with three-dimensional wallpaper on computing devices
US11843758B2 (en) 2018-12-26 2023-12-12 Snap Inc. Creation and user interactions with three-dimensional wallpaper on computing devices


Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant