KR20140120527A - Apparatus and method for matching stereo image - Google Patents
Apparatus and method for matching stereo image
- Publication number
- KR20140120527A (application KR1020130036400A)
- Authority
- KR
- South Korea
- Prior art keywords
- codewords
- code word
- brightness
- occlusion
- codebook
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/167—Synchronising or controlling image signals
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
Description
The present invention relates to a stereo image matching apparatus and a stereo image matching method. More particularly, the present invention relates to a stereo image matching apparatus and a stereo image matching method using a codebook.
A stereo camera system requires substantial computation to produce dense disparity information for a target scene, but it has the advantage of yielding distance information about the space.
One approach to extracting distance information, i.e., depth information, uses an infrared (IR) sensor: depth is recovered from structured IR patterns or modulated IR signals. Although this approach performs well indoors, it performs poorly outdoors and near indoor windows, so an IR sensor has the disadvantage that it can be used only in limited places.
Another approach operates in outdoor environments and extracts depth information using an image sensor that is inexpensive compared with an IR sensor: depth information can be extracted with a stereo camera built from image sensors. Because a stereo camera is used, the method of matching the left and right images is important. In stereo matching, similarity can be judged from the luminance information of the left and right images, or from their color information.
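As a minimal illustration of luminance-based similarity matching, the sketch below computes a disparity map by minimizing the sum of absolute differences (SAD) over a small window. This is a generic textbook technique, not the matching method claimed in this patent; the function name, window size, and disparity range are assumptions.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=3):
    """Brute-force disparity by minimizing the sum of absolute
    differences (SAD) of luminance over a square window."""
    h, w = left.shape
    pad = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    L = np.pad(left.astype(np.float32), pad, mode='edge')
    R = np.pad(right.astype(np.float32), pad, mode='edge')
    for y in range(h):
        for x in range(w):
            patch = L[y:y + win, x:x + win]
            best, best_d = np.inf, 0
            # A point at x in the left image appears at x - d in the right.
            for d in range(min(max_disp, x + 1)):
                cand = R[y:y + win, x - d:x - d + win]
                cost = np.abs(patch - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the right image is the left shifted by a known amount, the interior of the recovered map equals that shift.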
These methods can extract depth information from an image, but they have the problem of producing noise when applied to real-time video.
To solve the problems described above, the present invention proposes a stereo matching technique that obtains a robust disparity result by using a codebook to reduce the errors in the disparity obtained from stereo matching.
Specifically, a robust, codebook-based stereo matching technique that incorporates surrounding background information is proposed in order to perform stereo matching in real time and to solve the occlusion problem caused by foreground objects.
To solve the problems described above, according to one aspect of the present invention, a stereo image matching apparatus is proposed that comprises: a parallax operation block for calculating disparity by matching the left and right images obtained from a stereo camera and detecting an area where the disparity is occluded (an occlusion area); a codebook storage block storing a codebook in which codewords for information including color, brightness, and depth in previous frames are recorded; and a codeword update block for generating codewords for information including color, brightness, and depth in the current frame and for updating the codewords in the current frame using the codebook in the occlusion area detected by the parallax operation block.
In one example, the codeword update block updates the codewords using the codebook in the occlusion area, and updates the codewords or adds new codewords using the codebook in the non-occlusion area, updating the codebook accordingly.
In this case, in one example, the codeword update block compares the current codewords for color, brightness, or color-and-brightness information in the current frame for the occlusion area with the previous codewords for the same information stored in the codebook, and updates the depth codeword of the current frame with the depth codeword from the previous frames if the difference between the current codewords and the previous codewords is at or below a set threshold.
Further, according to one example, the codeword update block compares, for the non-occlusion area, the current codewords for depth and for at least one of color and brightness in the current frame with the previous codewords for the same information stored in the codebook, and averages previous codewords within a preset range of similarity to the current codewords with the current codewords, updating the codewords and the codebook.
In another example, if no previous codewords within the similar range exist for the non-occlusion area, the codeword update block adds the current codewords of the current frame as new codewords and updates the codebook.
In one example, the parallax operation block includes: a parallax operator for calculating disparity by matching the left and right images; and an occlusion detector for detecting the occlusion area from the computed disparity.
In addition, in one example, the stereo image matching apparatus may further include a stereoscopic image generation block that generates a stereoscopic image using the updated or added depth codewords in the current frame.
Next, to solve the problems described above, according to another aspect of the present invention, a stereo image matching method is provided that comprises: a parallax calculation and area detection step of calculating disparity by matching the left and right images acquired from a stereo camera and detecting an area where the disparity is occluded (an occlusion area); and a codeword updating step of generating codewords for information including color, brightness, and depth in the current frame and, for each detected occlusion area, updating the codewords in the current frame using a codebook in which codewords for information including color, brightness, and depth in previous frames are recorded.
In one example, the codeword updating step may include: an occlusion codeword updating step of updating the codewords using the codebook for the occlusion area; a non-occlusion codeword updating step of updating codewords or adding new codewords using the codebook for the non-occlusion area; and a codebook updating step of updating the codebook in accordance with the updated or added codewords.
In this case, in one example, in the occlusion codeword updating step, the current codewords for color, brightness, or color-and-brightness information in the current frame for the occlusion area are compared with the previous codewords for the same information stored in the codebook, and if the difference between the current codewords and the previous codewords is at or below the set threshold, the depth codeword of the current frame is updated with the depth codeword from the previous frames.
In another example, in the non-occlusion codeword updating step, the current codewords for depth and for at least one of color and brightness in the current frame for the non-occlusion area are compared with the previous codewords for the same information, and if previous codewords within a predetermined range of similarity to the current codewords exist, the codewords are updated by averaging.
According to another example, in the non-occlusion codeword updating step, if no previous codewords within the similar range exist for the non-occlusion area, the current codewords of the current frame can be added as new codewords.
Further, according to one example, the parallax calculation and area detection step includes: calculating disparity by matching the left and right images; and detecting the occlusion area from the computed disparity.
In another example, the stereo image matching method may further include a stereoscopic image generation step of generating a stereoscopic image using the updated or added depth codewords in the current frame after the codeword updating step.
According to the embodiments of the present invention, errors in the disparity obtained through stereo matching can be reduced. That is, based on the more robust codebook-based stereo matching result, an improved disparity result and depth information can be obtained.
It is apparent that those of ordinary skill in the art can derive various effects not directly mentioned herein from the configurations according to the embodiments of the present invention.
FIG. 1 is a block diagram schematically illustrating a stereo image matching apparatus according to an embodiment of the present invention.
FIG. 2 is a block diagram schematically illustrating a stereo image matching apparatus according to another embodiment of the present invention.
FIG. 3 is a block diagram schematically illustrating a stereo image matching apparatus according to yet another embodiment of the present invention.
FIG. 4 is a flowchart schematically illustrating a stereo image matching method according to an embodiment of the present invention.
FIG. 5 is a flowchart schematically illustrating a stereo image matching method according to another embodiment of the present invention.
FIG. 6 is a flowchart schematically illustrating a stereo image matching method according to yet another embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the description, the same reference numerals denote the same components, and detailed descriptions of matters well known to those skilled in the art may be omitted.
As used herein, unless an element is described as being 'directly' connected, coupled, or disposed with respect to another element, it may be connected, coupled, or disposed not only directly but also with intervening elements present.
It should be noted that, even where a singular expression is used in this specification, it may represent the entire configuration unless this contradicts, clearly differs from, or is inconsistent with the concept of the invention. Terms such as "including", "having", and "comprising" are to be understood as open-ended and do not exclude the presence or addition of one or more other elements or combinations thereof.
A stereo image matching apparatus according to an embodiment of the present invention will be described in detail with reference to the drawings. Here, reference numerals not shown in the drawings to be referred to may be reference numerals in other drawings showing the same configuration.
FIG. 1 is a block diagram schematically showing a stereo image matching apparatus according to an embodiment of the present invention, FIG. 2 is a block diagram schematically showing a stereo image matching apparatus according to another embodiment of the present invention, and FIG. 3 is a block diagram schematically showing a stereo image matching apparatus according to yet another embodiment of the present invention.
Referring to FIGS. 1 to 3, a stereo image matching apparatus according to an embodiment of the present invention includes a parallax operation block 10, a codebook storage block 30, and a codeword update block 50.
Referring to FIGS. 1 to 3, the parallax operation block 10 calculates disparity by matching the left and right images obtained from a stereo camera (not shown) and detects an area where the disparity is occluded (an occlusion area).
The occlusion area refers to the area that is covered by the foreground in one of the two images due to the left-right disparity of the stereo pair, whereas the non-occlusion area refers to an area that is covered by the foreground in neither stereo image. That is, in the non-occlusion area the left-right disparity of the stereo pair can be calculated and detected, while in the occlusion area it cannot. Here, disparity means the horizontal distance between the positions at which a single point appears in the left and right images when the stereo pair is matched. Depth or distance information for the corresponding point can be calculated from the disparity detected during matching of the stereo pair.
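For a rectified stereo pair, the relation between disparity and depth mentioned above is the standard triangulation formula Z = f·B/d (focal length in pixels, baseline in meters). A small sketch, with names chosen for illustration:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth from disparity for a rectified stereo pair: Z = f * B / d.
    Returns None where disparity is zero or negative, as in an
    occlusion area where no match is found."""
    if disparity_px <= 0:
        return None
    return focal_px * baseline_m / disparity_px
```

For example, with a 700 px focal length and a 10 cm baseline, a 100 px disparity corresponds to 0.7 m of depth.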
For example, referring to FIG. 3, in one example the parallax operation block 10 may include a parallax operator 11 for calculating disparity by matching the left and right images, and an occlusion detector 13 for detecting the occlusion area from the computed disparity.
Next, referring to FIGS. 1 to 3, the codebook storage block 30 stores a codebook in which codewords for information including color, brightness, and depth in previous frames are recorded.
In a conventional codebook, the foreground region is extracted from the target scene on the assumption that the background and the camera are stationary. This has the disadvantage that performance degrades when the background changes or noise is present. To solve this problem, distance information is stored together with the existing color values, and the spatial and temporal continuity of objects is taken into account.
For example, the codebook storage block 30 may record, for each position, codewords containing the color, brightness, and depth values observed in previous frames.
Next, the codeword update block 50 generates codewords for information including color, brightness, and depth in the current frame.
The codeword update block 50 also updates the codewords in the current frame using the codebook in the occlusion area detected by the parallax operation block 10.
Referring to FIG. 3 or FIGS. 5 and/or 6, in one example the codeword update block 50 updates the codewords using the codebook in the occlusion area, and updates the codewords or adds new codewords using the codebook in the non-occlusion area, updating the codebook accordingly.
For example, if it is determined that reliable distance information has been obtained in the non-occlusion area, the distance information input in the current frame is output as it is, and the stored information is replaced and updated with the codewords of the current frame. In the occlusion area, on the other hand, the codewords from previous frames are compared with the codewords of the current frame, and the output can be corrected to the distance information of the stored codeword having similar color information and brightness values. Here, the codewords from previous frames are those stored in the codebook storage block 30.
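The per-pixel decision described above can be sketched as follows. The field names, the simple L1 color distance, and the threshold are assumptions for illustration, not the patent's exact measures:

```python
def correct_depth(pixel, codebook, occluded, threshold=20.0):
    """One-pixel sketch: trust measured depth in the non-occlusion
    case; in the occlusion case, borrow depth from the closest
    previous codeword with similar color and brightness.
    `pixel` is a dict with 'color' (3-tuple), 'brightness', 'depth';
    `codebook` is a list of codewords from previous frames."""
    def dist(cw):
        dc = sum(abs(a - b) for a, b in zip(cw['color'], pixel['color']))
        return dc + abs(cw['brightness'] - pixel['brightness'])

    if not occluded:
        # Reliable measurement: output it and refresh the codebook.
        codebook.append(dict(pixel))
        return pixel['depth']
    # Occluded: substitute the depth of a similar stored codeword.
    best = min(codebook, key=dist, default=None)
    if best is not None and dist(best) <= threshold:
        return best['depth']
    return pixel['depth']  # no reliable match; keep the raw value
```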
In the non-occlusion area, a method of updating the codewords using the codebook 30 is applied, as described below.
Specifically, in one example, the codeword update block 50 compares the current codewords for color, brightness, or color-and-brightness information in the current frame for the occlusion area with the previous codewords stored in the codebook 30, and updates the depth codeword of the current frame with the depth codeword from the previous frames if the difference is at or below the set threshold.
For example, the occlusion area can be detected in the occlusion detector 13 by checking the continuity of the two disparity maps obtained from the left and right images.
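The two-disparity-map continuity check mentioned above is commonly realized as a left-right consistency test; a minimal sketch under that assumption (the function name and tolerance are not taken from the patent):

```python
import numpy as np

def detect_occlusion(disp_left, disp_right, tol=1):
    """Left-right consistency check: a pixel in the left map is
    flagged as occluded when its disparity is not confirmed by the
    corresponding pixel of the right map."""
    h, w = disp_left.shape
    occluded = np.ones((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = disp_left[y, x]
            xr = x - d  # where this pixel lands in the right image
            if 0 <= xr < w and abs(disp_right[y, xr] - d) <= tol:
                occluded[y, x] = False
    return occluded
```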
Further, according to one example, the codeword update block 50 compares, for the non-occlusion area, the current codewords for depth and for at least one of color and brightness in the current frame with the previous codewords stored in the codebook 30, and averages previous codewords within a preset similarity range with the current codewords, updating the codewords and the codebook.
For example, if no previous codewords within the similar range exist for the non-occlusion area, the codeword update block 50 adds the current codewords of the current frame as new codewords and updates the codebook.
In other words, when no occlusion exists, it is judged that reliable distance information has been obtained, and the input distance value is used directly as the final information; the codebook is then updated accordingly.
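The non-occlusion handling described above (average a sufficiently similar stored codeword with the current one, otherwise add a new codeword) can be sketched as below. The field names, distance measure, threshold, and blending factor are assumptions for illustration:

```python
def update_codebook(codebook, current, threshold=15.0, alpha=0.5):
    """Non-occlusion update sketch: blend the first matching previous
    codeword with the current one, or append a new codeword when no
    previous codeword is within `threshold`."""
    def dist(cw):
        return (abs(cw['brightness'] - current['brightness'])
                + abs(cw['depth'] - current['depth']))
    for cw in codebook:
        if dist(cw) <= threshold:
            # Running average keeps the codeword stable under noise.
            for k in ('brightness', 'depth'):
                cw[k] = (1 - alpha) * cw[k] + alpha * current[k]
            return codebook
    codebook.append(dict(current))
    return codebook
```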
In addition, for example, a 3x3 median filter can be applied to obtain a smoother interpolated disparity map, and discontinuous noise components can be removed by weighting distances more heavily at similar color values along a one-dimensional scan line. Since an area with similar color values has a high probability of having similar actual distance values, the distance value can be corrected according to the similarity of the color values within the filter: if the color values in the filter are similar but the distance value contains an error, the distance value can be corrected in consideration of the color similarity.
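The 3x3 median filtering step mentioned above can be sketched in plain NumPy (an illustrative implementation, not the patent's):

```python
import numpy as np

def median3x3(disp):
    """3x3 median filter over a disparity map; edge pixels are
    handled by replicating the border."""
    pad = np.pad(disp, 1, mode='edge')
    out = np.empty_like(disp)
    h, w = disp.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(pad[y:y + 3, x:x + 3])
    return out
```

An isolated outlier in an otherwise flat disparity map is suppressed, which is exactly the discontinuous-noise removal described above.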
Further, in one example, the stereo image matching apparatus may further include a stereoscopic image generation block 70 that generates a stereoscopic image using the updated or added depth codewords in the current frame.
Next, a stereo image matching method according to another aspect of the present invention will be described in detail with reference to the drawings. Here, reference will be made to the stereo image matching apparatuses of the embodiments described above with reference to FIGS. 1 to 3, and redundant explanations may be omitted.
FIG. 4 is a flowchart schematically illustrating a stereo image matching method according to an embodiment of the present invention, FIG. 5 is a flowchart schematically illustrating a stereo image matching method according to another embodiment of the present invention, and FIG. 6 is a flowchart schematically illustrating a stereo image matching method according to yet another embodiment of the present invention.
Referring to FIGS. 4 to 6, a stereo image matching method according to an embodiment of the present invention includes a parallax calculation and area detection step S100 and a codeword updating step S300 or S300'. In addition, referring to FIG. 6, a stereo image matching method according to one example may further include a stereoscopic image generation step S500. Hereinafter, each step will be described in detail. The technique according to the present invention can be applied to implement element technologies in the field of computer vision, for example for effectively analyzing human gestures.
Referring to FIGS. 4 to 6, in the parallax calculation and area detection step S100, disparity is calculated by matching the left and right images obtained from a stereo camera (not shown), and the area where the disparity is occluded (the occlusion area) is detected.
For example, referring to FIG. 6, the parallax calculation and area detection step S100 may include a parallax calculation step S110 and an occlusion area detection step S130. In the parallax calculation step S110, the left and right images obtained from the stereo camera (not shown) are matched to calculate disparity. In the occlusion area detection step S130, the computed disparity is examined to extract the area where disparity is detected and to detect the occlusion area where it is not.
Next, referring to FIGS. 4 to 6, in the codeword updating step S300 or S300', codewords for information including color, brightness, and depth are generated in the current frame, and for each detected occlusion area the codewords in the current frame can be updated using the codebook 30 in which the codewords of previous frames are recorded.
Referring to FIGS. 4 to 6, in the codeword generation step S310, codewords for information including color, brightness, and depth in the current frame may be generated. Codewords for color and brightness can be generated in the occlusion area, but because the left-right disparity of the stereo pair cannot be detected there, the depth codeword cannot be generated from the current frame. Accordingly, in the codeword updating step S330 or S330', the codewords of the current frame, in particular the depth codeword, are updated using the codebook 30, which records information including color, brightness, and depth from previous frames, for the occlusion area detected in the occlusion area detection step S130.
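A codeword as generated in step S310 can be sketched as a small record per position; the field names and the simple brightness measure (mean of the color channels) are assumptions, and a missing depth models the occlusion case described above:

```python
def make_codeword(color_rgb, depth=None):
    """Codeword generation sketch: brightness is derived from color;
    in an occlusion area no depth is measured, so the depth field
    stays None until the codebook supplies one."""
    r, g, b = color_rgb
    return {
        'color': (r, g, b),
        'brightness': (r + g + b) / 3.0,  # assumed brightness measure
        'depth': depth,
    }
```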
For example, referring to FIGS. 5 and/or 6, in one example the codeword updating step S300 or S300' includes an occlusion codeword updating step S331, a non-occlusion codeword updating step S333, and a codebook updating step S350. Of course, the codeword updating step S300 or S300' may also include the codeword generation step S310.
Referring to FIGS. 5 and/or 6, in the occlusion codeword updating step S331, the codewords can be updated using the codebook 30 for the occlusion area.
For example, in one example, the occlusion codeword updating step S331 may compare the current codewords for color, brightness, or color-and-brightness information in the current frame with the previous codewords for the same information in the previous frames stored in the codebook 30. If, as a result of the comparison, the difference between the current codewords and the previous codewords is at or below the set threshold value, the depth codeword of the current frame can be updated with the depth codeword from the previous frames.
Referring to FIGS. 5 and 6, in the non-occlusion codeword updating step S333, the codewords can be updated, or new codewords added, using the codebook 30 for the non-occlusion area.
For example, in another example, in the non-occlusion codeword updating step S333, the current codewords for depth and for at least one of color and brightness in the current frame for the non-occlusion area can be compared with the previous codewords for the same information stored in the codebook 30, and previous codewords within a preset similarity range can be averaged with the current codewords to update the codewords.
If no previous codewords within the similar range exist for the non-occlusion area, in one example, the non-occlusion codeword updating step S333 may add the current codewords of the current frame as new codewords.
Subsequently, referring to FIGS. 5 and/or 6, in the codebook updating step S350, the codebook 30 can be updated according to the updated or added codewords.
Referring to FIG. 6, a stereo image matching method according to one example may further include a stereoscopic image generation step S500 of generating a stereoscopic image using the updated or added depth codewords in the current frame after the codeword updating step.
The foregoing embodiments and accompanying drawings are intended to illustrate the present invention and to facilitate its understanding by those skilled in the art, not to limit its scope. Embodiments combining the configurations described above can also be implemented by those skilled in the art from the foregoing detailed description. Accordingly, the present invention may be embodied in various forms without departing from its essential characteristics, and its scope should be construed according to the appended claims, including alternatives and equivalents apparent to those skilled in the art.
10: parallax operation block 11: parallax operator
13: occlusion detector 30: codebook storage block
50: codeword update block 70: stereoscopic image generation block
Claims (14)
A stereo image matching apparatus comprising: a parallax operation block for calculating disparity by matching left and right images obtained from a stereo camera and detecting an area where the disparity is occluded (an occlusion area);
A codebook storage block storing a codebook in which codewords for information including color, brightness, and depth in previous frames are recorded; And
A codeword update block for generating codewords for information including color, brightness, and depth in a current frame, and for updating the codewords in the current frame using the codebook in the occlusion area detected in the parallax operation block.
Wherein the codeword update block updates the codewords using the codebook in the occlusion area, and updates the codewords or adds new codewords using the codebook in the non-occlusion area, thereby updating the codebook.
Wherein the codeword update block compares the current codewords for color, brightness, or color-and-brightness information in the current frame for the occlusion area with the previous codewords for the same information in the previous frames stored in the codebook, and updates the depth codeword in the current frame with the depth codeword from the previous frames if the difference between the current codewords and the previous codewords is at or below a set threshold.
Wherein the codeword update block compares, for the non-occlusion area, the current codewords for depth and for at least one of color and brightness in the current frame with the previous codewords for the same information stored in the codebook, and averages previous codewords within a predetermined range of similarity to the current codewords with the current codewords, thereby updating the codewords and the codebook.
Wherein the codeword update block adds the current codewords in the current frame as new codewords and updates the codebook if no previous codewords within the similar range exist for the non-occlusion area.
Wherein the parallax operation block comprises: a parallax operator for calculating disparity by matching the left and right images; And an occlusion detector for detecting the occlusion area from the computed disparity.
Further comprising a stereoscopic image generation block for generating a stereoscopic image using the updated or added depth codewords in the current frame.
A stereo image matching method comprising: a parallax calculation and area detection step of calculating disparity by matching left and right images acquired from a stereo camera and detecting an area where the disparity is occluded (an occlusion area); And
A codeword updating step of generating codewords for information including color, brightness, and depth in a current frame and, for each detected occlusion area, updating the codewords in the current frame using a codebook in which codewords for information including color, brightness, and depth in previous frames are recorded.
The codeword updating step comprises:
An occlusion codeword updating step of updating the codewords using the codebook for the occlusion area;
A non-occlusion codeword updating step of updating the codewords or adding new codewords using the codebook for the non-occlusion area; And
A codebook updating step of updating the codebook according to the update or addition of the codewords.
Wherein, in the occlusion codeword updating step, the current codewords for color, brightness, or color-and-brightness information in the current frame for the occlusion area are compared with the previous codewords for the same information stored in the codebook, and the depth codeword in the current frame is updated with the depth codeword from the previous frames if the difference between the current codewords and the previous codewords is at or below a set threshold.
Wherein the non-occlusion codeword updating step comprises comparing, for the non-occlusion area, the current codewords for depth and for at least one of color and brightness in the current frame with the previous codewords for the same information, and updating the codewords by averaging previous codewords within a predetermined range of similarity to the current codewords.
Wherein the non-occlusion codeword updating step comprises adding the current codewords in the current frame as new codewords if no previous codewords within the similar range exist for the non-occlusion area.
Wherein the parallax calculation and area detection step comprises: calculating disparity by matching the left and right images; And detecting the occlusion area from the computed disparity.
Further comprising a stereoscopic image generation step of generating a stereoscopic image using the updated or added depth codewords in the current frame after the codeword updating step.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020130036400A KR20140120527A (en) | 2013-04-03 | 2013-04-03 | Apparatus and method for matching stereo image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020130036400A KR20140120527A (en) | 2013-04-03 | 2013-04-03 | Apparatus and method for matching stereo image |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20140120527A (en) | 2014-10-14 |
Family
ID=51992378
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020130036400A KR20140120527A (en) | 2013-04-03 | 2013-04-03 | Apparatus and method for matching stereo image |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20140120527A (en) |
2013-04-03: KR application KR1020130036400A filed (publication KR20140120527A); status: not active, application discontinued.
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101889886B1 (en) * | 2017-12-22 | 2018-08-21 | 세명대학교 산학협력단 | Depth information generating method and apparatus |
US10482620B2 (en) | 2017-12-22 | 2019-11-19 | Light And Math Inc. | Method and device for producing depth information |
KR101988551B1 (en) | 2018-01-15 | 2019-06-12 | 충북대학교 산학협력단 | Efficient object detection and matching system and method using stereo vision depth estimation |
KR101999797B1 (en) | 2018-01-15 | 2019-07-12 | 충북대학교 산학협력단 | Stereo image feature matching system and method based on harris corner vector clustering algorithm |
KR20210081527A (en) | 2019-12-24 | 2021-07-02 | 동의대학교 산학협력단 | Apparatus and method for improving the performance of stereo-based ROI detection algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E601 | Decision to refuse application |