CN108038887B - Binocular RGB-D camera based depth contour estimation method - Google Patents
- Publication number
- CN108038887B (application CN201711311829.3A)
- Authority
- CN
- China
- Prior art keywords
- depth
- image
- edge
- pixel position
- color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/564—Depth or shape recovery from multiple images from contours
Abstract
The invention belongs to the field of computer vision and provides a method for generating a high-quality depth contour estimate. The technical scheme adopted is a depth contour estimation method based on a binocular RGB-D camera, where RGB-D denotes a color-and-depth image pair. First, low-resolution depth edge information is obtained; then a high-resolution scatter diagram of the depth edges is obtained through camera calibration and image registration, and edge interpolation is performed to obtain a high-resolution continuous depth contour; finally, under the guidance and constraint of the color image edges, the depth contour is corrected and optimized to generate the final depth contour image. The invention is mainly applied in computer vision scenarios.
Description
Technical Field
The invention belongs to the field of computer vision, and in particular relates to a depth contour estimation algorithm based on a binocular RGB-D camera.
Background
Depth acquisition is a major concern in industry and academia. Currently, there are many methods for obtaining high-quality depth images, which fall into two main categories. The first is passive acquisition, such as stereo matching, 2D-to-3D conversion, and color camera arrays. However, these methods are all based on inference: they estimate depth from the structural information of the color image rather than measuring it directly, and therefore often produce erroneous depth estimates. The second is the active mode, in which the depth image is acquired directly. With the advent of depth cameras such as the Kinect and ToF (time-of-flight) cameras, people increasingly prefer to acquire scene depth information directly with a depth camera. The Kinect was formally unveiled by Microsoft at the E3 exhibition on June 2, 2009 as a motion-sensing peripheral for the XBOX 360. This approach not only improves the quality and completeness of the acquired scene information, but also greatly reduces the workload of acquiring 3D content. Various depth cameras are now on the market: Microsoft introduced the first-generation Kinect depth camera in 2010, and more recently updated it to the second-generation Kinect v2. Unlike the first-generation Kinect, which uses the speckle structured-light imaging principle, the Kinect v2 uses ToF technology and can acquire depth images with higher accuracy; however, problems such as systematic error, low resolution, noise, and missing depth remain. In response to these problems, many depth repair algorithms are currently used in depth image reconstruction.
These include depth image reconstruction models based on global optimization and depth enhancement algorithms based on filtering, such as Markov random field (MRF) based models, total variation (TV), guided filtering, and cross-based local multi-point filtering.
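As an illustration of the filtering-based depth enhancement family mentioned above, the following is a minimal joint-bilateral-upsampling sketch in plain NumPy. The kernel radius, sigma values, and the integer-scale assumption are all illustrative choices, not taken from the patent:

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, radius=2, sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-res depth map guided by a high-res intensity image.
    Weights combine spatial distance and guidance-image similarity — a
    standard filtering-based depth-enhancement scheme (all parameters
    here are illustrative)."""
    h, w = color_hr.shape
    scale = h // depth_lr.shape[0]          # assumes an integer upsampling factor
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    # nearest low-res sample for this high-res neighbor, clamped
                    ly = min(max((y + dy) // scale, 0), depth_lr.shape[0] - 1)
                    lx = min(max((x + dx) // scale, 0), depth_lr.shape[1] - 1)
                    cy = min(max(y + dy, 0), h - 1)
                    cx = min(max(x + dx, 0), w - 1)
                    ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    wr = np.exp(-(color_hr[y, x] - color_hr[cy, cx]) ** 2 / (2 * sigma_r ** 2))
                    num += ws * wr * depth_lr[ly, lx]
                    den += ws * wr
            out[y, x] = num / den
    return out
```

A constant low-resolution depth map should upsample to the same constant, since the weighted average of equal values is unchanged.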
However, when a large area of depth is missing from the depth image, these methods are not at their best: problems such as edge blurring and depth estimation errors easily occur, so depth repair algorithms still need further improvement. Moreover, these methods target only single-viewpoint depth images, and are neither effective nor applicable to stereoscopic display systems that require multi-viewpoint color-depth image pairs.
For multi-viewpoint imaging tasks, a method of realizing multi-viewpoint imaging with the first-generation Kinect has been proposed; Zhu et al. built a multi-view camera system with one ToF camera and two color cameras to obtain high-quality depth images; Choi et al. also established a multi-view system to perform upsampling restoration on low-resolution depth images. However, these works focus on the accuracy of depth acquisition and either do not consider the correlation between the viewpoints in the system or fuse the images of different viewpoints with only a simple fusion method. It is therefore necessary to further analyze and refine the characterization of the binocular acquisition system and improve the fusion approach to achieve high-quality depth recovery.
Disclosure of Invention
The present invention aims to remedy the deficiencies of the prior art by providing a method of generating a high-quality depth contour estimate. The technical scheme adopted is a depth contour estimation method based on a binocular RGB-D camera, where RGB-D denotes a color-and-depth image pair. First, low-resolution depth edge information is obtained; then a high-resolution scatter diagram of the depth edges is obtained through camera calibration and image registration, and edge interpolation is performed to obtain a high-resolution continuous depth contour; finally, under the guidance and constraint of the color image edges, the depth contour is corrected and optimized to generate the final depth contour image.
Further, the specific steps are as follows:
1) Denoise and fill the depth image: filtering and bicubic interpolation are adopted as preprocessing operations on the original depth image, and a Canny detection operator is then used to extract the depth edge;
2) Using the camera parameters obtained from depth image calibration and preprocessing, perform image registration on the original depth image so that it has the same resolution as the color image, obtaining a high-resolution depth edge scatter diagram;
3) For the pixel position x of an edge point in the color edge image, convert the edge information in its neighborhood N(x) into one-dimensionally represented coordinate pairs, obtaining a scattered coordinate set {xi} and the corresponding values f(xi); then, for the position x to be interpolated, minimize the weighted least-squares error of its fitting function p(·):
where θ(·) denotes a non-negative weight function, Σ is the summation operation, and ||·|| is the Euclidean distance. After MLS fitting interpolation, the inverse transformation (from one dimension back to two dimensions) is applied to obtain a continuous depth contour image;
4) Generating a high resolution depth image DL
A non-linear transformation and the Canny detection operator are used jointly to extract the color image edges, so as to avoid the excessive fine texture produced by using the Canny operator alone; the viewpoint registration of the depth image is then completed to obtain the high-resolution depth image DL;
5) Depth scatter-color edge combination, depth profile correction optimization:
where x is the pixel position of an edge point in the color edge image; Nd(x) and its counterparts denote the neighborhood regions at that position in the main-viewpoint depth image DL, the high-resolution depth contour image, and the color edge image Ec, respectively; T(·) denotes the transformation from two dimensions to one; G(·) is a Gaussian kernel representing the color edge constraint term; ∇ is the gradient operation. The physical meaning of formula (2) is that if the depth contour and the color edge at the same pixel position have the same variation trend, i.e. curvature, then the pixel is considered more likely to be a depth contour point. R(·) denotes the constraint term corresponding to the main-viewpoint depth image and is defined as:
where i, j = 1, 2, 3, 4 index the four sub-regions (upper, lower, left, right) of the neighborhood Nd(x), and Nth1 and Dth1 are the thresholds on the number of valid depth values and on the depth-mean difference, respectively. The constraint term indicates that, among the four neighborhood sub-regions, as long as any two sub-regions have a similar number of valid depth values and a similar depth mean, the neighborhood is considered a depth-smooth region; that is, no depth contour exists within the neighborhood, and no depth contour information should exist at that pixel position, i.e. Ed(x) = 0.
The invention has the technical characteristics and effects that:
Aiming at the problem of low-quality depth contour estimation, the method extracts the depth edges of a low-resolution depth map as the initial depth contour, performs connected reconstruction of the scattered depth contour points after viewpoint warping by combining the color edges with the Moving Least Squares (MLS) method, and finally obtains a high-resolution, connected, and smooth depth contour. The invention has the following characteristics:
1. the advantages of the binocular system are fully utilized, and more information references are provided.
2. The depth edges of the low-resolution depth map are first extracted as the initial depth contour.
3. In conjunction with the color edges, the scattered depth contour points are connected using MLS.
Drawings
Fig. 1 is a flow chart of the algorithm, showing the low-resolution depth edge information, the high-resolution scatter diagram of the depth edges, and the high-resolution continuous depth contour; Ec is the color edge image and Ed is the final depth contour estimate.
FIG. 2(a) is a joint representation of the color edge (red), calibrated depth edge (green), and dominant-viewpoint depth image (blue);
fig. 3 is a high resolution depth profile estimation result.
Detailed Description
The dominant viewpoint color-depth image pair is used as input information. Firstly, obtaining depth edge information with low resolution; then, obtaining a high-resolution scatter diagram of the depth edge through camera calibration and image calibration operation, and carrying out edge interpolation to obtain a high-resolution continuous depth profile; and finally, under the guidance and constraint of the edge of the color image, carrying out correction optimization on the depth profile to generate a final depth profile image. The present invention will be described in detail below with reference to the accompanying drawings and examples.
1) Because the original depth image contains noise and missing depth, it must first be denoised and filled: filtering and bicubic interpolation are adopted as preprocessing operations on the original depth image, and a Canny detection operator is then used to extract the depth edge.
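Step 1) can be sketched as follows. The patent names filtering, bicubic interpolation, and a Canny operator; to keep this sketch dependency-free, a median filter and a gradient-magnitude edge test stand in for them (in practice one would use, e.g., cv2.medianBlur, cv2.resize with INTER_CUBIC, and cv2.Canny). All thresholds are illustrative:

```python
import numpy as np

def median_denoise(d, k=3):
    """k x k median filtering: a simple stand-in for the denoising /
    hole-filling preprocessing step described in the text."""
    pad = k // 2
    dp = np.pad(d, pad, mode='edge')
    out = np.empty(d.shape, dtype=float)
    for y in range(d.shape[0]):
        for x in range(d.shape[1]):
            out[y, x] = np.median(dp[y:y + k, x:x + k])
    return out

def gradient_edges(d, thresh=0.5):
    """Binary depth-edge map from gradient magnitude; a crude stand-in
    for the Canny detector named in the text."""
    gy, gx = np.gradient(d.astype(float))
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)
```

Applied to a depth map with a vertical step, the edge map marks the columns straddling the discontinuity and leaves the flat regions empty.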
2) Using the camera parameters obtained from depth image calibration and preprocessing, perform image registration on the original depth image so that it has the same resolution as the color image, obtaining a high-resolution depth edge scatter diagram.
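Step 2) amounts to reprojecting each valid depth pixel into the color view with the calibrated parameters, which yields exactly the sparse high-resolution "scatter" the text describes. A minimal sketch, with hypothetical intrinsics/extrinsics and an assumed way of deriving the color image size from the principal point:

```python
import numpy as np

def register_depth_to_color(depth, K_d, K_c, R, t):
    """Project each valid low-res depth pixel into the high-res color view.
    K_d, K_c are 3x3 intrinsics; R, t the depth-to-color extrinsics
    (all hypothetical calibration values). Returns a sparse
    high-resolution scatter depth map."""
    # Assumption for this sketch: color image size from the principal point.
    h_c = int(2 * K_c[1, 2]); w_c = int(2 * K_c[0, 2])
    scatter = np.zeros((h_c, w_c))
    K_d_inv = np.linalg.inv(K_d)
    ys, xs = np.nonzero(depth > 0)
    for y, x in zip(ys, xs):
        z = depth[y, x]
        p = z * (K_d_inv @ np.array([x, y, 1.0]))   # back-project to 3D
        q = K_c @ (R @ p + t)                       # transform into color camera
        u, v = int(round(q[0] / q[2])), int(round(q[1] / q[2]))
        if 0 <= v < h_c and 0 <= u < w_c:
            scatter[v, u] = q[2]                    # keep the reprojected depth
    return scatter
```

With identity extrinsics and identical intrinsics, each depth sample lands back on its own pixel, which is a quick sanity check of the projection chain.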
3) For the pixel position x of an edge point in the color edge image, convert the edge information in its neighborhood N(x) into one-dimensionally represented coordinate pairs, obtaining a scattered coordinate set {xi} and the corresponding values f(xi); then, for the position x to be interpolated, minimize the weighted least-squares error of its fitting function p(·):
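Formula (1), the minimized objective, is not reproduced in this text (it appeared as an image in the original patent). A standard moving-least-squares form consistent with the symbols defined in the surrounding paragraphs (θ, xi, f(xi), p(·)) would be:

```latex
\min_{p \in \Pi_m} \; \sum_i \theta\!\left(\lVert x - x_i \rVert\right)\,\bigl|\, p(x_i) - f(x_i) \,\bigr|^2
```

Here Π_m, the space of polynomials of degree at most m, is an assumption of this reconstruction; the patent does not state the basis it fits.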
wherein θ (·) represents a non-negative weight function, Σ · is a summation operation, and | | · | | is an euclidean distance. After MLS fitting interpolation, inverse transformation (one-dimensional transformation into two-dimensional transformation) is carried out to obtain continuous depth profile image
4) Generating a high resolution depth image DL
A non-linear transformation and the Canny detection operator are used jointly to extract the color image edges, so as to avoid the excessive fine texture caused by using the Canny operator alone; the viewpoint registration of the depth image is then completed to obtain the high-resolution depth image DL.
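A dependency-free sketch of this edge-extraction step: a gamma curve stands in for the unspecified nonlinear transformation, and a gradient-magnitude threshold stands in for the Canny operator (a real pipeline would call cv2.Canny on the transformed image). The gamma value and threshold are assumptions:

```python
import numpy as np

def color_edges(img, gamma=0.5, thresh=0.2):
    """Nonlinear (gamma) transform followed by a gradient-magnitude edge
    test. Reshaping the intensity response before thresholding is one way
    to keep strong structural edges while suppressing fine texture, in the
    spirit of the 'nonlinear transformation + Canny' combination."""
    g = np.clip(img.astype(float), 0.0, 1.0) ** gamma   # nonlinear transformation
    gy, gx = np.gradient(g)
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)
```

On an image with a single strong step, the result marks only the columns at the step and nothing in the flat regions.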
5) Depth scatter-color edge combination, depth profile correction optimization:
where x is the pixel position of an edge point in the color edge image; Nd(x) and its counterparts denote the neighborhood regions at that position in the main-viewpoint depth image DL, the high-resolution depth contour image, and the color edge image Ec, respectively; T(·) denotes the transformation from two dimensions to one; G(·) is a Gaussian kernel representing the color edge constraint term; ∇ is the gradient operation. The physical meaning of formula (2) is that if the depth contour and the color edge at the same pixel position have the same variation trend (curvature), the pixel is considered more likely to be a depth contour point. R(·) denotes the constraint term corresponding to the main-viewpoint depth image, defined as:
where i, j = 1, 2, 3, 4 index the four sub-regions (upper, lower, left, right) of the neighborhood Nd(x), and Nth1 and Dth1 are the thresholds on the number of valid depth values and on the depth-mean difference, respectively. The constraint term indicates that, among the four neighborhood sub-regions, as long as any two sub-regions have a similar number of valid depth values and a similar depth mean, the neighborhood is considered a depth-smooth region; that is, no depth contour exists within the neighborhood, and no depth contour information should be present at that pixel position, i.e. Ed(x) = 0. This constraint term can effectively remove redundant boundary information from the color edge image, avoid the erroneous guidance brought by redundant boundaries, reduce the occurrence of spurious contours, and generate continuous, smooth depth contour information.
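The sub-region test R(·) described above can be sketched as follows. Reading the four sub-regions as the overlapping upper/lower/left/right halves of the neighborhood is one possible interpretation (the patent does not specify the partition), and the threshold values are illustrative:

```python
import numpy as np

def smooth_region_constraint(nbhd, n_th=3, d_th=0.5):
    """R(.) sketch: split the scattered depth neighborhood into upper/lower/
    left/right sub-regions; if any two sub-regions have a similar count of
    valid (non-zero) depth values and a similar depth mean, the neighborhood
    is taken as depth-smooth and the constraint returns 0 (no contour
    allowed here), otherwise 1."""
    h, w = nbhd.shape
    subs = [nbhd[:h // 2, :], nbhd[h // 2:, :], nbhd[:, :w // 2], nbhd[:, w // 2:]]
    stats = []
    for s in subs:
        valid = s[s > 0]
        stats.append((valid.size, valid.mean() if valid.size else 0.0))
    for i in range(4):
        for j in range(i + 1, 4):
            if abs(stats[i][0] - stats[j][0]) <= n_th and abs(stats[i][1] - stats[j][1]) <= d_th:
                return 0
    return 1
```

A constant neighborhood is classified as smooth (R = 0), while a neighborhood whose four sub-regions all differ in count or mean keeps its contour candidate (R = 1).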
The method takes the main-viewpoint color-depth image pair as input information. First, low-resolution depth edge information is obtained; then a high-resolution scatter diagram of the depth edges is obtained through camera calibration and image registration, and edge interpolation is performed to obtain a high-resolution continuous depth contour; finally, under the guidance and constraint of the color image edges, the depth contour is corrected and optimized to generate the final depth contour image (the experimental flow chart is shown in FIG. 1). The detailed embodiments are described below in conjunction with the drawings:
1) Because the original depth image contains noise and missing depth, it must first be denoised and filled: filtering and bicubic interpolation are adopted as preprocessing operations on the original depth image, and a Canny detection operator is then used to extract the depth edge.
2) Using the camera parameters obtained from depth image calibration and preprocessing, perform image registration on the original depth image so that it has the same resolution as the color image, obtaining a high-resolution depth edge scatter diagram.
3) For the pixel position x of an edge point in the color edge image, convert the edge information in its neighborhood N(x) into one-dimensionally represented coordinate pairs, obtaining a scattered coordinate set {xi} and the corresponding values f(xi); then, for the position x to be interpolated, minimize the weighted least-squares error of its fitting function p(·):
where θ(·) denotes a non-negative weight function, Σ is the summation operation, and ||·|| is the Euclidean distance. After MLS fitting interpolation, the inverse transformation (from one dimension back to two dimensions) is applied to obtain a continuous depth contour image.
4) Generating a high-resolution depth image DL (blue in FIG. 2)
A non-linear transformation and the Canny detection operator are used jointly to extract the color image edges, so as to avoid the excessive fine texture produced by using the Canny operator alone; the viewpoint registration of the depth image is then completed to obtain the high-resolution depth image DL.
This is because: 1) the color edge image describes the depth contour information of the scene more accurately, but also contains redundant boundary information from non-depth contours; 2) for those regions containing redundant boundary information, the corresponding main-viewpoint depth image DL usually has smooth depth values, i.e. those regions are depth-smooth regions of the depth image; although the depth image is in scatter form, its valid depth values are dispersed, and the depth values in the 4- or 8-neighborhood of a pixel are difficult to compute directly, it can still provide a certain degree of constraint to remove redundant boundary information from the color edge image; 3) compared with the color edge image, the depth contour obtained by MLS interpolation is less accurate in position, but it still has the same change trend (i.e. curvature) as the true depth contour, which is crucial for the subsequent corrective optimization of the depth contour. Based on these properties, the invention further proposes a depth contour correction optimization method combining the depth scatter and color edges.
5) Combine the depth scatter and color edges to correct and optimize the depth contour, generating a high-resolution depth contour image (FIG. 3):
where x is the pixel position of an edge point in the color edge image; Nd(x) and its counterparts denote the neighborhood regions at that position in the main-viewpoint depth image DL, the high-resolution depth contour image, and the color edge image Ec, respectively (FIG. 2, red); T(·) denotes the transformation from two dimensions to one; G(·) is a Gaussian kernel representing the color edge constraint term; ∇ is the gradient operation. The physical meaning of formula (2) is that if the depth contour and the color edge at the same pixel position have the same variation trend (curvature), the pixel is considered more likely to be a depth contour point. R(·) denotes the constraint term corresponding to the main-viewpoint depth image, defined as:
where i, j = 1, 2, 3, 4 index the four sub-regions (upper, lower, left, right) of the neighborhood Nd(x), and Nth1 and Dth1 are the thresholds on the number of valid depth values and on the depth-mean difference, respectively. The constraint term indicates that, among the four neighborhood sub-regions, as long as any two sub-regions have a similar number of valid depth values and a similar depth mean, the neighborhood is considered a depth-smooth region; that is, no depth contour exists within the neighborhood, and no depth contour information should be present at that pixel position, i.e. Ed(x) = 0. This constraint term can effectively remove redundant boundary information from the color edge image, avoid the erroneous guidance brought by redundant boundaries, reduce the occurrence of spurious contours, and generate continuous, smooth depth contour information.
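Since formula (2) itself is not reproduced in the source, the following sketch only encodes its stated physical meaning: reward agreement between the change trends (gradients) of the 1-D-transformed depth-contour and color-edge neighborhoods via a Gaussian kernel. The exact weighting and the use of a summed squared gradient difference are assumptions of this sketch:

```python
import numpy as np

def contour_score(color_nbhd_1d, depth_nbhd_1d, sigma=1.0):
    """Curvature-agreement term: both neighborhoods are assumed already
    flattened to 1-D by T(.); the Gaussian kernel G returns 1 when the
    gradients (change trends) match exactly and decays as they diverge."""
    gc = np.gradient(color_nbhd_1d.astype(float))
    gd = np.gradient(depth_nbhd_1d.astype(float))
    return float(np.exp(-np.sum((gc - gd) ** 2) / (2 * sigma ** 2)))
```

In a full pipeline this score would be multiplied by the R(·) constraint so that candidate contour points in depth-smooth neighborhoods are suppressed.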
Claims (1)
1. A binocular RGB-D camera based depth contour estimation method, characterized in that RGB-D denotes a color-and-depth image pair; first, low-resolution depth edge information is obtained; then, a high-resolution scatter diagram of the depth edges is obtained through camera calibration and image registration, and edge interpolation is performed to obtain a high-resolution continuous depth contour; finally, under the guidance and constraint of the color image edges, the depth contour is corrected and optimized to generate the final depth contour image; the specific steps are as follows:
1) Denoise and fill the depth image: filtering and bicubic interpolation are adopted as preprocessing operations on the original depth image, and a Canny detection operator is then used to extract the depth edge;
2) Using the camera parameters obtained with the aid of depth image calibration and preprocessing, perform image registration on the original depth image so that it has the same resolution as the color image, obtaining a high-resolution depth edge scatter diagram;
3) For the pixel position x of an edge point in the color edge image, convert the edge information in its neighborhood N(x) into one-dimensionally represented coordinate pairs, obtaining a scattered coordinate set {xi} and the corresponding values f(xi); then, for the position x to be interpolated, minimize the weighted least-squares error of its fitting function p(·):
where θ(·) denotes a non-negative weight function, Σ is the summation operation, and ||·|| is the Euclidean distance; after moving least squares (MLS) fitting interpolation, the inverse transformation (from one dimension back to two dimensions) is applied to obtain a continuous depth contour image;
4) Generating a high resolution depth image DL
A non-linear transformation and the Canny detection operator are used jointly to extract the color image edges, so as to avoid the excessive fine texture produced by using the Canny operator alone; the viewpoint registration of the depth image is then completed to obtain the high-resolution depth image DL;
5) Depth scatter-color edge combination, depth profile correction optimization:
where x is the pixel position of an edge point in the color edge image; Nd(x) and its counterparts denote the neighborhood regions at that position in the main-viewpoint depth image DL, the high-resolution depth contour image, and the color edge image Ec, respectively; T(·) denotes the transformation from two dimensions to one; G(·) is a Gaussian kernel representing the color edge constraint term; ∇ is the gradient operation; the physical meaning of formula (2) is that if the depth contour and the color edge at the same pixel position have the same variation trend, i.e. curvature, then the pixel is considered more likely to be a depth contour point, where R(·) denotes the constraint term corresponding to the main-viewpoint depth image, defined as:
where i, j = 1, 2, 3, 4 index the four sub-regions (upper, lower, left, right) of the neighborhood Nd(x), and Nth1 and Dth1 are the thresholds on the number of valid depth values and on the depth-mean difference, respectively; the constraint term indicates that, among the four neighborhood sub-regions, as long as any two sub-regions have a similar number of valid depth values and a similar depth mean, the neighborhood is considered a depth-smooth region; that is, no depth contour exists within the neighborhood, and no depth contour information should exist at that pixel position, i.e. Ed(x) = 0.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711311829.3A CN108038887B (en) | 2017-12-11 | 2017-12-11 | Binocular RGB-D camera based depth contour estimation method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711311829.3A CN108038887B (en) | 2017-12-11 | 2017-12-11 | Binocular RGB-D camera based depth contour estimation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108038887A CN108038887A (en) | 2018-05-15 |
CN108038887B true CN108038887B (en) | 2021-11-02 |
Family
ID=62102463
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711311829.3A Expired - Fee Related CN108038887B (en) | 2017-12-11 | 2017-12-11 | Binocular RGB-D camera based depth contour estimation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108038887B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108846857A (en) * | 2018-06-28 | 2018-11-20 | 清华大学深圳研究生院 | The measurement method and visual odometry of visual odometry |
TWI725522B (en) * | 2018-08-28 | 2021-04-21 | 鈺立微電子股份有限公司 | Image capture system with calibration function |
CN110322411A (en) * | 2019-06-27 | 2019-10-11 | Oppo广东移动通信有限公司 | Optimization method, terminal and the storage medium of depth image |
CN112535870B (en) * | 2020-06-08 | 2021-12-14 | 苏州麟琪程科技有限公司 | Soft cushion supply system and method applying ankle detection |
CN112819878B (en) * | 2021-01-28 | 2023-01-31 | 北京市商汤科技开发有限公司 | Depth detection method and device, computer equipment and storage medium |
CN113689400B (en) * | 2021-08-24 | 2024-04-19 | 凌云光技术股份有限公司 | Method and device for detecting profile edge of depth image section |
CN116311079B (en) * | 2023-05-12 | 2023-09-01 | 探长信息技术(苏州)有限公司 | Civil security engineering monitoring method based on computer vision |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103440664A (en) * | 2013-09-05 | 2013-12-11 | Tcl集团股份有限公司 | Method, system and computing device for generating high-resolution depth map |
CN106162147A (en) * | 2016-07-28 | 2016-11-23 | 天津大学 | Depth recovery method based on binocular Kinect depth camera system |
- 2017
- 2017-12-11 CN CN201711311829.3A patent/CN108038887B/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103440664A (en) * | 2013-09-05 | 2013-12-11 | Tcl集团股份有限公司 | Method, system and computing device for generating high-resolution depth map |
CN106162147A (en) * | 2016-07-28 | 2016-11-23 | 天津大学 | Depth recovery method based on binocular Kinect depth camera system |
Non-Patent Citations (3)
Title |
---|
Color-Guided Depth Recovery From RGB-D Data Using an Adaptive Autoregressive Model;Jingyu Yang 等;《IEEE TRANSACTIONS ON IMAGE PROCESSING》;20140831;第23卷(第8期);第3443-3458页 * |
Depth Map Super-Resolution for Cost-Effective RGB-D Camera;Ryotaro Takaoka 等;《2015 International Conference on Cyberworlds》;20151009;第133-136页 * |
面向3DTV的深度计算重建;叶昕辰;《中国博士学位论文全文数据库 信息科技辑》;20170715;正文第2-4章 * |
Also Published As
Publication number | Publication date |
---|---|
CN108038887A (en) | 2018-05-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108038887B (en) | Binocular RGB-D camera based depth contour estimation method | |
CN106651938B (en) | A kind of depth map Enhancement Method merging high-resolution colour picture | |
CN102867288B (en) | Depth image conversion apparatus and method | |
CN106780590B (en) | Method and system for acquiring depth map | |
Liu et al. | Guided depth enhancement via anisotropic diffusion | |
Yang et al. | Color-guided depth recovery from RGB-D data using an adaptive autoregressive model | |
Kiechle et al. | A joint intensity and depth co-sparse analysis model for depth map super-resolution | |
CN106408513B (en) | Depth map super resolution ratio reconstruction method | |
CN107622480B (en) | Kinect depth image enhancement method | |
CN118212141A (en) | System and method for hybrid depth regularization | |
CN103761721B (en) | One is applicable to space rope system machine human stereo vision fast image splicing method | |
US20200380711A1 (en) | Method and device for joint segmentation and 3d reconstruction of a scene | |
Lindner et al. | Sub-pixel data fusion and edge-enhanced distance refinement for 2d/3d images | |
CN103810685A (en) | Super resolution processing method for depth image | |
CN104680496A (en) | Kinect deep image remediation method based on colorful image segmentation | |
CN107680140B (en) | Depth image high-resolution reconstruction method based on Kinect camera | |
CN104756490A (en) | Depth image enhancement method | |
CN103440653A (en) | Binocular vision stereo matching method | |
CN110853151A (en) | Three-dimensional point set recovery method based on video | |
Maier et al. | Super-resolution keyframe fusion for 3D modeling with high-quality textures | |
KR101714224B1 (en) | 3 dimension image reconstruction apparatus and method based on sensor fusion | |
EP3566206B1 (en) | Visual odometry | |
CN104537627B (en) | A kind of post-processing approach of depth image | |
CN109903322B (en) | Depth camera depth image restoration method | |
Shen et al. | Depth map enhancement method based on joint bilateral filter |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20211102 |