CN108038887B - Binocular RGB-D camera based depth contour estimation method - Google Patents


Info

Publication number
CN108038887B
CN108038887B CN201711311829.3A
Authority
CN
China
Prior art keywords
depth
image
edge
pixel position
color
Prior art date
Legal status
Expired - Fee Related
Application number
CN201711311829.3A
Other languages
Chinese (zh)
Other versions
CN108038887A (en)
Inventor
杨敬钰 (Yang Jingyu)
蔡常瑞 (Cai Changrui)
柏井慧 (Bai Jinghui)
侯春萍 (Hou Chunping)
李坤 (Li Kun)
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201711311829.3A
Publication of CN108038887A
Application granted
Publication of CN108038887B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/564 Depth or shape recovery from multiple images from contours

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of computer vision and provides a method for generating a high-quality depth contour estimate. The technical scheme adopted is a depth contour estimation method based on a binocular RGB-D camera, where RGB-D denotes paired color and depth images. First, low-resolution depth edge information is obtained; then, a high-resolution scatter plot of the depth edges is obtained through camera calibration and image registration, and edge interpolation yields a high-resolution continuous depth contour; finally, under the guidance and constraint of the color-image edges, the depth contour is corrected and optimized to generate the final depth contour image. The invention is mainly applied in computer vision settings.

Description

Binocular RGB-D camera based depth contour estimation method
Technical Field
The invention belongs to the field of computer vision. In particular, it relates to a depth contour estimation algorithm based on a binocular RGB-D camera.
Background
Depth acquisition is a major concern in both industry and academia. There are currently many methods for obtaining high-quality depth images, and they fall mainly into two categories. The first is passive acquisition, such as stereo matching, 2D-to-3D conversion, and color camera arrays. These methods are all inference-based: they estimate depth from the structural information of the color image rather than measuring it directly, and they often produce erroneous depth estimates. The second is the active mode, in which the depth image is acquired directly. With the advent of depth cameras such as the Kinect and depth-measuring ToF cameras, people increasingly tend to acquire scene depth information directly with a depth camera. The Kinect was formally unveiled by Microsoft at the E3 exhibition on June 2, 2009, as a motion-sensing peripheral for the XBOX 360. This approach not only improves the quality and completeness of scene information but also greatly reduces the workload of acquiring 3D content. Various depth cameras are now on the market: Microsoft introduced the first-generation Kinect depth camera in 2010 and recently updated it to the second-generation Kinect v2. Unlike the first-generation Kinect, which uses the speckle structured-light imaging principle, Kinect v2 uses ToF (time-of-flight) technology and can acquire depth images with higher accuracy than the first generation, but problems such as systematic error, low resolution, noise, and missing depth remain. In response to these problems, many depth-repair algorithms are currently used in depth-image reconstruction, including reconstruction models based on global optimization and depth-enhancement algorithms based on filtering, such as Markov random field (MRF) models, total variation (TV), guided filtering, and cross-based local multipoint filtering.
However, when a large area of depth is missing from the depth image, these methods do not perform at their best; edge blurring, depth-estimation errors, and similar problems easily occur, so depth-repair algorithms still need further improvement. Moreover, these methods address only single-viewpoint depth images, and they are neither effective nor applicable for stereoscopic display systems that require multi-viewpoint color-image/depth-image pairs.
For multi-viewpoint imaging tasks, methods realizing multi-viewpoint imaging with the first-generation Kinect have been proposed; Zhu et al. built a multi-view camera system with one ToF camera and two color cameras to obtain high-quality depth images; Choi et al. likewise established a multi-view system to perform upsampling restoration of low-resolution depth images. In these works, however, the focus is the accuracy of depth acquisition: the correlation between the viewpoints in the system is not considered, or only a simple fusion method is used to fuse the images of different viewpoints. It is therefore necessary to further analyze and refine the characterization of the binocular acquisition system, improving the fusion approach to achieve high-quality depth recovery.
Disclosure of Invention
The present invention aims to remedy the deficiencies of the prior art by providing a method for generating a high-quality depth contour estimate. The technical scheme adopted is a depth contour estimation method based on a binocular RGB-D camera, where RGB-D denotes paired color and depth images. First, low-resolution depth edge information is obtained; then, a high-resolution scatter plot of the depth edges is obtained through camera calibration and image registration, and edge interpolation yields a high-resolution continuous depth contour; finally, under the guidance and constraint of the color-image edges, the depth contour is corrected and optimized to generate the final depth contour image.
Further, the specific steps are as follows:
1) Obtaining low-resolution depth edge information
Denoise and fill the depth image, using filtering and bicubic interpolation as preprocessing operations on the original depth image; then extract the depth edge with the Canny detection operator.
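As a rough illustration of this preprocessing-plus-edge-extraction step, the following Python sketch fills missing depth values and thresholds the gradient magnitude as a stand-in for the Canny detector; the hole-filling rule, the use of `np.gradient`, and the threshold are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def fill_and_edges(depth, thresh=0.4):
    """Sketch of step 1: fill missing depth (zeros) with the mean of valid
    3x3 neighbors, then mark edges where the depth gradient magnitude
    exceeds `thresh` (a simple stand-in for the Canny detector)."""
    d = depth.astype(float).copy()
    h, w = d.shape
    # Hole filling: one pass of valid-neighbor averaging.
    for y in range(h):
        for x in range(w):
            if d[y, x] == 0:
                patch = d[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
                valid = patch[patch > 0]
                if valid.size:
                    d[y, x] = valid.mean()
    # Edge map from finite-difference gradients.
    gy, gx = np.gradient(d)
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)
```

In practice one would use a real denoising filter, bicubic interpolation, and `cv2.Canny`; this sketch only shows the data flow from a noisy, hole-ridden depth map to a binary edge map.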
2) Generating a high-resolution scatter plot of depth edges
Using the camera parameters obtained from depth-image calibration and preprocessing, perform image registration on the original depth image so that it has the same resolution as the color image, yielding a high-resolution depth-edge scatter diagram.
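The registration in this step can be pictured as back-projecting each low-resolution edge pixel to 3D and reprojecting it into the color camera. The sketch below assumes pinhole intrinsics K_d and K_c and a calibrated rigid transform (R, t); all names are hypothetical, and the rounding-based splatting is a simplification of whatever registration the patent actually performs.

```python
import numpy as np

def warp_edges_to_color_view(edge_px, depth, K_d, K_c, R, t, hr_shape):
    """Sketch of step 2: back-project low-resolution depth-edge pixels with
    the depth intrinsics K_d, transform them by the calibrated (R, t) into
    the color camera frame, and reproject with K_c, producing a sparse
    high-resolution scatter map of depth-edge points."""
    H, W = hr_shape
    scatter = np.zeros(hr_shape, dtype=np.float32)
    for (v, u) in edge_px:                  # (row, col) in the depth image
        z = depth[v, u]
        if z <= 0:
            continue                        # skip invalid depth
        p_d = z * np.linalg.inv(K_d) @ np.array([u, v, 1.0])  # 3D, depth cam
        p_c = R @ p_d + t                                     # 3D, color cam
        uvw = K_c @ p_c
        uc = int(round(uvw[0] / uvw[2]))
        vc = int(round(uvw[1] / uvw[2]))
        if 0 <= vc < H and 0 <= uc < W:
            scatter[vc, uc] = z             # sparse edge-depth sample
    return scatter
```

With K_c carrying roughly twice the focal length of K_d, the edge points land on a grid twice as fine, which is why the result is a scatter of isolated samples rather than a dense edge map.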
3) Generating a high-resolution continuous depth contour
For the pixel position x of an edge point in the color edge image, convert the edge information within its neighborhood N(x) into one-dimensionally represented coordinate pairs, obtaining a coordinate scatter set X = {x_i} and the corresponding values f(x_i). Then, for each position x to be interpolated, minimize the weighted least-squares error of its fitting function p(·):

min over p: Σ_i θ(||x − x_i||) ||p(x_i) − f(x_i)||²  (1)

where θ(·) is a non-negative weight function, Σ is the summation operation, and ||·|| is the Euclidean distance. After MLS fitting interpolation, the inverse transformation (one dimension back to two) is applied to obtain a continuous depth contour image.
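A minimal version of the MLS fit in formula (1), applied to the one-dimensional coordinate pairs described above, can be sketched as follows; the Gaussian weight for θ, the polynomial degree, and the bandwidth h are illustrative choices, not values from the patent.

```python
import numpy as np

def mls_interpolate(xs, fs, x_query, degree=2, h=1.0):
    """Sketch of step 3 / formula (1): at the query position x, minimize
    sum_i theta(||x - x_i||) * (p(x_i) - f(x_i))^2 over polynomials p of
    the given degree, with a Gaussian weight theta. Returns p(x)."""
    theta = np.exp(-((xs - x_query) ** 2) / (2 * h * h))  # non-negative weights
    A = np.vander(xs, degree + 1)                         # polynomial basis
    W = np.diag(theta)
    # Weighted normal equations: (A^T W A) c = A^T W f
    coeffs = np.linalg.solve(A.T @ W @ A, A.T @ W @ fs)
    return np.polyval(coeffs, x_query)
```

Because the weights re-center on every query point, evaluating this at a dense set of positions traces a smooth, continuous curve through the sparse edge scatter, which is exactly what the connected depth contour requires.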
4) Generating a high-resolution depth image D_L
A nonlinear transformation and the Canny detection operator are used jointly to extract the color-image edges, avoiding the excessive fine texture produced when the Canny operator is used alone; viewpoint registration of the depth image is then completed to obtain the high-resolution depth image D_L.
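To illustrate why the nonlinear transform helps, the sketch below applies a gamma curve before a gradient-threshold edge detector (standing in for Canny): compressing low intensities shrinks the gradients of low-contrast texture, so only strong structural edges survive. The gamma value and threshold are assumptions for illustration.

```python
import numpy as np

def color_edges(gray, gamma=2.2, thresh=0.2):
    """Sketch of step 4: a nonlinear (gamma) transform before edge
    detection suppresses low-contrast fine texture; a gradient-magnitude
    threshold stands in for the Canny detector."""
    g = (gray.astype(float) / 255.0) ** gamma  # compress low intensities
    gy, gx = np.gradient(g)
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)
```

A small intensity wiggle (say 50 vs 60) nearly vanishes after the gamma curve, while a 50-to-255 structural step keeps a large gradient, so the edge map contains far less texture clutter than raw Canny output would.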
5) Combining depth scatter and color edges, depth-contour correction optimization:

[Equation (2), rendered as an image in the original]

where x is the pixel position of an edge point in the color edge image; N_d(x) and the corresponding neighborhoods denote, at that position, the neighborhood regions of the main-viewpoint depth image D_L, the high-resolution depth contour image, and the color edge image E_c; T(·) is the transformation operation from two dimensions to one; G(·) is a Gaussian kernel, representing the color-edge constraint term; and ∇(·) is the gradient operation. The physical meaning of formula (2) is that if the depth contour and the color edge at the same pixel position have the same variation trend, i.e. curvature, then that pixel is considered more likely to be a depth-contour point. Here R(·) denotes the constraint term corresponding to the main-viewpoint depth image, defined as

R(x) = 0 if |n_i − n_j| < N_th1 and |d_i − d_j| < D_th1 for some i ≠ j; R(x) = 1 otherwise  (3)

where i, j = 1, 2, 3, 4 index the four sub-regions (up, down, left, right) of the neighborhood N_d(x); n_i and d_i denote the number of valid depth values and the mean depth of sub-region i; and N_th1 and D_th1 are the thresholds on the valid-depth-count difference and the depth-mean difference, respectively. The constraint term expresses that, among the four neighborhood sub-regions, as long as any two sub-regions have a similar number of valid depth values and a similar depth mean, the neighborhood is considered a depth-smooth region; no depth contour exists within that neighborhood, so no depth-contour information should be present at that pixel position, i.e. E_d(x) = 0.
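The smooth-region test behind R(·) can be sketched as follows; the half-overlap sub-region split and the thresholds n_th and d_th stand in for N_d(x), N_th1, and D_th1 and are illustrative only.

```python
import numpy as np

def is_depth_smooth(patch, n_th=3, d_th=0.1):
    """Sketch of the constraint in formula (3): split the neighborhood
    into up/down/left/right sub-regions; if any two sub-regions have a
    similar count of valid depth values AND a similar depth mean, the
    region is declared depth-smooth, so E_d(x) = 0 there."""
    h, w = patch.shape
    subs = [patch[:h // 2, :], patch[h // 2:, :],   # up, down
            patch[:, :w // 2], patch[:, w // 2:]]   # left, right
    stats = []
    for s in subs:
        valid = s[s > 0]                            # zeros = missing depth
        stats.append((valid.size, valid.mean() if valid.size else 0.0))
    for i in range(4):
        for j in range(i + 1, 4):
            if (abs(stats[i][0] - stats[j][0]) < n_th
                    and abs(stats[i][1] - stats[j][1]) < d_th):
                return True    # smooth region: suppress any contour here
    return False
```

Applied at every candidate contour pixel, this test vetoes the redundant color-edge responses that fall inside depth-smooth regions, which is the pruning role the text assigns to R(·).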
The invention has the technical characteristics and effects that:
Addressing the low quality of depth contour estimation, the method extracts the depth edges of a low-resolution depth map as the initialized depth contour, performs connected reconstruction of the depth-contour scatter points after viewpoint warping by combining color edges with the moving least squares (MLS) method, and finally obtains a high-resolution, connected, smooth depth contour. The invention has the following characteristics:
1. the advantages of the binocular system are fully utilized, and more information references are provided.
2. The depth edges of the low-resolution depth map are first extracted as the initialized depth contour.
3. In conjunction with the color edges, the depth-contour scatter points are connected using MLS.
Drawings
Fig. 1 is the flow chart of the algorithm, in which the symbols rendered as images in the original denote, respectively, the low-resolution depth edge information, the high-resolution scatter plot of the depth edges, and the high-resolution continuous depth contour; E_c is the color edge image and E_d is the final depth contour estimate.
FIG. 2 is a joint representation of the color edges (red), the calibrated depth edges (green), and the main-viewpoint depth image (blue);
fig. 3 is a high resolution depth profile estimation result.
Detailed Description
The main-viewpoint color-depth image pair is used as input information. First, low-resolution depth edge information is obtained; then, a high-resolution scatter plot of the depth edges is obtained through camera calibration and image registration, and edge interpolation yields a high-resolution continuous depth contour; finally, under the guidance and constraint of the color-image edges, the depth contour is corrected and optimized to generate the final depth contour image. The present invention is described in detail below with reference to the accompanying drawings and examples.
1) Obtaining low resolution depth edge information
Figure BDA0001503155970000034
Because the original depth image has noise and depth loss, the original depth image needs to be denoised and filled firstly, filtering and bicubic interpolation are adopted as preprocessing operations of the original depth image, and then a canny detection operator is utilized to extract a depth edge
Figure BDA0001503155970000035
2) Generating a high-resolution scatter plot of depth edges
Using the camera parameters obtained from depth-image calibration and preprocessing, perform image registration on the original depth image so that it has the same resolution as the color image, yielding a high-resolution depth-edge scatter diagram.
3) Generating a high-resolution continuous depth contour
For the pixel position x of an edge point in the color edge image, convert the edge information within its neighborhood N(x) into one-dimensionally represented coordinate pairs, obtaining a coordinate scatter set X = {x_i} and the corresponding values f(x_i). Then, for each position x to be interpolated, minimize the weighted least-squares error of its fitting function p(·):

min over p: Σ_i θ(||x − x_i||) ||p(x_i) − f(x_i)||²  (1)

where θ(·) is a non-negative weight function, Σ is the summation operation, and ||·|| is the Euclidean distance. After MLS fitting interpolation, the inverse transformation (one dimension back to two) is applied to obtain a continuous depth contour image.
4) Generating a high-resolution depth image D_L
A nonlinear transformation and the Canny detection operator are used jointly to extract the color-image edges, avoiding the excessive fine texture produced when the Canny operator is used alone; viewpoint registration of the depth image is then completed to obtain the high-resolution depth image D_L.
5) Combining depth scatter and color edges, depth-contour correction optimization:

[Equation (2), rendered as an image in the original]

where x is the pixel position of an edge point in the color edge image; N_d(x) and the corresponding neighborhoods denote, at that position, the neighborhood regions of the main-viewpoint depth image D_L, the high-resolution depth contour image, and the color edge image E_c; T(·) is the transformation operation from two dimensions to one; G(·) is a Gaussian kernel, representing the color-edge constraint term; and ∇(·) is the gradient operation. The physical meaning of formula (2) is that if the depth contour and the color edge at the same pixel position have the same variation trend (curvature), then that pixel is considered more likely to be a depth-contour point. Here R(·) denotes the constraint term corresponding to the main-viewpoint depth image, defined as

R(x) = 0 if |n_i − n_j| < N_th1 and |d_i − d_j| < D_th1 for some i ≠ j; R(x) = 1 otherwise  (3)

where i, j = 1, 2, 3, 4 index the four sub-regions (up, down, left, right) of the neighborhood N_d(x); n_i and d_i denote the number of valid depth values and the mean depth of sub-region i; and N_th1 and D_th1 are the thresholds on the valid-depth-count difference and the depth-mean difference, respectively. The constraint term indicates that, among the four neighborhood sub-regions, as long as any two sub-regions have a similar number of valid depth values and a similar depth mean, the neighborhood is considered a depth-smooth region; no depth contour exists within that neighborhood, so no depth-contour information should be present at that pixel position, i.e. E_d(x) = 0. This constraint term effectively removes redundant boundary information from the color edge image, avoids the erroneous guidance such boundaries would bring, reduces the appearance of false contours, and produces continuous, smooth depth contour information.
The method takes a main viewpoint color-depth image pair as input information, and firstly obtains low-resolution depth edge information; then, obtaining a high-resolution scatter diagram of the depth edge through camera calibration and image calibration operation, and carrying out edge interpolation to obtain a high-resolution continuous depth profile; and finally, under the guidance and constraint of the edge of the color image, carrying out correction optimization on the depth profile to generate a final depth profile image. (the experimental flow chart is shown in FIG. 1). The detailed description of the embodiments in conjunction with the drawings is as follows:
1) obtaining low resolution depth edge information
Figure BDA0001503155970000046
Because the original depth image has noise and depth loss, the original depth image needs to be denoised and filled firstly, filtering and bicubic interpolation are adopted as preprocessing operations of the original depth image, and then a canny detection operator is utilized to extract a depth edge
Figure BDA0001503155970000047
2) Generating a high-resolution scatter plot of depth edges
Using the camera parameters obtained from depth-image calibration and preprocessing, perform image registration on the original depth image so that it has the same resolution as the color image, yielding a high-resolution depth-edge scatter diagram.
3) Generating a high-resolution continuous depth contour (green in FIG. 2)
For the pixel position x of an edge point in the color edge image, convert the edge information within its neighborhood N(x) into one-dimensionally represented coordinate pairs, obtaining a coordinate scatter set X = {x_i} and the corresponding values f(x_i). Then, for each position x to be interpolated, minimize the weighted least-squares error of its fitting function p(·):

min over p: Σ_i θ(||x − x_i||) ||p(x_i) − f(x_i)||²  (1)

where θ(·) is a non-negative weight function, Σ is the summation operation, and ||·|| is the Euclidean distance. After MLS fitting interpolation, the inverse transformation (one dimension back to two) is applied to obtain a continuous depth contour image.
4) Generating a high-resolution depth image D_L (blue in FIG. 2)
A nonlinear transformation and the Canny detection operator are used jointly to extract the color-image edges, avoiding the excessive fine texture produced when the Canny operator is used alone; viewpoint registration of the depth image is then completed to obtain the high-resolution depth image D_L.
This combination rests on three observations: 1) the color edge image describes the scene's depth-contour information more accurately, but also contains redundant boundary information from non-depth contours; 2) for regions containing redundant boundary information, the corresponding main-viewpoint depth image D_L is usually smooth there, i.e. those areas are depth-smooth regions of the depth image; although the depth image is in scatter form, with its valid depth values sparsely distributed so that the depth values of a pixel's 4- or 8-neighborhood are hard to compute directly, it can still provide a degree of constraint for removing redundant boundary information from the color edge image; 3) compared with the color edge image, the depth contour obtained by MLS interpolation is positionally inaccurate, but it still has the same variation trend (i.e. curvature) as the true depth contour, which is crucial to the subsequent correction optimization. Based on these properties, this work further proposes a depth-contour correction optimization combining depth scatter and color edges.
5) Combining depth scatter and color edges, the depth contour is corrected and optimized to generate a high-resolution depth contour image (FIG. 3):

[Equation (2), rendered as an image in the original]

where x is the pixel position of an edge point in the color edge image; N_d(x) and the corresponding neighborhoods denote, at that position, the neighborhood regions of the main-viewpoint depth image D_L, the high-resolution depth contour image, and the color edge image E_c (red in FIG. 2); T(·) is the transformation operation from two dimensions to one; G(·) is a Gaussian kernel, representing the color-edge constraint term; and ∇(·) is the gradient operation. The physical meaning of formula (2) is that if the depth contour and the color edge at the same pixel position have the same variation trend (curvature), then that pixel is considered more likely to be a depth-contour point. Here R(·) denotes the constraint term corresponding to the main-viewpoint depth image, defined as

R(x) = 0 if |n_i − n_j| < N_th1 and |d_i − d_j| < D_th1 for some i ≠ j; R(x) = 1 otherwise  (3)

where i, j = 1, 2, 3, 4 index the four sub-regions (up, down, left, right) of the neighborhood N_d(x); n_i and d_i denote the number of valid depth values and the mean depth of sub-region i; and N_th1 and D_th1 are the thresholds on the valid-depth-count difference and the depth-mean difference, respectively. The constraint term indicates that, among the four neighborhood sub-regions, as long as any two sub-regions have a similar number of valid depth values and a similar depth mean, the neighborhood is considered a depth-smooth region; no depth contour exists within that neighborhood, so no depth-contour information should be present at that pixel position, i.e. E_d(x) = 0. This constraint term effectively removes redundant boundary information from the color edge image, avoids the erroneous guidance such boundaries would bring, reduces the appearance of false contours, and produces continuous, smooth depth contour information.

Claims (1)

1. A binocular RGB-D camera based depth contour estimation method is characterized in that RGB-D represents color and depth images; firstly, obtaining depth edge information with low resolution; then, obtaining a high-resolution scatter diagram of the depth edge through camera calibration and image calibration operation, and carrying out edge interpolation to obtain a high-resolution continuous depth profile; finally, under the guidance and constraint of the color image edge, carrying out correction optimization on the depth profile to generate a final depth profile image; the method comprises the following specific steps:
1) Obtaining low-resolution depth edge information
Denoising and filling the depth image, using filtering and bicubic interpolation as preprocessing operations on the original depth image, then extracting the depth edge with a Canny detection operator;
2) Generating a high-resolution scatter plot of depth edges
Using the camera parameters obtained from depth-image calibration and preprocessing, performing image registration on the original depth image so that it has the same resolution as the color image, yielding a high-resolution depth-edge scatter diagram;
3) Generating a high-resolution continuous depth contour
For the pixel position x of an edge point in the color edge image, converting the edge information within its neighborhood N(x) into one-dimensionally represented coordinate pairs, obtaining a coordinate scatter set X = {x_i} and the corresponding values f(x_i); then, for each position x to be interpolated, minimizing the weighted least-squares error of its fitting function p(·):

min over p: Σ_i θ(||x − x_i||) ||p(x_i) − f(x_i)||²  (1)

where θ(·) is a non-negative weight function, Σ is the summation operation, and ||·|| is the Euclidean distance; after moving least squares (MLS) fitting interpolation, applying the inverse transformation, i.e. one dimension back to two, to obtain a continuous depth contour image;
4) Generating a high-resolution depth image D_L
A nonlinear transformation and the Canny detection operator are used jointly to extract the color-image edges, avoiding the excessive fine texture produced when the Canny operator is used alone; viewpoint registration of the depth image is then completed to obtain the high-resolution depth image D_L;
5) Combining depth scatter and color edges, depth-contour correction optimization:

[Equation (2), rendered as an image in the original]

where x is the pixel position of an edge point in the color edge image; N_d(x) and the corresponding neighborhoods denote, at that position, the neighborhood regions of the main-viewpoint depth image D_L, the high-resolution depth contour image, and the color edge image E_c; T(·) is the transformation operation from two dimensions to one; G(·) is a Gaussian kernel, representing the color-edge constraint term; and ∇(·) is the gradient operation; the physical meaning of formula (2) is that if the depth contour and the color edge at the same pixel position have the same variation trend, i.e. curvature, then that pixel is considered more likely to be a depth-contour point, where R(·) denotes the constraint term corresponding to the main-viewpoint depth image, defined as

R(x) = 0 if |n_i − n_j| < N_th1 and |d_i − d_j| < D_th1 for some i ≠ j; R(x) = 1 otherwise  (3)

where i, j = 1, 2, 3, 4 index the four sub-regions (up, down, left, right) of the neighborhood N_d(x); n_i and d_i denote the number of valid depth values and the mean depth of sub-region i; and N_th1 and D_th1 are the thresholds on the valid-depth-count difference and the depth-mean difference, respectively; the constraint term indicates that, among the four neighborhood sub-regions, as long as any two sub-regions have a similar number of valid depth values and a similar depth mean, the neighborhood is considered a depth-smooth region, i.e. no depth contour exists within that neighborhood and no depth-contour information should be present at that pixel position, i.e. E_d(x) = 0.
CN201711311829.3A 2017-12-11 2017-12-11 Binocular RGB-D camera based depth contour estimation method Expired - Fee Related CN108038887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711311829.3A CN108038887B (en) 2017-12-11 2017-12-11 Binocular RGB-D camera based depth contour estimation method


Publications (2)

Publication Number Publication Date
CN108038887A CN108038887A (en) 2018-05-15
CN108038887B true CN108038887B (en) 2021-11-02

Family

ID=62102463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711311829.3A Expired - Fee Related CN108038887B (en) 2017-12-11 2017-12-11 Binocular RGB-D camera based depth contour estimation method

Country Status (1)

Country Link
CN (1) CN108038887B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108846857A (en) * 2018-06-28 2018-11-20 清华大学深圳研究生院 The measurement method and visual odometry of visual odometry
TWI725522B (en) * 2018-08-28 2021-04-21 鈺立微電子股份有限公司 Image capture system with calibration function
CN110322411A (en) * 2019-06-27 2019-10-11 Oppo广东移动通信有限公司 Optimization method, terminal and the storage medium of depth image
CN112535870B (en) * 2020-06-08 2021-12-14 苏州麟琪程科技有限公司 Soft cushion supply system and method applying ankle detection
CN112819878B (en) * 2021-01-28 2023-01-31 北京市商汤科技开发有限公司 Depth detection method and device, computer equipment and storage medium
CN113689400B (en) * 2021-08-24 2024-04-19 凌云光技术股份有限公司 Method and device for detecting profile edge of depth image section
CN116311079B (en) * 2023-05-12 2023-09-01 探长信息技术(苏州)有限公司 Civil security engineering monitoring method based on computer vision

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440664A (en) * 2013-09-05 2013-12-11 Tcl集团股份有限公司 Method, system and computing device for generating high-resolution depth map
CN106162147A (en) * 2016-07-28 2016-11-23 天津大学 Depth recovery method based on binocular Kinect depth camera system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440664A (en) * 2013-09-05 2013-12-11 Tcl集团股份有限公司 Method, system and computing device for generating high-resolution depth map
CN106162147A (en) * 2016-07-28 2016-11-23 天津大学 Depth recovery method based on binocular Kinect depth camera system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Color-Guided Depth Recovery From RGB-D Data Using an Adaptive Autoregressive Model; Jingyu Yang et al.; IEEE Transactions on Image Processing; Aug. 2014; vol. 23, no. 8; pp. 3443-3458 *
Depth Map Super-Resolution for Cost-Effective RGB-D Camera; Ryotaro Takaoka et al.; 2015 International Conference on Cyberworlds; Oct. 9, 2015; pp. 133-136 *
Depth computation and reconstruction for 3DTV (面向3DTV的深度计算重建); Ye Xinchen; China Doctoral Dissertations Full-text Database, Information Science and Technology; July 15, 2017; chapters 2-4 *

Also Published As

Publication number Publication date
CN108038887A (en) 2018-05-15

Similar Documents

Publication Publication Date Title
CN108038887B (en) Binocular RGB-D camera based depth contour estimation method
CN106651938B (en) A kind of depth map Enhancement Method merging high-resolution colour picture
CN102867288B (en) Depth image conversion apparatus and method
CN106780590B (en) Method and system for acquiring depth map
Liu et al. Guided depth enhancement via anisotropic diffusion
Yang et al. Color-guided depth recovery from RGB-D data using an adaptive autoregressive model
Kiechle et al. A joint intensity and depth co-sparse analysis model for depth map super-resolution
CN106408513B (en) Depth map super resolution ratio reconstruction method
CN107622480B (en) Kinect depth image enhancement method
CN118212141A (en) System and method for hybrid depth regularization
CN103761721B (en) One is applicable to space rope system machine human stereo vision fast image splicing method
US20200380711A1 (en) Method and device for joint segmentation and 3d reconstruction of a scene
Lindner et al. Sub-pixel data fusion and edge-enhanced distance refinement for 2d/3d images
CN103810685A (en) Super resolution processing method for depth image
CN104680496A (en) Kinect deep image remediation method based on colorful image segmentation
CN107680140B (en) Depth image high-resolution reconstruction method based on Kinect camera
CN104756490A (en) Depth image enhancement method
CN103440653A (en) Binocular vision stereo matching method
CN110853151A (en) Three-dimensional point set recovery method based on video
Maier et al. Super-resolution keyframe fusion for 3D modeling with high-quality textures
KR101714224B1 (en) 3 dimension image reconstruction apparatus and method based on sensor fusion
EP3566206B1 (en) Visual odometry
CN104537627B (en) A kind of post-processing approach of depth image
CN109903322B (en) Depth camera depth image restoration method
Shen et al. Depth map enhancement method based on joint bilateral filter

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20211102