CN109903322B - Depth camera depth image restoration method - Google Patents

Depth camera depth image restoration method

Info

Publication number
CN109903322B (application CN201910066697.5A)
Authority
CN
China
Prior art keywords
pixel
pixel block
repaired
depth
super
Prior art date
Legal status
Active
Application number
CN201910066697.5A
Other languages
Chinese (zh)
Other versions
CN109903322A (en)
Inventor
刘慧
朱晟辉
沈跃
Current Assignee
Jiangsu University
Original Assignee
Jiangsu University
Priority date
Filing date
Publication date
Application filed by Jiangsu University
Priority to CN201910066697.5A
Publication of CN109903322A
Application granted
Publication of CN109903322B
Legal status: Active

Classifications

    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30 Assessment of water resources

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a depth camera depth image restoration method. The acquired depth image and color image are first registered, and the color image pixels are divided into pixel blocks by the SLIC super-pixel segmentation algorithm. Each pixel block in the depth image is then scanned in turn: if a pixel block contains small holes, they are repaired with the fast marching method; if it contains a large hole, an adjacent similar pixel block is selected, it is judged whether this block and the super pixel block to be repaired both belong to the foreground or both to the background, the texture block with the highest similarity is selected, and the super pixel block to be repaired is filled; if the pixel blocks adjacent to a large hole contain no valid pixel values, the repair is temporarily skipped. Finally, the method checks whether invalid pixels remain in the depth image: if so, the scanning and repair continue until all pixels are repaired; if not, each pixel block that contained a large hole is refined by joint bilateral filtering guided by the color image. The method improves repair speed while preserving the repair quality.

Description

Depth camera depth image restoration method
Technical Field
The invention relates to an algorithm for repairing hole pixels in depth images from a RealSense depth camera, and belongs to the technical field of image processing.
Background
The RealSense depth camera was introduced by Intel at the 2014 International Consumer Electronics Show for human-computer interaction; it was the first device worldwide to integrate a 3D depth module with a 2D lens module, giving devices a visual sense of depth similar to the human eye. The invention uses a RealSense D435 depth camera, whose depth image is formed by the structured-light principle: the depth sensor captures the change of an infrared signal on the object surface and computes the depth value from it. The camera is widely used because it is little affected by illumination and offers a high performance-to-cost ratio.
However, owing to feature-matching errors and the absorption of reflected light, the imaging process produces holes, blurred depth edges and similar defects in the depth image, which hinder subsequent experiments; the depth image therefore has to be repaired first. Many researchers have studied depth image restoration. For depth-edge blurring, boundary-preserving filtering and up-sampling-based repair algorithms have been proposed, but they cannot effectively correct the errors specific to RealSense cameras. For holes in the depth image, inter-frame motion compensation combined with median filtering has been proposed to fill black holes, but it does not consider boundary alignment and produces wrong depth values for large-area holes; an improved fast marching method that uses the color image as guidance has also been proposed, but it cannot eliminate interfering depth values around boundaries, and artifacts appear at the edges of repaired objects. Research on depth image restoration at home and abroad is thus fairly mature, yet each existing method still has shortcomings.
Disclosure of Invention
The invention aims to repair holes, blurred depth edges and similar defects in the depth image of a RealSense depth camera. It provides a method that repairs the depth image by super-pixel segmentation and refilling, overcoming the shortcomings of prior schemes, effectively reducing the repair time and improving repair efficiency.
The invention adopts the technical scheme that: a depth camera depth image restoration algorithm comprises the following steps:
Step 1, acquiring a depth image and a color image of the target scene in real time through a depth camera;
Step 2, registering the coordinates of the depth image and the color image;
Step 3, dividing the color image pixels into pixel blocks with the SLIC super-pixel segmentation algorithm and mapping the pixel block coordinates into the depth image;
Step 4, scanning each pixel block in the depth image in turn: if a pixel block contains small holes, repairing them with the fast marching method; if it contains a large hole, selecting an adjacent similar pixel block, judging whether this block and the super pixel block to be repaired both belong to the foreground or both to the background, selecting the texture block with the highest similarity and filling the super pixel block to be repaired; if the pixel blocks adjacent to the large hole contain no valid pixel values, temporarily skipping the repair;
Step 5, checking whether invalid pixels remain in the depth image: if so, continuing with step 4 until all pixels are repaired; if not, repairing each pixel block that contained a large hole with joint bilateral filtering guided by the color image. The method improves repair speed while preserving the repair quality.
Further, the step 3 specifically includes:
when super-pixel segmentation is carried out, the lab color space is used, the maximum number of finally segmented super-pixels is set to 400, the maximum possible value of the lab-space distance is set to 5, and the number of iterations is set to 4; the finally segmented pixels are stored in the form of pixel blocks.
Further, the specific process of the step 4 is as follows:
step 4.1, repairing small holes based on a fast traveling method: and calculating the position weight and gradient value of the neighborhood pixel point and the central point space by utilizing the effective pixel points in the neighborhood of the boundary of the small hole area in the depth image, then obtaining an ineffective pixel point value to be filled, repairing the boundary of the small hole each time by an algorithm, repairing the boundary layer by layer inwards, and finally finishing the repairing.
Step 4.2, large-hole repair based on pixel-block filling: if a super pixel block contains no valid pixels, first judge whether its region is foreground or background; then select an adjacent super pixel block of the same class (foreground or background) that contains no invalid pixels, take its texture in the color image and divide it into small pixel blocks; match these small blocks against the texture of the small block around the highest-priority pixel to be repaired on the boundary of the super pixel block to be repaired; the small block with the highest texture similarity is regarded as an adjacent surface of the same object as the block to be repaired, so their depth values should in theory be close, and its corresponding depth values are then used as the depth values of the small block around the highest-priority pixel to be repaired.
Further, the specific method for calculating whether the super pixel block is foreground or background is as follows:
firstly, the neighborhood of the super pixel block is defined as the super pixel blocks containing valid pixels within a range θ around the pixel p to be repaired; the average depth value of each of these blocks is computed, the maximum and minimum average depth values are taken as D_BG and D_FG respectively, and the corresponding average pixel values are computed as C_BG and C_FG; the probabilities E_BG and E_FG that a neighborhood super pixel block belongs to the background and to the foreground are then defined as:
[Expressions for E_BG and E_FG: given only as equation images in the original publication.]
wherein: c represents the average pixel value of a super pixel block; d represents the average depth value of a super pixel block; alpha and beta are custom constants;
and finally, selecting the larger probability as the probability of the foreground or the background of the super pixel block.
Further, the specific method for matching the textures of the small pixel blocks around the pixel point to be repaired with the highest priority in the boundary of the super pixel block to be repaired is as follows:
firstly, the depth priority P_d of a pixel p to be repaired on the boundary is computed:
[Expression for P_d: given only as an equation image in the original publication.]
wherein: N_p denotes a neighboring pixel block of the pixel p to be repaired; D(p) denotes the depth value of pixel p; ε(p) indicates whether the pixel p to be repaired is an invalid pixel, with ε(p) = 0 if it is invalid and ε(p) = 1 otherwise.
Finally, the pixel p with the largest depth priority P_d is selected as the first pixel to be repaired.
Further, the specific method for using the depth value corresponding to the small pixel block with the highest texture similarity as the depth value of the small pixel block around the pixel point to be repaired with the highest interpolation repair priority is as follows:
firstly, the size S of the small pixel blocks is chosen; each small pixel block is then compared for similarity with the block N_p adjacent to the pixel p to be repaired in the super pixel block to be repaired, and the small pixel block with the highest similarity is selected for filling. The similarity χ is computed as:
[Expression for χ(S): given only as an equation image in the original publication.]
wherein: C_S(p) and D_S(p) respectively denote the color value and the depth value of a pixel p in the small pixel block S; δ is a custom constant;
the small pixel block with the smallest χ(S) is then selected as the neighboring block used to repair the pixel p. The super pixel block to be repaired is scanned for remaining invalid pixels: if none remain, the surface is fully repaired; if some remain, the pixel to be repaired with the highest priority is recomputed and repaired.
Further, in the joint bilateral filtering of step 5, the depth image is used as the input image and the color image as the guide image, so that the weight of each pixel in the depth image depends not only on the spatial distance but also on the pixel value.
The beneficial effects of the invention are as follows:
the conventional depth image restoration algorithm generally has the problems that a large cavity cannot be restored, boundaries are not aligned and the like, and the use of a subsequent depth image is affected. It is proposed herein to repair depth images by means of super-pixel segmentation refill. The super pixel blocks to be repaired are filled by utilizing similar texture blocks in the adjacent super pixel blocks, so that the repairing efficiency can be ensured, the problem that large holes are difficult to repair is solved, the repaired image is repaired again by adopting the combined bilateral filtering, the problems of blurring edges and unaligned boundaries can be solved, meanwhile, only the super pixel blocks with the large holes are filtered, the scanning time of bilateral filtering is shortened, and the repairing efficiency is improved. As shown in fig. 2, the method for segmenting the color image through the super pixels fills the holes by using the adjacent similar texture blocks and finally repairing the depth image by combining the bilateral filtering not only can effectively process the holes and the edge blurring phenomenon of the Realsense depth camera when the depth data are acquired, but also can ensure the instantaneity of processing the image, and provides beneficial help for the development and the use of the Realsense depth camera.
Drawings
The invention is described in further detail below with reference to the attached drawings and detailed description:
fig. 1 is a flow chart of depth image restoration.
Fig. 2 shows the restoration result of the algorithm of the present invention: (a) the image before repair; (b) the repaired image.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
FIG. 1 is the flow chart of depth image restoration: SLIC super-pixel segmentation divides the color image, and the coordinates of each super pixel block are mapped into the depth image; small holes are repaired with the fast marching method and large holes with similar texture blocks from adjacent pixel blocks; edge blurring and boundary misalignment are repaired with joint bilateral filtering.
Step 1: and acquiring a depth image and a color image in the target scene in real time through a Realsense depth camera.
Step 2: the coordinates of the depth image and the color image are registered.
Step 3: the color image pixel point is divided into pixel blocks by a SLIC super-pixel segmentation algorithm, and pixel block coordinates are mapped into a depth image.
The SLIC super-pixel segmentation algorithm mainly comprises the following steps:
1) The seed point is initialized. Setting the number of super pixels and uniformly distributing seed points in the image.
2) Seed points are screened. And calculating gradient values of all pixel points in the neighborhood, and moving the seed point to the place with the minimum gradient in the neighborhood.
3) Class labels are assigned. Class labels are assigned to each pixel point within a neighborhood around each seed point.
4) Color, spatial distance are measured. And for each searched pixel point, calculating the color distance and the space distance between the pixel point and the seed point, and taking the seed point corresponding to the minimum value as the clustering center of the pixel point.
5) And (5) iterative optimization.
When super-pixel segmentation is carried out, the lab color space is used, the maximum number of finally segmented super-pixels is set to 400, the maximum possible value of the lab-space distance is set to 5, and the number of iterations is set to 4; the finally segmented pixels are stored in the form of super pixel blocks.
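As an illustration of step 3, the sketch below uses the SLIC implementation of opencv-contrib (cv2.ximgproc) with the parameters stated above (at most 400 super-pixels, lab-distance ruler 5, 4 iterations); the helper name segment_superpixels and the way blocks are stored are assumptions, not the patent's own code.

```python
import cv2
import numpy as np

def segment_superpixels(color_bgr, n_superpixels=400, ruler=5.0, iters=4):
    """SLIC over the color image in Lab space; returns the label map and,
    for each super pixel block, the coordinates of its pixels (which map
    one-to-one into the registered depth image)."""
    lab = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2LAB)
    h, w = lab.shape[:2]
    region_size = int(np.sqrt(h * w / n_superpixels))   # grid step for <= 400 blocks
    slic = cv2.ximgproc.createSuperpixelSLIC(
        lab, algorithm=cv2.ximgproc.SLIC, region_size=region_size, ruler=ruler)
    slic.iterate(iters)
    labels = slic.getLabels()                            # per-pixel block index
    blocks = [np.argwhere(labels == k)
              for k in range(slic.getNumberOfSuperpixels())]
    return labels, blocks
```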
Step 4: and scanning each pixel block in the depth image in sequence, if a small hole exists in the pixel block, repairing by adopting a fast travelling method, if a large hole exists in the pixel block, selecting an adjacent similar pixel block, judging whether the adjacent similar pixel block and the super pixel block to be repaired are the foreground or the background, if the adjacent similar pixel block and the super pixel block to be repaired are the background or the foreground, selecting a texture block with the highest similarity, performing interpolation calculation on the super pixel block to be repaired, filling pixel points, and if no effective pixel value exists in the adjacent pixel block of the large hole, temporarily not repairing.
(1) Small hole repair based on fast marching method:
Using the valid pixels p* in the neighborhood N(p) of the small-hole boundary in the depth image, the spatial position weight w(p, p*) and the gradient ∇C(p*) between each neighborhood pixel and the center point are computed, and the pixel value C_p of the pixel p to be repaired is then obtained. Repair starts from the boundary of the small hole each time, proceeds inward layer by layer, and finally the repair is complete. The specific calculation formula is as follows:

C_p = ( Σ_{p* ∈ N(p)} w(p, p*) [ C(p*) + ∇C(p*) · (p − p*) ] ) / ( Σ_{p* ∈ N(p)} w(p, p*) )
wherein: the spatial position weight w(p, p*) between the neighborhood pixel and the center point is defined as the product of the direction parameter dir(p, p*), the geometric distance parameter dst(p, p*) and the level set parameter lev(p, p*). The specific calculation formula is as follows:
w(p,p*) = dir(p,p*) · dst(p,p*) · lev(p,p*)
wherein: the direction parameter dir(p, p*) concentrates the large weights on pixels close to the normal direction; the geometric distance parameter dst(p, p*) concentrates the large weights on pixels geometrically close to the pixel to be repaired; the level set parameter lev(p, p*) concentrates the large weights on pixels close to the contour on which the pixel to be repaired lies. The specific calculation formulas are as follows:
dir(p, p*) = ( (p − p*) / ||p − p*|| ) · N(p)
dst(p, p*) = d0^2 / ||p − p*||^2
lev(p, p*) = T0 / ( 1 + |T(p) − T(p*)| )
wherein: t represents the distance value between the pixel point and the initial repair boundary, and satisfies the condition in a small cavity
Figure BDA0001955935570000063
And T of all pixel points on the boundary of the small hole is 0; d, d 0 、T 0 Is a custom constant.
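This small-hole step is the fast marching (Telea) inpainting that OpenCV exposes as cv2.inpaint with the INPAINT_TELEA flag, so a rough stand-in for step 4.1 can be sketched as follows. Scaling the 16-bit depth map to 8 bits for cv2.inpaint and the inpainting radius of 3 are assumptions not stated in the patent.

```python
import cv2
import numpy as np

def repair_small_holes(depth_u16, hole_mask_u8, radius=3):
    """Fill small holes with OpenCV's fast-marching (Telea) inpainting.

    cv2.inpaint expects an 8-bit image, so the 16-bit depth map is scaled
    down and back up here (a simplification not stated in the patent).
    """
    scale = 255.0 / max(int(depth_u16.max()), 1)
    depth_u8 = (depth_u16 * scale).astype(np.uint8)
    filled_u8 = cv2.inpaint(depth_u8, hole_mask_u8, radius, cv2.INPAINT_TELEA)
    filled = (filled_u8 / scale).astype(np.uint16)
    # keep original valid measurements, use inpainted values only in the holes
    return np.where(hole_mask_u8 > 0, filled, depth_u16)
```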
(2) Large hole repairing based on pixel block filling method:
1) Categorize the neighboring super pixel blocks. First judge whether the region of the block to be repaired is foreground or background, then select the adjacent super pixel blocks of the same class (foreground or background) that contain no invalid pixels as similar pixel blocks. Whether a super pixel block is foreground or background is computed as follows:
firstly, the neighborhood of the super pixel block is defined as the super pixel blocks containing valid pixels within a range θ around the pixel p to be repaired; the average depth value of each of these blocks is computed, the maximum and minimum average depth values are taken as D_BG and D_FG respectively, and the corresponding average pixel values are computed as C_BG and C_FG; the probabilities E_BG and E_FG that a neighborhood super pixel block belongs to the background and to the foreground are then defined as:
[Expressions for E_BG and E_FG: given only as equation images in the original publication.]
wherein: c represents the average pixel value of a super pixel block; d represents the average depth value of a super pixel block; alpha and beta are custom constants.
And finally, selecting the larger probability as the probability of the foreground or the background of the super pixel block.
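Because the exact expressions for E_BG and E_FG are available only as images, the sketch below assumes an exponential similarity form; the function classify_block and the constants alpha and beta are purely illustrative, not the patent's formula.

```python
import numpy as np

def classify_block(block_color_mean, block_depth_mean,
                   C_BG, D_BG, C_FG, D_FG, alpha=10.0, beta=100.0):
    """Decide whether a neighbourhood super pixel block is background or
    foreground, using an assumed exponential-similarity stand-in for the
    patent's E_BG / E_FG expressions."""
    def score(c_ref, d_ref):
        # closer colour and depth means a higher (more probable) score
        return np.exp(-abs(block_color_mean - c_ref) / alpha
                      - abs(block_depth_mean - d_ref) / beta)

    E_BG = score(C_BG, D_BG)
    E_FG = score(C_FG, D_FG)
    return ("background", E_BG) if E_BG >= E_FG else ("foreground", E_FG)
```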
2) Determine the repair priority. Compute the priority of every pixel to be repaired on the boundary of the super pixel block to be repaired and select the pixel with the highest priority to repair first. The depth priority P_d of a boundary pixel p to be repaired is computed as follows:
[Expression for P_d: given only as an equation image in the original publication.]
wherein: N_p denotes a neighboring pixel block of the pixel p to be repaired; D(p) denotes the depth value of pixel p; ε(p) indicates whether the pixel p to be repaired is an invalid pixel, with ε(p) = 0 if it is invalid and ε(p) = 1 otherwise.
Finally, the pixel p with the largest depth priority P_d is selected as the first pixel to be repaired.
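The expression for P_d is likewise only available as an image. The sketch below guesses that it averages the valid depth values in a window N_p around p, which is consistent with the stated symbols (D, ε, N_p) but not confirmed by the source; the window half-size and function name are illustrative.

```python
import numpy as np

def depth_priority(depth, valid, p, half=4):
    """Assumed stand-in for P_d: mean of the valid depth values in a
    (2*half+1)^2 window N_p around pixel p, invalid pixels contributing 0."""
    y, x = p
    win_d = depth[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
    win_v = valid[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
    return float((win_d * win_v).sum()) / win_d.size

# the boundary pixel with the largest P_d is repaired first, e.g.:
# p_first = max(boundary_pixels, key=lambda p: depth_priority(depth, valid, p))
```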
3) Fill with similar texture blocks. Take the texture of the similar super pixel block in the color image and divide it into small pixel blocks; match them against the texture of the small block around the highest-priority pixel to be repaired on the boundary of the super pixel block to be repaired; the small block with the highest texture similarity is regarded as an adjacent surface of the same object as the block to be repaired, so their depth values should be close, and its corresponding depth values are used as the depth values of the small block around the highest-priority pixel to be repaired. The specific method for filling the hole with the most similar small pixel block from the adjacent super pixel blocks is as follows:
firstly, the size S of the small pixel blocks is chosen; each small pixel block is then compared for similarity with the block N_p adjacent to the pixel p to be repaired in the super pixel block to be repaired, and the small pixel block with the highest similarity is selected for filling. The similarity χ is computed as:
[Expression for χ(S): given only as an equation image in the original publication.]
wherein: C_S(p) and D_S(p) respectively denote the color value and the depth value of a pixel p in the small pixel block S; δ is a custom constant.
The small pixel block with the smallest χ(S) is then selected as the neighboring block used to repair the pixel p. The super pixel block to be repaired is scanned for remaining invalid pixels: if none remain, the surface is fully repaired; if some remain, the pixel to be repaired with the highest priority is recomputed and repaired.
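A possible reading of this matching step, assuming χ(S) combines absolute color and depth differences weighted by δ (the published expression is only an image), is sketched below; patch_dissimilarity and delta are illustrative names, not the patent's.

```python
import numpy as np

def patch_dissimilarity(cand_color, cand_depth, ref_color, ref_depth, delta=0.05):
    """Assumed stand-in for chi(S): sum of absolute colour differences plus a
    delta-weighted sum of absolute depth differences between a candidate small
    block S and the reference block N_p. Smaller chi means a better match."""
    color_term = np.abs(cand_color.astype(np.float32)
                        - ref_color.astype(np.float32)).sum()
    depth_term = np.abs(cand_depth.astype(np.float32)
                        - ref_depth.astype(np.float32)).sum()
    return float(color_term + delta * depth_term)

# filling: copy the depth values of the best (smallest-chi) candidate block
# into the invalid pixels of the block around the highest-priority pixel.
```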
Step 5: and (3) checking whether invalid pixel points still exist in the depth image, if so, continuing to execute the step (4) until all the pixel points are repaired, and if not, repairing each pixel block with a large hole by combining the combined bilateral filtering with the color image.
The specific calculation formula of the joint bilateral filtering algorithm is as follows:
D'(p) = (1 / k_p) Σ_{q ∈ Ω} f(||p − q||) · g(||Ĩ(p) − Ĩ(q)||) · C(q)

wherein: f is the spatial-domain weight of the filter, which concentrates large weights on pixels at small spatial distances; g is the range weight of the filter, which gives more weight to pixels with similar pixel values; C is the pixel value of the depth image; Ĩ is the pixel value of the guiding color image; k_p is a normalization constant.
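Step 5 maps naturally onto cv2.ximgproc.jointBilateralFilter from opencv-contrib, with the color image as the guide. In the sketch below, restricting the filter output to the large-hole blocks via a mask and the parameters d, sigma_color and sigma_space are assumptions made for illustration.

```python
import cv2
import numpy as np

def refine_large_hole_blocks(depth_u16, color_bgr, large_hole_mask,
                             d=9, sigma_color=25, sigma_space=7):
    """Joint bilateral filtering of the depth image guided by the color
    image, applied only where large holes were filled."""
    guide = color_bgr.astype(np.float32)      # guide and src share bit depth
    depth_f = depth_u16.astype(np.float32)
    guided = cv2.ximgproc.jointBilateralFilter(guide, depth_f,
                                               d, sigma_color, sigma_space)
    out = depth_f.copy()
    out[large_hole_mask > 0] = guided[large_hole_mask > 0]   # repaired blocks only
    return out.astype(np.uint16)
```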
In summary, the invention provides a depth camera depth image restoration method comprising the following steps: 1. acquire a depth image and a color image of the target scene in real time with a RealSense depth camera; 2. register the depth image and the color image; 3. divide the color image pixels into pixel blocks with the SLIC super-pixel segmentation algorithm and map the pixel block coordinates into the depth image; 4. scan each pixel block in the depth image in turn: repair small holes with the fast marching method; for a large hole, select an adjacent similar pixel block, judge whether it and the super pixel block to be repaired both belong to the foreground or both to the background, and if so select the texture block with the highest similarity and fill the super pixel block to be repaired, while temporarily skipping blocks whose neighbors contain no valid pixel values; 5. check whether invalid pixels remain in the depth image: if so, continue with step 4 until all pixels are repaired; if not, repair each pixel block that contained a large hole with joint bilateral filtering guided by the color image. The method improves repair speed while preserving the repair quality.

Claims (5)

1. A depth camera depth image restoration method, characterized by comprising the following steps:
step 1, acquiring a depth image and a color image of a target scene in real time through a depth camera; step 2, registering the coordinates of the depth image and the color image; step 3, dividing the color image pixels into pixel blocks with the SLIC super-pixel segmentation algorithm and mapping the pixel block coordinates into the depth image; step 4, scanning each pixel block in the depth image in turn: if a pixel block contains small holes, repairing them with the fast marching method; if it contains a large hole, selecting an adjacent similar pixel block, judging whether this block and the super pixel block to be repaired both belong to the foreground or both to the background, selecting the texture block with the highest similarity and filling the super pixel block to be repaired, and temporarily skipping the repair if the pixel blocks adjacent to the large hole contain no valid pixel values; step 5, checking whether invalid pixels remain in the depth image: if so, continuing with step 4 until all pixels are repaired; if not, repairing each pixel block that contained a large hole with joint bilateral filtering guided by the color image, so that the repair speed is improved while the repair effect is ensured;
The specific process of the step 4 is as follows:
step 4.1, small-hole repair based on the fast marching method: using the valid pixels in the neighborhood of the small-hole boundary in the depth image, computing the spatial position weight and the gradient between each neighborhood pixel and the center point, then obtaining the value of the invalid pixel to be filled; the algorithm repairs from the hole boundary inward, layer by layer, until the repair is complete;
step 4.2, large-hole repair based on pixel-block filling: if a super pixel block contains no valid pixels, first judging whether its region is foreground or background; then selecting an adjacent super pixel block of the same class (foreground or background) that contains no invalid pixels, taking its texture in the color image and dividing it into small pixel blocks; matching these small blocks against the texture of the small block around the highest-priority pixel to be repaired on the boundary of the super pixel block to be repaired; the small block with the highest texture similarity is regarded as an adjacent surface of the same object as the block to be repaired, so their depth values should in theory be close, and its corresponding depth values are then used as the depth values of the small block around the highest-priority pixel to be repaired;
the specific method for calculating whether the super pixel block is foreground or background is as follows:
firstly, the neighborhood of the super pixel block is defined as the super pixel blocks containing valid pixels within a range θ around the pixel p to be repaired; the average depth value of each of these blocks is computed, the maximum and minimum average depth values are taken as D_BG and D_FG respectively, and the corresponding average pixel values are computed as C_BG and C_FG; the probabilities E_BG and E_FG that a neighborhood super pixel block belongs to the background and to the foreground are then defined as:
[Expressions for E_BG and E_FG: given only as equation images in the original publication.]
wherein: c represents the average pixel value of a super pixel block; d represents the average depth value of a super pixel block; alpha and beta are custom constants;
and finally, selecting the larger probability as the probability of the foreground or the background of the super pixel block.
2. The depth camera depth image restoration method according to claim 1, wherein: the step 3 specifically includes:
when super-pixel segmentation is carried out, the lab color space is used, the maximum number of finally segmented super-pixels is set to 400, the maximum possible value of the lab-space distance is set to 5, and the number of iterations is set to 4; the finally segmented pixels are stored in the form of pixel blocks.
3. The depth camera depth image restoration method according to claim 1, wherein: the specific method for matching the textures of the small pixel blocks around the pixel point to be repaired with the highest priority in the boundary of the super pixel block to be repaired comprises the following steps:
firstly, the depth priority P_d of a pixel p to be repaired on the boundary is computed:
[Expression for P_d: given only as an equation image in the original publication.]
wherein: N_p denotes a neighboring pixel block of the pixel p to be repaired; D(p) denotes the depth value of pixel p; ε(p) indicates whether the pixel p to be repaired is an invalid pixel, with ε(p) = 0 if it is invalid and ε(p) = 1 otherwise;
finally, the pixel p with the largest depth priority P_d is selected as the first pixel to be repaired.
4. The depth camera depth image restoration method according to claim 1, wherein: the specific method for using the depth value corresponding to the small pixel block with the highest texture similarity as the depth value of the small pixel block around the pixel point to be repaired with the highest interpolation repair priority comprises the following steps:
firstly, the size S of the small pixel blocks is chosen; each small pixel block is then compared for similarity with the block N_p adjacent to the pixel p to be repaired in the super pixel block to be repaired, and the small pixel block with the highest similarity is selected for filling. The similarity χ is computed as:
[Expression for χ(S): given only as an equation image in the original publication.]
wherein: C_S(p) and D_S(p) respectively denote the color value and the depth value of a pixel p in the small pixel block S; δ is a custom constant;
the small pixel block with the smallest χ(S) is then selected as the neighboring block used to repair the pixel p. The super pixel block to be repaired is scanned for remaining invalid pixels: if none remain, the surface is fully repaired; if some remain, the pixel to be repaired with the highest priority is recomputed and repaired.
5. The depth camera depth image restoration method according to claim 1, wherein: in the joint bilateral filtering of step 5, the depth image is used as the input image and the color image as the guide image, so that the weight of each pixel in the depth image depends not only on the spatial distance but also on the pixel value.
CN201910066697.5A 2019-01-24 2019-01-24 Depth camera depth image restoration method Active CN109903322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910066697.5A CN109903322B (en) 2019-01-24 2019-01-24 Depth camera depth image restoration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910066697.5A CN109903322B (en) 2019-01-24 2019-01-24 Depth camera depth image restoration method

Publications (2)

Publication Number Publication Date
CN109903322A CN109903322A (en) 2019-06-18
CN109903322B true CN109903322B (en) 2023-06-09

Family

ID=66944089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910066697.5A Active CN109903322B (en) 2019-01-24 2019-01-24 Depth camera depth image restoration method

Country Status (1)

Country Link
CN (1) CN109903322B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110827209A (en) * 2019-09-26 2020-02-21 西安交通大学 Self-adaptive depth image restoration method combining color and depth information
CN110751605B (en) * 2019-10-16 2022-12-23 深圳开立生物医疗科技股份有限公司 Image processing method and device, electronic equipment and readable storage medium
CN111598817B (en) * 2020-04-26 2023-07-18 凌云光技术股份有限公司 Filling method and system for missing pixels of depth image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310420A (en) * 2013-06-19 2013-09-18 武汉大学 Method and system for repairing color image holes on basis of texture and geometrical similarities
CN107240073A (en) * 2017-05-12 2017-10-10 杭州电子科技大学 A kind of 3 d video images restorative procedure merged based on gradient with clustering

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310420A (en) * 2013-06-19 2013-09-18 武汉大学 Method and system for repairing color image holes on basis of texture and geometrical similarities
CN107240073A (en) * 2017-05-12 2017-10-10 杭州电子科技大学 A kind of 3 d video images restorative procedure merged based on gradient with clustering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Kinect plant depth image restoration based on K-means and nearest-neighbor regression algorithms; Shen Yue et al.; Transactions of the Chinese Society of Agricultural Engineering (《农业工程学报》); 2016-10-31; Vol. 32, No. 19; pp. 188-194 *

Also Published As

Publication number Publication date
CN109903322A (en) 2019-06-18

Similar Documents

Publication Publication Date Title
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN108682026B (en) Binocular vision stereo matching method based on multi-matching element fusion
CN108596975B (en) Stereo matching algorithm for weak texture region
CN109903322B (en) Depth camera depth image restoration method
US9412040B2 (en) Method for extracting planes from 3D point cloud sensor data
TWI489418B (en) Parallax Estimation Depth Generation
CN106780590A (en) The acquisition methods and system of a kind of depth map
CN107578430B (en) Stereo matching method based on self-adaptive weight and local entropy
CN111899295B (en) Monocular scene depth prediction method based on deep learning
CN110322572B (en) Binocular vision-based underwater culvert and tunnel inner wall three-dimensional information recovery method
CN107622480B (en) Kinect depth image enhancement method
CN110544294B (en) Dense three-dimensional reconstruction method based on panoramic video
CN108038887B (en) Binocular RGB-D camera based depth contour estimation method
CN112991420A (en) Stereo matching feature extraction and post-processing method for disparity map
CN110544300B (en) Method for automatically generating three-dimensional model based on two-dimensional hand-drawn image characteristics
CN113034568A (en) Machine vision depth estimation method, device and system
CN109859249B (en) Scene flow estimation method based on automatic layering in RGBD sequence
CN104537668B (en) A kind of quick parallax image computational methods and device
CN112435267B (en) Disparity map calculation method for high-resolution urban satellite stereo image
CN114782628A (en) Indoor real-time three-dimensional reconstruction method based on depth camera
CN111223059A (en) Robust depth map structure reconstruction and denoising method based on guide filter
Lo et al. Depth map super-resolution via Markov random fields without texture-copying artifacts
CN114549669B (en) Color three-dimensional point cloud acquisition method based on image fusion technology
CN104778673B (en) A kind of improved gauss hybrid models depth image enhancement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant