CN113223070A - Depth image enhancement processing method and device - Google Patents
- Publication number
- CN113223070A (application CN202110521554.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- module
- depth
- electrically connected
- edge
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/181—Segmentation; Edge detection involving edge growing; involving edge linking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Abstract
The invention discloses a depth image enhancement processing method and device in the technical field of depth image enhancement processing. The output of the image conversion module is electrically connected to the plane image separation module, the output of the plane image separation module to the edge extraction module, the output of the edge extraction module to the mask image generation module, the output of the mask image generation module to the filling and repairing module, and the output of the filling and repairing module to the dual filtering module. The output of the image selection processing module is electrically connected to the deep learning module, the output of the deep learning module to the pixel coordinate and depth value extraction module, the output of the pixel coordinate and depth value extraction module to the three-dimensional coordinate calculation module, and the output of the three-dimensional coordinate calculation module to the image enhancement module. The invention can effectively improve the resolution of a depth image and ensure the edge definition of the enhanced target in different scenes.
Description
Technical Field
The invention relates to the technical field of depth image enhancement processing devices, in particular to a depth image enhancement processing method and device.
Background
Depth information plays an important role in many computer vision applications, such as AR, scene reconstruction and 3D-television auxiliary sensing. By overlaying rich text and multimedia information on reality, AR technology can make the world richer, more interesting and more efficient; an AR scene carries three-dimensional information and achieves a seamless fit with the real scene by overlaying virtual objects in real time in real three-dimensional space. However, the detection efficiency and accuracy of the image recognition technology that existing AR relies on still cannot meet the requirements of most application scenarios, which restricts AR to certain specific scenes and greatly limits its use. Moreover, the image data used by AR must be preprocessed before application: hole pixels must be filled and image noise smoothed. Most existing schemes rely on bilateral filtering guided by the RGB image and its corresponding edge information; they rarely consider scenes in which a depth discontinuity is invisible in RGB, and so cannot effectively resolve edge blur in such scenes. Meanwhile, over-enhancement easily causes blocky discontinuities in the image, leaving the problem of blurred target edges after depth image enhancement unresolved.
Disclosure of Invention
The present invention is directed to a depth image enhancement method and apparatus, so as to solve the problems in the background art.
In order to achieve the above purpose, the invention provides the following technical scheme: a depth image enhancement processing device comprises an image acquisition module, wherein the output of the image acquisition module is electrically connected to an image conversion module, the output of the image conversion module is electrically connected to a plane image separation module, the output of the plane image separation module is electrically connected to an edge extraction module, the output of the edge extraction module is electrically connected to a mask image generation module, the output of the mask image generation module is electrically connected to a filling and repairing module, the output of the filling and repairing module is electrically connected to a dual filtering module, the output of the dual filtering module is electrically connected to an image selection processing module, the output of the image selection processing module is electrically connected to a deep learning module, the output of the deep learning module is electrically connected to a pixel coordinate and depth value extraction module, the output of the pixel coordinate and depth value extraction module is electrically connected to a three-dimensional coordinate calculation module, and the output of the three-dimensional coordinate calculation module is electrically connected to the image enhancement module.
Preferably, the deep learning module includes a training partitioning module, the training partitioning module is electrically connected to the weight parameter adjusting module, and the weight parameter adjusting module is electrically connected to the virtual object overlaying module.
The depth image enhancement processing method comprises the following specific steps:
S1: the image acquisition module comprises a depth camera. A depth image is acquired through the depth camera, and the original image to be enhanced is enhanced with a traditional enhancement algorithm to obtain an original/enhanced image pair; the camera parameters of the depth camera are acquired at the same time. The image conversion module then performs format conversion on the acquired original depth image to obtain a depth image in the target format;
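As a rough illustration (not part of the patent disclosure, which does not specify the target format), the format-conversion step of S1 might map a raw 16-bit depth map onto an 8-bit target format; the working range `max_depth_mm` is an assumed parameter:

```python
import numpy as np

def convert_depth_format(raw_depth, max_depth_mm=4000):
    """Convert a raw 16-bit depth map (millimetres) into an 8-bit
    target-format image, clipping values to the camera's working range.
    max_depth_mm is illustrative, not taken from the patent."""
    clipped = np.clip(raw_depth.astype(np.float32), 0, max_depth_mm)
    return (clipped / max_depth_mm * 255).astype(np.uint8)

raw = np.array([[0, 2000], [4000, 5000]], dtype=np.uint16)
img = convert_depth_format(raw)  # out-of-range 5000 is clipped to 255
```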
S2: the plane image separation module separates a plane image from the depth image acquired by the depth camera. The edge extraction module performs edge extraction on the separated plane enhanced image to obtain first edge information, and on the original depth image acquired by the depth camera to obtain second edge information. The mask image generation module determines a target mask generation mode according to whether the region to be repaired in the target-format depth image lies at an edge of the image, and obtains the mask of the region to be repaired based on that mode; when the region to be repaired lies at the image edge, the mask is obtained by determining an inverse binary thresholding function for it and setting a color threshold range of the region to be repaired;
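The inverse-thresholding mask of S2 can be sketched as follows; the patent names an inverse binary thresholding function with a color threshold range but gives no concrete values, so the `low`/`high` bounds here are assumptions:

```python
import numpy as np

def inverse_binary_mask(depth_img, low, high):
    """Mark pixels whose value falls inside the threshold range
    [low, high] as hole pixels (255) and all others as valid (0),
    in the spirit of an inverse binary thresholding step."""
    return np.where((depth_img >= low) & (depth_img <= high), 255, 0).astype(np.uint8)

patch = np.array([[0, 40], [80, 200]], dtype=np.uint8)
mask = inverse_binary_mask(patch, 0, 50)  # low/high chosen for illustration
```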
S3: the filling and repairing module fills holes in the region to be repaired of the target-format depth image by combining the mask of the region with a fast marching algorithm, obtaining a repaired depth image. To remove image edge noise, the dual filtering module then filters the hole-repaired depth image twice in succession, yielding the depth image after image enhancement processing;
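The fast marching inpainting of S3 is typically done with Telea's algorithm (e.g. OpenCV's `cv2.inpaint` with `INPAINT_TELEA`); the sketch below is a simplified stand-in that propagates values from valid neighbours into the masked holes, to show the idea without an OpenCV dependency:

```python
import numpy as np

def fill_holes(depth, mask):
    """Iteratively fill masked hole pixels with the mean of their valid
    4-neighbours. A crude approximation of fast-marching inpainting:
    real fast marching orders pixels by distance to the hole boundary."""
    d = depth.astype(np.float32)
    holes = mask > 0
    while holes.any():
        progressed = False
        for y, x in zip(*np.nonzero(holes)):
            vals = []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < d.shape[0] and 0 <= nx < d.shape[1] and not holes[ny, nx]:
                    vals.append(d[ny, nx])
            if vals:
                d[y, x] = sum(vals) / len(vals)
                holes[y, x] = False
                progressed = True
        if not progressed:
            break  # no valid pixels anywhere to propagate from
    return d

depth = np.array([[4, 4, 4], [4, 0, 4], [4, 4, 4]], dtype=np.float32)
mask = (depth == 0).astype(np.uint8)   # the centre pixel is a hole
repaired = fill_holes(depth, mask)
```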
S4: the image selection processing module cuts the enhanced depth image into conveniently sized image blocks and stacks them into a number of image batches; the mean square error of each pair of cut image blocks is computed, and the batches whose error exceeds a threshold are selected as input for network training. The deep learning module then performs deep learning on the plane image through a pre-trained neural network model and determines the target in the plane image. Taking the original image as input and the enhanced image as target, a deep neural network capable of enhancing the original image is trained on the acquired sample images; the neural network comprises a plurality of network structures with corresponding weight parameters, and an edge information weighted value is determined from the first and second edge information;
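The block-cutting and MSE-based selection of S4 might look like this; the patch size and error threshold are not given by the patent and are chosen here purely for illustration:

```python
import numpy as np

def select_training_patches(original, enhanced, patch=2, threshold=1.0):
    """Cut paired images into patch-by-patch blocks, compute the mean
    square error of each original/enhanced pair, and keep only the
    positions whose error exceeds the threshold (candidates for
    network training). patch and threshold are illustrative."""
    selected = []
    h, w = original.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            a = original[y:y + patch, x:x + patch].astype(np.float32)
            b = enhanced[y:y + patch, x:x + patch].astype(np.float32)
            if float(np.mean((a - b) ** 2)) > threshold:
                selected.append((y, x))
    return selected

orig = np.zeros((4, 4), dtype=np.uint8)
enh = orig.copy()
enh[0, 0] = 10                       # only the top-left block differs
picked = select_training_patches(orig, enh)
```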
S5: the pixel coordinate and depth value extraction module extracts the pixel coordinates and corresponding depth values of each pixel of the target in the depth image, and fuses the first and second edge information to obtain the edge information weighted value of each pixel. The three-dimensional coordinate calculation module computes the three-dimensional coordinates of the target from the pixel coordinates and depth values, and the image enhancement module enhances the depth image according to the edge information weighted values.
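The three-dimensional coordinate calculation of S5 presumably uses the camera parameters acquired in S1; under the standard pinhole model (an assumption, since the patent does not state the formula), a pixel and its depth back-project as:

```python
def pixel_to_camera_xyz(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth value into camera-space
    3-D coordinates via the pinhole model. fx, fy are focal lengths in
    pixels; (cx, cy) is the principal point, both from the depth
    camera's intrinsic parameters."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Illustrative intrinsics; real values come from the camera calibration.
pt = pixel_to_camera_xyz(u=420, v=240, depth=2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```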
Compared with the prior art, the invention has the beneficial effects that:
1. the invention acquires a depth image through the image acquisition module; the image conversion module converts the acquired original depth image into a target-format depth image; the plane image separation module separates a plane image from the depth image acquired by the depth camera; the edge extraction module extracts edges from the separated plane enhanced image to obtain first edge information and from the original depth image to obtain second edge information; and the mask image generation module determines a target mask generation mode according to whether the region to be repaired in the target-format depth image lies at an edge of the image, obtaining the mask of the region to be repaired accordingly; when the region lies at the image edge, the mask is obtained by determining an inverse binary thresholding function for it and setting a color threshold range of the region to be repaired;
2. the filling and repairing module fills holes in the region to be repaired by combining its mask with a fast marching algorithm to obtain a repaired depth image, and the dual filtering module filters the hole-repaired image twice in succession to remove edge noise and obtain the depth image after image enhancement processing. Hole filling combined with median-filter noise smoothing visibly removes the holes and noise in the depth image, so the image enhancement effect is better;
3. the image selection processing module cuts the enhanced depth image into conveniently sized blocks, stacks them into a number of image batches, computes the mean square error of each pair of cut blocks, and feeds the batches whose error exceeds a threshold into the network for training. The deep learning module performs deep learning on the plane image through a pre-trained neural network model and determines the target in the plane image; taking the original image as input and the enhanced image as target, it trains a deep neural network capable of enhancing the original image on the acquired sample images. The neural network comprises a plurality of network structures with corresponding weight parameters, determines the edge information weighted value from the first and second edge information, and enhances the depth image accordingly. This solves the related-art problem of blurred target edges after depth image enhancement when the target's color or texture resembles the background: combining the edge information extracted from the two images as a weighted value effectively improves the resolution of the depth image and ensures the edge definition of the enhanced target in different scenes;
4. the pixel coordinate and depth value extraction module extracts the pixel coordinates and corresponding depth values of each pixel of the target in the depth image and fuses the first and second edge information into a per-pixel edge information weighted value; the three-dimensional coordinate calculation module computes the target's three-dimensional coordinates from the pixel coordinates and depth values; and the image enhancement module enhances the depth image according to the weighted values, thereby realizing augmented reality. AR is thus freed from the limitation of specific application scenes, the traditional AR technique is upgraded to an augmented reality method based on deep learning, and both the application scenes of augmented reality and the target detection capability of AR are greatly expanded.
Drawings
FIG. 1 is a block diagram of the working principle of the present invention;
FIG. 2 is a block diagram of the deep learning module of the present invention.
In the figure: 1. an image acquisition module; 2. an image conversion module; 3. a plane image separation module; 4. an edge extraction module; 5. a mask image generation module; 6. a filling and repairing module; 7. a dual filtering module; 8. an image selection processing module; 9. a deep learning module; 91. a training partitioning module; 92. a weight parameter adjusting module; 93. a virtual object superposition module; 10. a pixel coordinate and depth value extraction module; 11. a three-dimensional coordinate calculation module; 12. an image enhancement module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the present invention provides a technical solution: a depth image enhancement processing device comprises an image acquisition module 1, wherein the output of the image acquisition module 1 is electrically connected to an image conversion module 2, the output of the image conversion module 2 is electrically connected to a plane image separation module 3, the output of the plane image separation module 3 is electrically connected to an edge extraction module 4, the output of the edge extraction module 4 is electrically connected to a mask image generation module 5, the output of the mask image generation module 5 is electrically connected to a filling and repairing module 6, the output of the filling and repairing module 6 is electrically connected to a dual filtering module 7, the output of the dual filtering module 7 is electrically connected to an image selection processing module 8, the output of the image selection processing module 8 is electrically connected to a deep learning module 9, the output of the deep learning module 9 is electrically connected to a pixel coordinate and depth value extraction module 10, the output of the pixel coordinate and depth value extraction module 10 is electrically connected to a three-dimensional coordinate calculation module 11, and the three-dimensional coordinate calculation module 11 is electrically connected to the image enhancement module 12.
Referring to fig. 2, the deep learning module 9 includes a training partitioning module 91, which is electrically connected to a weight parameter adjusting module 92, which in turn is electrically connected to a virtual object overlaying module 93. The training partitioning module 91 constructs a neural network, performs deep learning training on the acquired sample images, and prunes the network structure according to the weight parameters to obtain a neural network model; the weight parameter adjusting module 92 adjusts the weight parameters of the model according to its accuracy; and the virtual object overlaying module 93 overlays virtual objects on the depth image according to the plane image.
the depth image enhancement processing method comprises the following specific steps:
S1: the image acquisition module 1 comprises a depth camera. A depth image is acquired through the depth camera, and the original image to be enhanced is enhanced with a traditional enhancement algorithm to obtain an original/enhanced image pair; the camera parameters of the depth camera are acquired at the same time. The image conversion module 2 then performs format conversion on the acquired original depth image to obtain a depth image in the target format;
S2: the plane image separation module 3 separates a plane image from the depth image acquired by the depth camera. The edge extraction module 4 performs edge extraction on the separated plane enhanced image to obtain first edge information, and on the original depth image acquired by the depth camera to obtain second edge information. The mask image generation module 5 determines a target mask generation mode according to whether the region to be repaired in the target-format depth image lies at an edge of the image, and obtains the mask of the region to be repaired based on that mode; when the region to be repaired lies at the image edge, the mask is obtained by determining an inverse binary thresholding function for it and setting a color threshold range of the region to be repaired;
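The patent does not name the edge operator used by module 4; a Sobel or Canny detector would be typical. As a minimal stand-in, a central-difference gradient magnitude captures the idea:

```python
import numpy as np

def edge_magnitude(img):
    """Gradient magnitude by central differences, a simplified stand-in
    for the edge extraction step (border rows/columns are left at 0)."""
    f = img.astype(np.float32)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, 1:-1] = f[:, 2:] - f[:, :-2]   # horizontal difference
    gy[1:-1, :] = f[2:, :] - f[:-2, :]   # vertical difference
    return np.hypot(gx, gy)

step = np.array([[0, 0, 9, 9]] * 3, dtype=np.uint8)  # a vertical step edge
edges = edge_magnitude(step)
```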
S3: the filling and repairing module 6 fills holes in the region to be repaired of the target-format depth image by combining the mask of the region with a fast marching algorithm, obtaining a repaired depth image. The dual filtering module 7 then filters the hole-repaired depth image twice in succession to remove image edge noise, yielding the depth image after image enhancement processing;
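The description elsewhere mentions median-filter noise smoothing, so module 7's two successive passes can be sketched as a repeated 3x3 median filter; the kernel size and the choice of median filtering for both passes are assumptions:

```python
import numpy as np

def median3x3(img):
    """3x3 median filter; border pixels are kept unchanged."""
    out = img.astype(np.float32).copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

def dual_filter(img):
    """Two filtering passes in succession, as the dual filtering
    module applies to the hole-repaired depth image."""
    return median3x3(median3x3(img))

noisy = np.full((3, 3), 5.0)
noisy[1, 1] = 90.0          # a single noise spike
clean = dual_filter(noisy)  # the spike is replaced by the local median
```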
S4: the image selection processing module 8 cuts the enhanced depth image into conveniently sized image blocks and stacks them into a number of image batches; the mean square error of each pair of cut image blocks is computed, and the batches whose error exceeds a threshold are selected as input for network training. The deep learning module 9 then performs deep learning on the plane image through a pre-trained neural network model and determines the target in the plane image. Taking the original image as input and the enhanced image as target, a deep neural network capable of enhancing the original image is trained on the acquired sample images; the neural network comprises a plurality of network structures with corresponding weight parameters, and an edge information weighted value is determined from the first and second edge information;
S5: the pixel coordinate and depth value extraction module 10 extracts the pixel coordinates and corresponding depth values of each pixel of the target in the depth image, and fuses the first and second edge information to obtain the edge information weighted value of each pixel; the three-dimensional coordinate calculation module 11 computes the three-dimensional coordinates of the target from the pixel coordinates and depth values, and the image enhancement module 12 enhances the depth image according to the edge information weighted values.
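The patent does not specify how the first and second edge information are fused into a weighted value; a simple possibility is a per-pixel convex combination, with the blending weight `alpha` an assumption of this sketch:

```python
import numpy as np

def fuse_edges(first_edges, second_edges, alpha=0.5):
    """Fuse the plane-image edge map (first edge information) and the
    raw-depth edge map (second edge information) into a per-pixel
    weighted value. alpha is an assumed blending weight; the patent
    leaves the fusion formula unspecified."""
    return alpha * first_edges.astype(np.float32) \
        + (1.0 - alpha) * second_edges.astype(np.float32)

e1 = np.array([[0.0, 8.0]])   # edge strength from the plane image
e2 = np.array([[4.0, 0.0]])   # edge strength from the raw depth image
w = fuse_edges(e1, e2)
```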
In summary, the invention acquires a depth image through the image acquisition module 1, and the image conversion module 2 converts the acquired original depth image into a target-format depth image. The plane image separation module 3 separates a plane image from the depth image acquired by the depth camera, and the edge extraction module 4 extracts edges from the separated plane enhanced image to obtain first edge information and from the original depth image to obtain second edge information. The mask image generation module 5 determines a target mask generation mode according to whether the region to be repaired lies at an image edge and obtains the mask of the region accordingly; when the region lies at the image edge, the mask is obtained by determining an inverse binary thresholding function and setting a color threshold range of the region to be repaired. The filling and repairing module 6 fills holes in the region to be repaired by combining its mask with a fast marching algorithm to obtain a repaired depth image, and the dual filtering module 7 filters the hole-repaired image twice in succession to remove edge noise and obtain the depth image after image enhancement processing; hole filling combined with median-filter noise smoothing visibly removes the holes and noise in the depth image, so the enhancement effect is better. The image selection processing module 8 cuts the enhanced depth image into conveniently sized blocks, stacks them into image batches, computes the mean square error of each pair of cut blocks, and feeds the batches whose error exceeds a threshold into the network for training; the deep learning module 9 performs deep learning on the plane image through a pre-trained neural network model, determines the target in the plane image, and, taking the original image as input and the enhanced image as target, trains a deep neural network capable of enhancing the original image on the acquired sample images. The edge information weighted value is determined from the first and second edge information, and the depth image is enhanced accordingly, solving the related-art problem of blurred target edges after enhancement when the target's color or texture resembles the background; combining the edge information extracted from the two images as a weighted value effectively improves the resolution of the depth image and ensures the edge definition of the enhanced target in different scenes. Finally, the pixel coordinate and depth value extraction module 10 extracts the pixel coordinates and corresponding depth values of each pixel of the target and fuses the two kinds of edge information into per-pixel weighted values, the three-dimensional coordinate calculation module 11 computes the target's three-dimensional coordinates from the pixel coordinates and depth values, and the image enhancement module 12 enhances the depth image according to the weighted values, thereby realizing augmented reality. The realization of AR is thus freed from the limitation of specific application scenes, the traditional AR technique is upgraded to an augmented reality method based on deep learning, the application scenes of augmented reality are greatly expanded, and the target detection capability of AR is enhanced.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (3)
1. A depth image enhancement processing apparatus comprising an image acquisition module (1), characterized in that: the image acquisition module (1) is electrically connected with the image conversion module (2), the image conversion module (2) is electrically connected with the plane image separation module (3), the plane image separation module (3) is electrically connected with the edge extraction module (4), the edge extraction module (4) is electrically connected with the mask image generation module (5), the mask image generation module (5) is electrically connected with the filling and repairing module (6), the filling and repairing module (6) is electrically connected with the dual filtering module (7), the dual filtering module (7) is electrically connected with the image selection processing module (8), the image selection processing module (8) is electrically connected with the deep learning module (9), the deep learning module (9) is electrically connected with the pixel coordinate and depth value extraction module (10), the pixel coordinate and depth value extraction module (10) is electrically connected with the three-dimensional coordinate calculation module (11), and the three-dimensional coordinate calculation module (11) is electrically connected with the image enhancement module (12).
2. The depth image enhancement processing device according to claim 1, wherein: the deep learning module (9) comprises a training dividing module (91), the training dividing module (91) is electrically connected with a weight parameter adjusting module (92), and the weight parameter adjusting module (92) is electrically connected with a virtual object superposition module (93).
3. The depth image enhancement processing apparatus according to any one of claims 1 to 2, wherein: the depth image enhancement processing method comprises the following specific steps:
S1: the image acquisition module (1) comprises a depth camera; a depth image is acquired by the depth camera, and the original image to be enhanced is enhanced with a conventional enhancement algorithm to obtain an original-image/enhanced-image pair; camera parameters of the depth camera are acquired at the same time, after which the image conversion module (2) performs format conversion on the acquired original depth image to obtain a depth image in a target format;
S2: the plane image separation module (3) separates a plane image from the depth image collected by the depth camera; the edge extraction module (4) performs edge extraction on the separated plane enhanced image to obtain first edge information, and performs edge extraction on the original depth image acquired by the depth camera to obtain second edge information; the mask image generation module (5) determines a target mask generation mode according to whether the region to be repaired in the target-format depth image lies at an edge of the image, and obtains a mask of the region to be repaired based on that mode: when the region to be repaired lies at an edge of the image, an inverse binary thresholding function is determined for the mask of the region to be repaired and a colour threshold range is set for the region, yielding the mask of the region to be repaired;
S3: the filling and repairing module (6) fills holes in the region to be repaired of the target-format depth image by combining the mask of the region with a fast marching algorithm, obtaining a repaired depth image; the dual filtering module (7) then filters the hole-repaired depth image twice in succession to remove image edge noise, obtaining a depth image after image enhancement processing;
S4: the image selection processing module (8) crops the enhanced depth image into image blocks of a size convenient to process and stacks them into image batches; the mean square error of each cropped image-block pair is calculated, and batches whose error exceeds a threshold are selected as input for network training; the deep learning module (9) then performs deep learning on the plane image through a pre-trained neural network model and determines a target in the plane image; taking the original image as input and the enhanced image as target, it trains a deep neural network capable of enhancing the original image and performs deep learning training on the collected sample images, the neural network comprising a plurality of network structures and corresponding weight parameters; an edge information weighted value is determined from the first edge information and the second edge information;
S5: the pixel coordinate and depth value extraction module (10) extracts the pixel coordinates of the target in the depth image and the depth value corresponding to each pixel; the first edge information and the second edge information are fused to obtain an edge information weighted value for each pixel; the three-dimensional coordinate calculation module (11) calculates the three-dimensional coordinates of the target from the pixel coordinates and depth values, and the image enhancement module (12) enhances the depth image according to the edge information weighted value.
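The claims leave the mask generation of step S2 unspecified beyond naming an inverse binary thresholding function. Assuming hole pixels are encoded as zero depth (common for consumer depth cameras, but an assumption here), a minimal NumPy sketch of that thresholding — function and parameter names are illustrative, not from the patent:

```python
import numpy as np

def inverse_binary_threshold(depth, thresh=0, maxval=255):
    """Inverse binary thresholding: pixels at or below `thresh`
    (e.g. missing-depth zeros) become `maxval`; all others become 0.
    The result serves as the mask of the region to be repaired."""
    return np.where(depth <= thresh, maxval, 0).astype(np.uint8)

depth = np.array([[0, 5], [10, 0]], dtype=np.uint16)
mask = inverse_binary_threshold(depth)
```

OpenCV's `cv2.threshold(src, thresh, maxval, cv2.THRESH_BINARY_INV)` implements the same operation for single-channel images.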
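Step S3 names a fast marching algorithm for hole filling; `cv2.inpaint` with `cv2.INPAINT_TELEA` is the usual off-the-shelf implementation. As a dependency-free illustration only, the sketch below fills holes inward from their boundary by averaging valid 4-neighbours — a crude stand-in for fast marching, not the patent's method:

```python
import numpy as np

def fill_holes(depth, hole_value=0, max_iters=100):
    """Iteratively fill hole pixels with the mean of their valid
    4-neighbours, marching inward from the hole boundary."""
    d = depth.astype(np.float64)
    valid = d != hole_value
    for _ in range(max_iters):
        if valid.all():
            break
        p = np.pad(d, 1)          # zero padding; masked out by vpad below
        vpad = np.pad(valid, 1)   # padded border counts as invalid
        # sum and count of valid 4-neighbours for every pixel
        nsum = (p[:-2, 1:-1] * vpad[:-2, 1:-1] + p[2:, 1:-1] * vpad[2:, 1:-1]
                + p[1:-1, :-2] * vpad[1:-1, :-2] + p[1:-1, 2:] * vpad[1:-1, 2:])
        ncnt = (vpad[:-2, 1:-1].astype(int) + vpad[2:, 1:-1]
                + vpad[1:-1, :-2] + vpad[1:-1, 2:])
        fill = ~valid & (ncnt > 0)
        d[fill] = nsum[fill] / ncnt[fill]
        valid |= fill
    return d
```

Telea's fast marching method additionally weights contributing neighbours by distance and gradient direction; this sketch keeps only the inward-marching order.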
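Step S3's dual filtering is specified only as two successive passes; a bilateral filter (e.g. `cv2.bilateralFilter`) applied twice would be a typical edge-preserving choice. As a self-contained sketch under that assumption, two passes of a 3×3 median filter, which likewise suppresses edge noise without blurring structure:

```python
import numpy as np

def median3x3(img):
    """One pass of a 3x3 median filter (edge-preserving smoothing)."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    stack = np.stack([p[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def dual_filter(img):
    """Two successive filtering passes, as in the dual-filtering step."""
    return median3x3(median3x3(img))
```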
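Step S4's block cropping and mean-square-error selection might look like the following sketch; the block size and error threshold are illustrative assumptions, as the claim fixes neither:

```python
import numpy as np

def select_training_blocks(orig, enhanced, block=8, mse_thresh=1.0):
    """Crop an (original, enhanced) image pair into block x block tiles
    and keep only pairs whose mean squared error exceeds the threshold,
    so training focuses on regions the enhancement actually changed."""
    h, w = orig.shape[:2]
    pairs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            o = orig[y:y + block, x:x + block].astype(np.float64)
            e = enhanced[y:y + block, x:x + block].astype(np.float64)
            if np.mean((o - e) ** 2) > mse_thresh:
                pairs.append((o, e))
    return pairs
```

The selected pairs would then be batched and fed to the deep neural network, with the original block as input and the enhanced block as the training target.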
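The edge-information fusion of step S5 can be read as a per-pixel weighted sum of the first and second edge maps; a minimal sketch with an assumed scalar weight `w` (the claim does not say how the weight is derived):

```python
import numpy as np

def fuse_edges(e1, e2, w=0.5):
    """Per-pixel weighted fusion of two edge maps: w*e1 + (1-w)*e2,
    producing the edge information weighted value that steers the
    final enhancement."""
    return w * e1.astype(np.float64) + (1.0 - w) * e2.astype(np.float64)
```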
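Step S5's three-dimensional coordinate calculation, under a standard pinhole camera model (an assumption; the patent does not name the model), is a back-projection using the camera intrinsics acquired in S1:

```python
import numpy as np

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z into camera-frame
    coordinates via the pinhole model:
        X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z
    where (fx, fy) are focal lengths in pixels and (cx, cy) the
    principal point."""
    z = float(depth)
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
```

A pixel at the principal point maps onto the optical axis, e.g. `pixel_to_3d(320, 240, 2.0, fx=500, fy=500, cx=320, cy=240)` gives (0, 0, 2).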
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110521554.6A CN113223070A (en) | 2021-05-13 | 2021-05-13 | Depth image enhancement processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113223070A true CN113223070A (en) | 2021-08-06 |
Family
ID=77095482
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110521554.6A Pending CN113223070A (en) | 2021-05-13 | 2021-05-13 | Depth image enhancement processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113223070A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113854780A (en) * | 2021-09-30 | 2021-12-31 | 重庆清微文化旅游有限公司 | Intelligent dispensing method and system without cross contamination of materials |
CN114022999A (en) * | 2021-10-27 | 2022-02-08 | 北京云迹科技有限公司 | Method, device, equipment and medium for detecting shortage of goods of vending machine |
CN114359123A (en) * | 2022-01-12 | 2022-04-15 | 广东汇天航空航天科技有限公司 | Image processing method and device |
CN114494095A (en) * | 2022-01-28 | 2022-05-13 | 北京百度网讯科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN115063303A (en) * | 2022-05-18 | 2022-09-16 | 大连理工大学 | Image 3D method based on image restoration |
CN117522760A (en) * | 2023-11-13 | 2024-02-06 | 书行科技(北京)有限公司 | Image processing method, device, electronic equipment, medium and product |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109685732A (en) * | 2018-12-18 | 2019-04-26 | 重庆邮电大学 | A kind of depth image high-precision restorative procedure captured based on boundary |
CN109683699A (en) * | 2019-01-07 | 2019-04-26 | 深圳增强现实技术有限公司 | The method, device and mobile terminal of augmented reality are realized based on deep learning |
CN110223383A (en) * | 2019-06-17 | 2019-09-10 | 重庆大学 | A kind of plant three-dimensional reconstruction method and system based on depth map repairing |
CN110675346A (en) * | 2019-09-26 | 2020-01-10 | 武汉科技大学 | Image acquisition and depth map enhancement method and device suitable for Kinect |
Non-Patent Citations (1)
Title |
---|
ZHANG Fangfang et al.: "Depth image enhancement algorithm based on edge information guided filtering", Computer Applications and Software, 15 August 2017 (2017-08-15) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113223070A (en) | Depth image enhancement processing method and device | |
Rematas et al. | Soccer on your tabletop | |
CN101588445B (en) | Video area-of-interest exacting method based on depth | |
CN112543317B (en) | Method for converting high-resolution monocular 2D video into binocular 3D video | |
CN109462747B (en) | DIBR system cavity filling method based on generation countermeasure network | |
EP2595116A1 (en) | Method for generating depth maps for converting moving 2d images to 3d | |
US10834379B2 (en) | 2D-to-3D video frame conversion | |
CN106060509B (en) | Introduce the free view-point image combining method of color correction | |
CN107369204B (en) | Method for recovering basic three-dimensional structure of scene from single photo | |
CN107944459A (en) | A kind of RGB D object identification methods | |
US10127714B1 (en) | Spherical three-dimensional video rendering for virtual reality | |
CN108648264A (en) | Underwater scene method for reconstructing based on exercise recovery and storage medium | |
CN112734914A (en) | Image stereo reconstruction method and device for augmented reality vision | |
Kuo et al. | Depth estimation from a monocular view of the outdoors | |
CN114677479A (en) | Natural landscape multi-view three-dimensional reconstruction method based on deep learning | |
KR101125061B1 (en) | A Method For Transforming 2D Video To 3D Video By Using LDI Method | |
CN107958489B (en) | Curved surface reconstruction method and device | |
CN113724273A (en) | Edge light and shadow fusion method based on neural network regional target segmentation | |
CN109218706A (en) | A method of 3 D visual image is generated by single image | |
CN102708570B (en) | Method and device for obtaining depth map | |
CN115063303A (en) | Image 3D method based on image restoration | |
Kuo et al. | 2D-to-3D conversion for single-view image based on camera projection model and dark channel model | |
Tran et al. | Spatially consistent view synthesis with coordinate alignment | |
Liu et al. | Stereoscopic view synthesis based on region-wise rendering and sparse representation | |
Ramirez et al. | An effective inpainting technique for hole filling in DIBR synthesized images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||