CN111179333B - Defocus blur kernel estimation method based on binocular stereo vision - Google Patents

Defocus blur kernel estimation method based on binocular stereo vision

Info

Publication number
CN111179333B
Authority
CN
China
Prior art keywords
parallax
image
blur
kernel
fuzzy
Prior art date
Legal status
Active
Application number
CN201911247805.5A
Other languages
Chinese (zh)
Other versions
CN111179333A (en)
Inventor
陈则津
葛宝臻
陈雷
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University
Priority to CN201911247805.5A
Publication of CN111179333A
Application granted
Publication of CN111179333B

Classifications

    • G06T 7/593 — Depth or shape recovery from multiple images, from stereo images
    • G06F 18/23213 — Non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions, with fixed number of clusters, e.g. K-means clustering
    • G06T 7/13 — Edge detection
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/10004 — Still image; Photographic image
    • G06T 2207/10012 — Stereo images
    • G06T 2207/20228 — Disparity calculation for image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a defocus blur kernel estimation method based on binocular stereo vision, comprising the following steps: initial blur kernel calculation; acquisition of equivalent blurred images; stereo matching calculation, in which gray-level information and blur kernel information are combined as the data term of the energy function of a global matching method, and the global matching method performs stereo matching on the equivalent blurred images by optimizing the energy function to obtain a dense disparity map; disparity post-processing, in which weighted average filtering is applied to the dense disparity map; and final blur kernel calculation, in which the post-processed disparity map is used as the initial disparity map and the blur kernel calculation is performed again to obtain the final blur kernel. The invention fully exploits the relation between disparity and blur kernel in binocular stereo vision, improves the accuracy of blur kernel calculation by using the binocular image pair, and is suitable for the problem of defocus blur kernel estimation based on binocular stereo vision.

Description

Defocus blur kernel estimation method based on binocular stereo vision
Technical Field
The invention relates to a defocus blur kernel estimation method, in particular to a defocus blur kernel estimation method based on binocular stereo vision.
Background
During image acquisition, images are degraded to different degrees by imperfections of the imaging system and by interference from the external environment. Defocus blur is a common form of degradation, referring to image blur caused by the depth of field or by focusing errors of the imaging system. In addition, images captured over long distances are affected by atmospheric turbulence, which also produces defocus blur. For these reasons it is difficult for an imaging system to acquire sharp images, and the two views of a binocular image pair suffer defocus-blur degradation of different degrees, which reduces the accuracy of three-dimensional reconstruction. Whether for deblurring algorithms or for stereo matching algorithms operating on blurred images, estimation of the blur kernel has always been a key problem in the field of blurred images.
Most existing blur kernel estimation methods work on a single image, or on an image sequence captured from a single viewpoint: they extract edge texture information from the image, compute the blur kernels of the edge-region pixels, and then propagate them to obtain a blur kernel map of the whole image. On this basis, Tang et al. proposed computing the blur kernel from spectrum contrast; since defocus blur attenuates the spectrum amplitude of object edge regions, the blur kernel map can be computed from the relation between the blur level of a re-blurred image and its spectrum [2]. Such methods are only suitable for defocus blur kernel estimation on a monocular image and cannot effectively exploit the additional information provided by a binocular image pair.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a defocus blur kernel estimation method based on binocular stereo vision, aimed at the problem of estimating the defocus blur kernel of a binocular degraded image pair.
The technical solution of the invention is described in detail below:
A defocus blur kernel estimation method based on binocular stereo vision comprises the following five steps:
Step 1, initial blur kernel calculation: perform stereo matching on the binocular image pair to obtain an initial disparity map, and compute the initial blur kernels of the binocular images from the disparity values of the edge pixels;
Step 2, acquisition of equivalent blurred images: compute a relative blur kernel from the initial blur kernels of the binocular images, and apply adaptive Gaussian filtering to the binocular images to obtain equivalent blurred images;
Step 3, stereo matching calculation: combine gray-level information and blur kernel information as the data term of the energy function of a global matching method, and perform stereo matching on the equivalent blurred images by optimizing the energy function to obtain a dense disparity map;
Step 4, disparity post-processing: apply weighted average filtering to the dense disparity map obtained in Step 3;
Step 5, final blur kernel calculation: take the post-processed disparity map as the initial disparity map and perform the blur kernel calculation again to obtain the final blur kernel.
The initial blur kernel calculation specifically comprises the following steps:
According to the initial disparity map of the binocular image pair, compute the average disparity value d_k of the edge pixels of each image with a K-means clustering algorithm, where k = 1, 2 denotes the left and right image respectively, and compute the initial blur kernel σ_k of each pixel of the binocular images from the camera calibration parameters and the average disparity d_k.
The acquisition of the equivalent blurred images specifically comprises the following steps:
Compute the relative blur kernel Δσ(p) = √|σ_1(p)^2 − σ_2(q)^2| from the initial blur kernels of the pixels of the binocular images, where σ_1(p) and σ_2(q) denote the initial blur kernels of an arbitrary left-image pixel p and of its corresponding right-image pixel q, respectively. Gaussian filtering is applied, according to the relative blur kernel Δσ(p), to the less blurred of the two corresponding regions of the binocular images, with Δσ(p) selected as the standard deviation of the Gaussian filter, so as to obtain an equivalent blurred binocular image pair whose corresponding regions have approximately the same degree of blur.
The global matching method specifically comprises the following steps:
Global matching is a basic approach in stereo matching, in which the matching is computed by optimizing an energy function. The energy function consists of a data term and a smoothness term; here the weighted sum of the relative blur kernel information and the pixel gray-level information of the binocular images is used as the data term, and the pixel gray-level information is used as the smoothness term. A graph model is constructed from the equivalent blurred binocular images and the energy function, the disparity values are treated as labels, and the optimal label is selected for each pixel by minimizing the global energy function, yielding a dense disparity map.
The weighted average filtering specifically comprises the following steps:
A support window is selected in the disparity map; the blur kernel difference between each pixel in the window and the center pixel is used as its weight, and the weighted average of the disparities of all pixels in the window is taken as the disparity value of the center pixel, giving the final disparity map.
The invention has the following beneficial effects:
1. The invention fully takes into account the relation between the blur kernel and the disparity, and improves the accuracy of the blur kernel calculation through the disparity constraint of binocular stereo vision. The principle of the proposed method is clear, its results are more accurate and reliable, and it offers a new approach to defocus blur kernel calculation in binocular stereo vision.
2. In the blur kernel calculation, the K-means algorithm and the weighted-mean-filtering post-processing select more reliable support points and suppress the interference of irrelevant points, improving the accuracy of the blur kernel calculation.
Drawings
FIG. 1 is a flow chart of the defocus blur kernel estimation method based on binocular stereo vision of the present invention;
FIG. 2 shows the defocused binocular images used in the example of the present invention, where (a) is the left view and (b) is the right view;
FIG. 3 shows the output results of the example of the present invention, where (a) and (b) are the left and right views of the equivalent blurred images obtained in Step 2, and (c) is the blur kernel (defocus) map obtained by the present invention.
Detailed Description
To make the technical solution of the invention clearer, the specific implementation of the invention is further described below with reference to the accompanying drawings.
The processing of the defocus blur kernel estimation method based on binocular stereo vision is described taking the defocused binocular images shown in fig. 2(a) and 2(b) as an example:
1. Initial blur kernel calculation:
The defocused binocular images shown in fig. 2(a) and 2(b) are denoted I_1 and I_2, respectively. A disparity map P_0 of the binocular pair is computed with an SAD-based local matching algorithm, the edge pixels of the binocular images are extracted with the Canny edge detector, and their disparity values form the sets Ψ_k, where k = 1, 2 denotes the left and right image respectively. The average disparity value d_k of the edge pixel set Ψ_k is computed with a K-means clustering algorithm, and the initial blur kernel σ_k(p) of every pixel of the binocular images is then computed, forming the initial blur kernel map. Here D_k denotes the entrance pupil diameter of the imaging system, B the system baseline, d the disparity of a pixel in the initial disparity map, and d_k the average disparity of the edge pixels; according to the calibration information of the images, the parameters are set to D_1 = D_2 = 5.6 and B = 3080.117.
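The patent's own expression for σ_k(p) appears as an equation image and is not reproduced above. Purely as an illustration, the Python sketch below assumes a linear relation σ_k(p) = (D_k / B)·|d(p) − d_k| built from the quantities just listed, and assumes that K-means is used to take the mean of the dominant cluster of edge disparities as d_k; the function name, Canny thresholds, and cluster count are likewise illustrative.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def initial_blur_kernel_map(image, disparity, D_pupil=5.6, B=3080.117, n_clusters=3):
    """Step 1 sketch: per-pixel initial blur kernel for one view.

    `image` is the uint8 gray view, `disparity` its initial disparity map
    (e.g. from an SAD-based local matcher).  The relation
    sigma = (D_pupil / B) * |d - d_k| is an assumed stand-in for the
    patent's calibration-based formula.
    """
    # Edge pixels of the view (Canny thresholds are illustrative).
    edges = cv2.Canny(image, 50, 150)
    edge_disp = disparity[edges > 0].reshape(-1, 1).astype(np.float64)

    # Cluster the edge disparities and take the mean of the largest cluster
    # as the "average disparity" d_k (an assumption about how K-means is
    # used to discard outlying support points).
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(edge_disp)
    dominant = np.argmax(np.bincount(km.labels_))
    d_k = float(edge_disp[km.labels_ == dominant].mean())

    # Per-pixel initial blur kernel map sigma_k(p).
    sigma = (D_pupil / B) * np.abs(disparity.astype(np.float64) - d_k)
    return sigma, d_k
```

With the calibration values quoted above as defaults, calling initial_blur_kernel_map(I1_gray, P0_left) would return the left-view blur kernel map σ_1 together with the edge disparity d_1.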
2. Acquisition of the equivalent blurred images:
The relative blur kernel Δσ(p) = √|σ_1(p)^2 − σ_2(q)^2| is computed from the initial blur kernels obtained above, where σ_1(p) and σ_2(q) denote the blur kernels of corresponding pixels of the left and right images. According to the relative blur kernel, Gaussian filtering G_Δσ is applied to the less blurred of the two corresponding regions of the binocular images to obtain the equivalent blurred images: the sharper member of each corresponding pixel pair is convolved with G_Δσ, i.e. Ĩ_1(p) = I_1(p) ⊗ G_Δσ when σ_1(p) < σ_2(q), and Ĩ_2(q) = I_2(q) ⊗ G_Δσ when σ_2(q) < σ_1(p).
Here I_1(p) and I_2(q) denote corresponding pixels of the left and right original images, Ĩ_1 and Ĩ_2 the corresponding pixels of the equivalent blurred images, G_Δσ a Gaussian point spread function with standard deviation Δσ, and ⊗ the convolution operation. The equivalent blurred images Ĩ_1 and Ĩ_2 obtained by this adaptive blur compensation of the binocular images I_1 and I_2 are shown in fig. 3(a) and 3(b).
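The following sketch (illustrative names and parameters, not the patent's code) implements this blur compensation for the left view under the Gaussian composition rule Δσ = √|σ_1² − σ_2²| used above, assuming rectified grayscale images and a left-view disparity map giving the correspondence q = (y, x − d); the spatially varying Gaussian filtering is approximated by quantizing Δσ into a small number of levels, and the right view would be treated symmetrically.

```python
import numpy as np
import cv2

def equivalent_blur_left(I1, sigma1, sigma2, disparity, n_levels=8):
    """Step 2 sketch: re-blur the sharper left-view pixels so that the
    corresponding regions of the two views have a similar degree of blur."""
    h, w = I1.shape
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    qx = np.clip(xs - np.round(disparity).astype(int), 0, w - 1)

    s1 = sigma1.astype(np.float64)
    s2 = sigma2[ys, qx].astype(np.float64)          # right-view sigma at q
    dsigma = np.sqrt(np.abs(s1 ** 2 - s2 ** 2))     # relative blur kernel

    # Approximate per-pixel Gaussian filtering with a bank of pre-blurred
    # copies, one per quantised dsigma level.
    levels = np.linspace(max(dsigma.min(), 1e-3), dsigma.max() + 1e-3, n_levels)
    blurred = [cv2.GaussianBlur(I1, (0, 0), s) for s in levels]
    idx = np.abs(dsigma[..., None] - levels[None, None, :]).argmin(axis=2)

    eq1 = I1.copy()
    sharper = s1 < s2                                # left pixel is the sharper one
    for k in range(n_levels):
        mask = sharper & (idx == k)
        eq1[mask] = blurred[k][mask]
    return eq1
```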
3. Stereo matching calculation:
The global matching method performs stereo matching on the equivalent blurred images Ĩ_1 and Ĩ_2 obtained above by optimizing an energy function, yielding a dense disparity map P_1.
In the global matching algorithm, a graph model is constructed from the image and the energy function, and the disparity values are treated as labels. The network graph G = <V, E> consists of a node set V and an edge set E; a cut partitions G into a set S containing the source node s_0 and a set T containing the sink node t_0, and the sum of the capacities of the edges in the cut is called the capacity of the cut. The capacity of the minimum cut is obtained by computing the maximum flow of the network, so that the global energy function is minimized and an optimal label is selected for each pixel, yielding the dense disparity map.
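The min-cut/max-flow step can be illustrated on a toy two-label problem (two candidate disparities along a short scanline). The graph construction below and all of its numbers are illustrative; the patent's disparity labelling is multi-label and would build expansion-style binary moves on the same primitive. SciPy's maximum_flow is used to obtain both the minimum-cut value and the optimal labelling:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

# Toy binary labelling of a 1D scanline: minimise
#   E(f) = sum_i D_i(f_i) + lam * sum_i [f_i != f_{i+1}],  f_i in {0, 1},
# via an s-t minimum cut obtained from a maximum flow (integer capacities).
D0 = np.array([1, 1, 8, 9, 8])        # cost of assigning label 0 (made-up data)
D1 = np.array([9, 8, 1, 1, 2])        # cost of assigning label 1
lam, n = 3, len(D0)
s, t = n, n + 1                       # source and sink node indices

cap = np.zeros((n + 2, n + 2), dtype=np.int32)
for i in range(n):
    cap[s, i] = D1[i]                 # cutting s->i  <=>  i ends up with label 1
    cap[i, t] = D0[i]                 # cutting i->t  <=>  i ends up with label 0
for i in range(n - 1):                # Potts smoothness between neighbours
    cap[i, i + 1] = cap[i + 1, i] = lam

res = maximum_flow(csr_matrix(cap), s, t)
print("minimum energy =", res.flow_value)          # 9 for this data

# Recover the labelling: nodes still reachable from the source in the
# residual graph are on the source side (label 0), the rest take label 1.
residual = cap - res.flow.toarray()
reachable, stack = {s}, [s]
while stack:
    u = stack.pop()
    for v in np.nonzero(residual[u] > 0)[0]:
        if v not in reachable:
            reachable.add(v)
            stack.append(v)
labels = [0 if i in reachable else 1 for i in range(n)]
print("labels =", labels)                           # [0, 0, 1, 1, 1]
```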
The energy function of the global matching algorithm consists of a data term and a smoothness term: the blur kernel information and the pixel gray-level information of the binocular images are combined as the data term E_data(f), the pixel gray-level information is used as the smoothness term E_smoothness(f), and the disparity map P_1 is obtained by minimizing the energy function.
The data term has the form
E_data(f) = Σ_p D_p(f_p),
where D_p combines the gray-level matching cost and the relative blur kernel cost, α is a scaling factor adjusting their relative weight, and T is the truncation threshold of the data term.
The smoothness term E_smoothness(f) penalizes neighboring pixels that are assigned different disparities, where λ is the gray-level threshold parameter controlling the penalty.
In this embodiment, the algorithm parameters are set to α = 0.15, T = 30, and λ = 5.
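The exact expressions for D_p and for the smoothness penalty are given in the patent as equation images and are not reproduced above; the sketch below is therefore only one plausible reading that uses the parameters just quoted. It assumes the data term is the α-weighted combination of the blur kernel difference and the gray-level difference of the equivalent blurred pair, truncated at T, and that the smoothness term is a Potts-style penalty that is larger where neighboring pixels have similar gray levels (|I_p − I_q| < λ); the constant K and the exact role of α are assumptions.

```python
import numpy as np

ALPHA, T, LAM = 0.15, 30.0, 5.0      # parameters quoted in this embodiment

def data_cost(eq_left, eq_right, sigma_left, sigma_right, y, x, d):
    """Assumed form of D_p(f_p) for candidate disparity d at pixel (y, x):
    alpha-weighted mix of blur kernel difference and gray difference of the
    equivalent blurred pair, truncated at threshold T."""
    xq = max(0, x - int(d))
    gray = abs(float(eq_left[y, x]) - float(eq_right[y, xq]))
    blur = abs(float(sigma_left[y, x]) - float(sigma_right[y, xq]))
    return min(ALPHA * blur + (1.0 - ALPHA) * gray, T)

def smoothness_cost(I, p, q, fp, fq, K=1.0):
    """Assumed Potts-style V_pq for neighbouring pixels p, q (tuples (y, x)):
    zero for equal disparities, a doubled penalty inside low-contrast regions
    (|I_p - I_q| < lambda), a single penalty across strong gray edges."""
    if fp == fq:
        return 0.0
    return 2.0 * K if abs(float(I[p]) - float(I[q])) < LAM else K
```

The global energy of a labelling f is then E(f) = Σ_p data_cost(…, f_p) + Σ_(p,q) smoothness_cost(…, f_p, f_q), which the graph-cut machinery sketched above, extended to multiple labels, minimizes.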
4. Disparity post-processing:
The disparity map P_1 obtained above is refined with a weighted-average-filtering disparity post-processing step. A support window is selected in P_1, the blur kernel difference between each pixel in the window and the center pixel defines the weight ω_pq = exp[−(σ_p − σ_q)^2], and the weighted average of the disparities of all pixels in the window,
d_p = Σ_{q∈N(p)} ω_pq d_q / Σ_{q∈N(p)} ω_pq,
is taken as the disparity value of the center pixel, giving the disparity map P_2. Here σ_p and σ_q denote the blur kernels of the center pixel p and of a support pixel q, N(p) denotes the support window of the center pixel p, and d_q denotes the disparity of pixel q within the support window.
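A minimal sketch of this filter follows; the square window radius and the clipped handling of image borders are implementation choices not specified in the patent.

```python
import numpy as np

def weighted_disparity_filter(disparity, sigma, radius=3):
    """Step 4: blur-kernel-weighted average filtering of the disparity map.

    The weight of a support pixel q for centre pixel p is
    w_pq = exp(-(sigma_p - sigma_q)^2), and the filtered disparity is the
    weighted mean of the disparities inside the (2*radius+1)^2 window.
    """
    h, w = disparity.shape
    out = np.empty_like(disparity, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            win_d = disparity[y0:y1, x0:x1].astype(np.float64)
            win_s = sigma[y0:y1, x0:x1]
            wgt = np.exp(-(sigma[y, x] - win_s) ** 2)
            out[y, x] = (wgt * win_d).sum() / wgt.sum()
    return out
```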
5. Final blur kernel calculation:
The disparity map P_2 obtained after the disparity post-processing is taken as the initial disparity map, the blur kernel calculation of step 1 is repeated, and the recomputed blur kernels are taken as the final blur kernels, giving the blur kernel map shown in fig. 3(c).
The above example is used only as a preferred embodiment to illustrate the steps of the invention and should not be used to limit the scope of the claims. All equivalent changes made within the principles of the invention are intended to fall within its scope of protection.

Claims (1)

1. A defocus blur kernel estimation method based on binocular stereo vision, characterized by comprising the following five steps:
Step 1, initial blur kernel calculation: performing stereo matching on the binocular image pair to obtain an initial disparity map, and computing the initial blur kernels of the binocular images from the disparity values of the edge pixels;
Step 2, acquisition of equivalent blurred images: computing a relative blur kernel from the initial blur kernels of the binocular images, and applying adaptive Gaussian filtering to the binocular images to obtain equivalent blurred images;
Step 3, stereo matching calculation: combining gray-level information and blur kernel information as the data term of the energy function of a global matching method, and performing stereo matching on the equivalent blurred images with the global matching method by optimizing the energy function to obtain a dense disparity map;
Step 4, disparity post-processing: applying weighted average filtering to the dense disparity map obtained in Step 3;
Step 5, final blur kernel calculation: taking the post-processed disparity map as the initial disparity map and performing the blur kernel calculation again to obtain the final blur kernel;
the computation of the initial blur kernels of the binocular images is specifically:
according to the initial disparity map of the binocular image pair, computing the average disparity value d_k of the edge pixels of each image with a K-means clustering algorithm, where k = 1, 2 denotes the left and right image respectively, and computing the initial blur kernel σ_k of each pixel of the binocular images from the camera calibration parameters and the average disparity d_k;
the acquisition of the equivalent blurred images is specifically:
computing the relative blur kernel Δσ(p) = √|σ_1(p)^2 − σ_2(q)^2| from the initial blur kernels of the pixels of the binocular images, where σ_1(p) and σ_2(q) denote the initial blur kernels of an arbitrary left-image pixel p and of its corresponding right-image pixel q respectively; applying Gaussian filtering, according to the relative blur kernel Δσ(p), to the less blurred of the two corresponding regions of the binocular images, with Δσ(p) selected as the standard deviation of the Gaussian filter, so as to obtain an equivalent blurred binocular image pair whose corresponding regions have approximately the same degree of blur;
the global matching method is specifically:
performing stereo matching by optimizing an energy function consisting of a data term and a smoothness term, taking the weighted sum of the relative blur kernel information and the pixel gray-level information of the binocular images as the data term and the pixel gray-level information as the smoothness term, constructing a graph model from the equivalent blurred binocular images and the energy function, treating the disparity values as labels, and selecting the optimal label for each pixel by minimizing the global energy function, so as to obtain the dense disparity map;
the weighted average filtering is specifically:
selecting a support window in the disparity map, taking the blur kernel difference between each pixel in the window and the center pixel as its weight, and taking the weighted average of the disparities of all pixels in the window as the disparity value of the center pixel, so as to obtain the final disparity map.
CN201911247805.5A 2019-12-09 2019-12-09 Defocus blur kernel estimation method based on binocular stereo vision Active CN111179333B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911247805.5A CN111179333B (en) 2019-12-09 2019-12-09 Defocus blur kernel estimation method based on binocular stereo vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911247805.5A CN111179333B (en) 2019-12-09 2019-12-09 Defocus blur kernel estimation method based on binocular stereo vision

Publications (2)

Publication Number Publication Date
CN111179333A CN111179333A (en) 2020-05-19
CN111179333B true CN111179333B (en) 2024-04-26

Family

ID=70657245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911247805.5A Active CN111179333B (en) 2019-12-09 2019-12-09 Defocus blur kernel estimation method based on binocular stereo vision

Country Status (1)

Country Link
CN (1) CN111179333B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258635B (en) * 2020-10-26 2023-07-21 北京石油化工学院 Three-dimensional reconstruction method and device based on improved binocular matching SAD algorithm
CN113965742B (en) * 2021-02-28 2022-04-19 北京中科慧眼科技有限公司 Dense disparity map extraction method and system based on multi-sensor fusion and intelligent terminal
CN112950698B (en) * 2021-03-18 2024-03-26 北京拙河科技有限公司 Depth estimation method, device, medium and equipment based on binocular defocused image

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8203602B2 (en) * 2009-02-06 2012-06-19 Robert Bosch Gmbh Depth-aware blur kernel estimation method for iris deblurring
US8885941B2 (en) * 2011-09-16 2014-11-11 Adobe Systems Incorporated System and method for estimating spatially varying defocus blur in a digital image

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110024966A (en) * 2009-09-03 2011-03-09 포항공과대학교 산학협력단 Stereo matching apparatus and its method
CN105809185A (en) * 2015-12-31 2016-07-27 天津大学 High-spectrum image nonlinear demixing method based on neural network and differential search
CN105975912A (en) * 2016-04-27 2016-09-28 天津大学 Hyperspectral image nonlinearity solution blending method based on neural network
CN106225774A (en) * 2016-08-26 2016-12-14 武汉大学 A kind of unmanned agriculture tractor road measurement apparatus based on computer vision and method
CN106683174A (en) * 2016-12-23 2017-05-17 上海斐讯数据通信技术有限公司 3D reconstruction method, apparatus of binocular visual system, and binocular visual system
CN107749051A (en) * 2017-10-17 2018-03-02 长沙全度影像科技有限公司 A kind of unzoned lens space-variant blur core smoothing method based on mean filter
CN108389171A (en) * 2018-03-08 2018-08-10 深圳市唯特视科技有限公司 A kind of light field deblurring and depth estimation method based on Combined estimator fuzzy variable
CN108510529A (en) * 2018-03-14 2018-09-07 昆明理工大学 A kind of figure based on adaptive weight cuts solid matching method
CN110223377A (en) * 2019-05-28 2019-09-10 上海工程技术大学 One kind being based on stereo visual system high accuracy three-dimensional method for reconstructing

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A dynamic programming global stereo matching algorithm based on multi-neighborhood nonlinear diffusion; 耿冬冬, 罗娜; Journal of East China University of Science and Technology (Natural Science Edition); 2017, No. 5 *
Accurate blur kernel estimation for blind deblurring of noisy blurred images; 孙士洁, 赵怀慈, 郝明国, 李波; Journal of Computer-Aided Design & Computer Graphics; 2016, Vol. 28, No. 5 *
An optimized MAP restoration method for binocular stereo vision images; 洪汉玉, 范艳, 时愈, 华夏, 张天序; Journal of Wuhan University of Science and Technology; 2016, No. 1 *
An intuitionistic fuzzy kernel clustering image segmentation algorithm incorporating local information; 湛西羊, 李良群, 张富有; Journal of Signal Processing; 2017, Vol. 33, No. 3 *

Also Published As

Publication number Publication date
CN111179333A (en) 2020-05-19

Similar Documents

Publication Publication Date Title
EP3673461B1 (en) Systems and methods for hybrid depth regularization
US8385630B2 (en) System and method of processing stereo images
CN111179333B (en) Defocus blur kernel estimation method based on binocular stereo vision
TW200840365A (en) Motion-blur degraded image restoration method
CN107481271B (en) Stereo matching method, system and mobile terminal
CN108564620B (en) Scene depth estimation method for light field array camera
CN111223059B (en) Robust depth map structure reconstruction and denoising method based on guide filter
CN111160229B (en) SSD network-based video target detection method and device
CN110070610B (en) Feature point matching method, and feature point matching method and device in three-dimensional reconstruction process
Zhu et al. Image guided depth enhancement via deep fusion and local linear regularizaron
KR20180014149A (en) Apparatus and method for generating depth information
CN112991504A (en) Improved method for filling holes based on TOF camera three-dimensional reconstruction
KR101825218B1 (en) Apparatus and method for generaing depth information
CN116645300A (en) Simple lens point spread function estimation method
Ranipa et al. A practical approach for depth estimation and image restoration using defocus cue
CN112750156B (en) Light field imaging system, processing method and device
Suneetha et al. An Improved Denoising of Medical Images Based on Hybrid Filter Approach and Assess Quality Metrics
RU2716311C1 (en) Device for reconstructing a depth map with searching for similar blocks based on a neural network
Desai Modified fast and enhanced exemplar based inpainting algorithm for solving unknown row filling problem
Shuang et al. Algorithms for improving the quality of underwater optical images: A comprehensive review
Guan et al. Quality estimation based multi-focus image fusion
CN110555907B (en) Non-standardized picture three-dimensional reconstruction method
Tsai et al. An improved adaptive deconvolution algorithm for single image deblurring
Alkinani et al. Non-local means for stereo image denoising using structural similarity
Sakuragi et al. Depth estimation from stereo images using sparsity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant