CN106257537B - Spatial depth extraction method based on light field information

Spatial depth extraction method based on light field information

Info

Publication number
CN106257537B
Authority
CN
China
Prior art keywords
spatial depth
depth
region
spatial
viewpoints
Prior art date
Legal status
Active
Application number
CN201610578644.8A
Other languages
Chinese (zh)
Other versions
CN106257537A (en)
Inventor
李晓彤
马壮
岑兆丰
兰顺
陈灏
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201610578644.8A
Publication of CN106257537A
Application granted
Publication of CN106257537B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a spatial depth extraction method based on light field information, which includes the following steps: step 1, selecting a reference viewpoint in the four-dimensional light field data and calculating the spatial depth of the image edge components at the reference viewpoint; step 2, performing a region segmentation operation on the image at the reference viewpoint, dividing it into several regions according to color or brightness uniformity; step 3, correcting the spatial depth at the edge of each segmented region and interpolating the spatial depth of the central part of each region from the corrected spatial depth. By means of tabu search, region segmentation, and depth interpolation, the invention avoids the problems faced by spatial-depth description-function optimization methods, such as the difficulty of choosing a description function and the uncertain operation time, and achieves fast and accurate extraction of the spatial depth.

Description

Spatial depth extraction method based on light field information
Technical Field
The invention relates to the field of computer vision and computational photography, in particular to a spatial depth extraction method based on light field information.
Background
Information acquisition by light field imaging belongs to the field of computational photography, and spatial depth extraction belongs to the field of computer vision. The idea of recording a light field was first proposed by Gabriel Lippmann in a paper published in 1908. In 1996, M. Levoy and P. Hanrahan proposed the theory of light field rendering, reduced the plenoptic function to four dimensions, and called it the light field function. There are two forms of light field information collection: one uses a traditional camera plus a microlens array, and the other arranges a plurality of cameras into an array. The two methods share the same principle, and both obtain a sampling of the light field. Compared with the traditional imaging mode, light field imaging records not only the intensity at each sensor pixel but also the direction of the incident light. Light field imaging can therefore acquire parallax images of the imaged object in the horizontal and vertical directions, and thus contains spatial depth information about the object.
Most existing methods for extracting spatial depth from light field information are implemented by optimizing a spatial-depth description function; because cyclic optimization is required, the operation time is long and difficult to estimate accurately. Such optimization-based methods also need a strong and effective description function: the accuracy of the result is closely related to how well the description function matches the actual scene, and a description function suitable for all scenes is difficult to find.
Patent document CN104899870A discloses a depth estimation method based on light field data distribution. With reference to the characteristics of light field data, the method extracts a focus-related tensor from a series of refocused light field images obtained by changing the pixel distribution of the input light field image, and uses it to estimate the scene depth. Furthermore, a multivariate reliability model is established from the variation of the tensor with depth and the gradient information of the central sub-aperture texture map of the scene, in order to measure the initial depth quality of each point and optimize the initial estimate. The method has a large calculation amount and a long operation time.
Disclosure of Invention
In order to solve the problems of time complexity and calculation accuracy of spatial depth extraction methods based on function optimization, the invention provides a spatial depth extraction method that combines light field information; the spatial depth of the whole image is extracted in a single calculation, without iterative computation.
A spatial depth extraction method based on light field information comprises the following steps:
step 1, selecting a reference viewpoint from four-dimensional light field data, and calculating the spatial depth of image edge components at the reference viewpoint;
step 2, performing region segmentation operation on the image at the reference viewpoint, and dividing the image into a plurality of regions according to color or brightness uniformity;
and 3, correcting the spatial depth of the edge of each divided region, and performing interpolation operation on the spatial depth of the central part of the region according to the corrected spatial depth.
In step 1, the reference viewpoint may be a central viewpoint or an edge viewpoint. The spatial depth of the image edge components at the reference viewpoint can be obtained using only the image information of two other groups of viewpoints that lie on straight lines through the reference viewpoint.
The spatial depth refers to the distance from a point on an object corresponding to a certain pixel point in a field of view to the light field recording device.
Unlike conventional methods, the present invention does not require K × K viewpoints forming an array, but only 2K-1 viewpoints: K viewpoints lie on one straight line, the other K viewpoints lie on another straight line, and the intersection of the two lines is the reference viewpoint, so 2K-1 viewpoints are needed in total. There is no special requirement on the angle between the two lines of viewpoints.
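As an illustration only (not part of the original disclosure), the following Python/NumPy sketch shows one way such a cross of 2K-1 viewpoints could be indexed out of a four-dimensional light field array; the array name R, the axis order (u, v, x, y), and the axis-aligned choice of the two lines (the case of fig. 3a) are assumptions.

```python
import numpy as np

def select_cross_viewpoints(R, u0, v0):
    """Pick the 2K-1 viewpoints lying on the row and column that intersect
    at the reference viewpoint (u0, v0) of a light field R[u, v, x, y]."""
    ref_image = R[u0, v0]   # image at the reference viewpoint
    row_views = R[:, v0]    # K viewpoints sharing v = v0 (reference included)
    col_views = R[u0, :]    # K viewpoints sharing u = u0 (reference included)
    return ref_image, row_views, col_views

# Usage with synthetic data: K = 9 viewpoints per line, i.e. 2K-1 = 17 distinct viewpoints.
R = np.random.rand(9, 9, 64, 64)
ref, row, col = select_cross_viewpoints(R, u0=4, v0=4)
```

Only these two slices of the viewpoint array are needed for the rest of the method, which is what reduces the viewpoint count from K × K to 2K-1.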
The specific steps of step 1 are as follows:
step 1-1, light field information is obtained, and the four-dimensional light field information is decomposed into images of all viewpoints. Performing gradient operation on the image at the reference viewpoint so as to extract the edge component of the image at the reference viewpoint;
step 1-2, calculating the slope of the spatial depth characteristic line of the image edge component at the reference viewpoint by combining the imaging information of other viewpoints in the same row and column as the reference viewpoint;
and 1-3, calculating the spatial depth of the image edge component at the reference viewpoint according to the slope of the spatial depth characteristic line.
In step 1-1, the method of the invention places no requirement on the acquisition mode of the light field information; any data conforming to the four-dimensional light field form is valid.
In step 1-2, for edge positions whose gradient value is larger than the noise critical threshold, the corresponding original pixel and its neighboring pixels are searched and matched viewpoint by viewpoint; a group of matched positions found along the X and Y viewpoint directions forms a characteristic line of the spatial depth information. The characteristic line is computed as follows: tabu search is first used to narrow the search range of points on the characteristic line; within the search range, the point with the smallest difference from the already determined neighboring point on the characteristic line is found and added to the characteristic-line point column; the new point column then further determines the search range of the next step.
The noise critical threshold T can be set manually, with 0 ≤ T ≤ 0.25·G_max, where G_max is the maximum gray level.
The tabu search narrows the search range according to the result found at the previous viewpoint position, which greatly improves the operation efficiency.
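A minimal sketch of this narrowing idea is given below, assuming the viewpoints of one line are stored as a stack of images and using the absolute gray-level difference as the matching cost; the window size, the cost function, and the function names are illustrative assumptions, not the patented formulas.

```python
import numpy as np

def match_feature_line(views, u_ref, x_e, y_e, half_window=2):
    """Trace the matched x-position of an edge pixel across the viewpoints of one
    line of the array, restricting each search to a small window centred on the
    previous match (tabu-style narrowing). For brevity only the viewpoints on one
    side of the reference are traced.

    views      : array of shape (U, X, Y), the images sharing v = v0
    u_ref      : index of the reference viewpoint within `views`
    (x_e, y_e) : edge pixel position in the reference image
    Returns a list of (u, x) pairs forming the characteristic-line point column.
    """
    ref_val = float(views[u_ref, x_e, y_e])
    points = [(u_ref, x_e)]
    x_prev = x_e
    for u in range(u_ref + 1, views.shape[0]):
        lo = max(0, x_prev - half_window)
        hi = min(views.shape[1], x_prev + half_window + 1)
        candidates = views[u, lo:hi, y_e].astype(float)
        x_prev = lo + int(np.argmin(np.abs(candidates - ref_val)))  # smallest difference wins
        points.append((u, x_prev))
    return points
```

Restricting each step to the window around the previous match is what replaces a global search and keeps the per-pixel cost roughly constant per viewpoint.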
In step 2, the boundary between the regions subjected to the region segmentation operation is the image edge component at the reference viewpoint obtained in step 1. The region division operation adopts a quadtree decomposition and aggregation mode, so that the operation speed and the robustness of a program can be effectively improved.
The criterion for quadtree decomposition and aggregation is whether the uniformity of the pixels in a region exceeds a critical value: for gray-level images, the uniformity measure is the maximum difference of pixel gray levels; for color images, it is the maximum difference between a pixel color and the average color.
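The two uniformity tests can be written down directly; the following Python sketch is illustrative only (the Euclidean distance used for the color case is an assumption, since the text only speaks of a difference from the average color).

```python
import numpy as np

def gray_uniform(block, T):
    """Gray-level image: the region is uniform if the maximum gray-level
    difference inside it does not exceed the critical value T."""
    return float(block.max()) - float(block.min()) <= T

def color_uniform(block, T):
    """Color image: the region is uniform if the maximum difference between
    any pixel color and the average color does not exceed T."""
    pixels = block.reshape(-1, block.shape[-1]).astype(float)
    mean_color = pixels.mean(axis=0)
    return np.linalg.norm(pixels - mean_color, axis=1).max() <= T
```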
In step 3, the spatial depth of each region boundary is corrected according to the spatial depth of the image edge component at the reference viewpoint obtained in step 1 and the region division condition obtained in step 2, so as to adjust the boundary spatial depth error caused by the spatial occlusion relation between the objects. And then according to the corrected space depth of the region boundary, performing space depth interpolation operation on the non-boundary region to obtain the space depth of the whole region.
The specific steps of step 3 are as follows:
Step 3-1, traverse the boundary of each obtained region, record the positions of the boundary pixels in order, and record the corresponding spatial depth values in order according to the result of step 1;
Step 3-2, perform a difference operation on the spatial depth values of the region boundary arranged in order;
Step 3-3, perform an integration operation on the result of the difference operation.
When the integration operation of step 3-3 is performed, any difference component exceeding the threshold is replaced by the threshold, so that abrupt changes of the spatial depth along the region boundary are eliminated by the operations of step 3.
The invention does not depend on either of the two light field sampling methods described in the background art. By means of tabu search, region segmentation, depth interpolation and related techniques, it avoids the problems faced by spatial-depth description-function optimization methods, such as the difficulty of choosing a description function and the uncertain operation time, and achieves fast and accurate extraction of the spatial depth.
Drawings
FIG. 1 is a flow chart diagram of a spatial depth extraction method based on light field information;
FIG. 2 is a diagram of light field information, wherein FIG. 2a and FIG. 2b are diagrams of light rays received from the same scene from different viewpoints, respectively;
fig. 3 is a schematic diagram of selecting a reference viewpoint, where fig. 3a is a schematic diagram of selecting a reference viewpoint when two straight lines are orthogonally bisected, fig. 3b is a schematic diagram of selecting a reference viewpoint when two straight lines are only orthogonally intersected, and fig. 3c is a schematic diagram of selecting a reference viewpoint when two straight lines are arbitrarily intersected;
fig. 4 is a diagram illustrating tabu search.
Detailed Description
The method of the present invention will now be described in detail with reference to specific embodiments and the accompanying drawings.
As shown in fig. 1, the spatial depth extraction based on the light field information includes the following steps:
(1) spatial depth computation of image edge components at reference viewpoints
A reference viewpoint of the four-dimensional light field data is selected. The information recorded by light field imaging is four-dimensional data, with the recording principle shown in fig. 2: for an image I(x, y), the four-dimensional light field data R(u, v, x, y) are recorded at different positions (x, y) of the image through different viewpoints (u, v), and R(u, v, x, y) is the light intensity of the point (u, v, x, y).
The method is not limited to a particular acquisition mode of the light field information; the spatial depth can be extracted from any light field data that can be represented as in fig. 2.
Suppose u_min ≤ u ≤ u_max, v_min ≤ v ≤ v_max, x_min ≤ x ≤ x_max, y_min ≤ y ≤ y_max. Within the defined range of viewpoints (u, v), a viewpoint (u0, v0) is selected as the reference viewpoint. The invention does not need K × K viewpoints forming an array, but only 2K-1 viewpoints: K viewpoints lie on one straight line, the other K viewpoints lie on another straight line, and the intersection of the two lines is the reference viewpoint, so only 2K-1 viewpoints are needed in total. The selection of the reference viewpoint is illustrated for the three cases of fig. 3.
A gradient operation G(u0, v0, x, y) is performed on the image at the reference viewpoint, computed over the four-neighborhood N(x, y) of each pixel (x, y), where
N(x, y) = {(x_n, y_n) : |x_n − x| + |y_n − y| = 1}.
The edge region E(u0, v0) is defined as
E(u0, v0) = {(u0, v0, x, y) : |G(u0, v0, x, y)| > T},
where T is the noise critical threshold, whose value can be set manually with 0 ≤ T ≤ 0.25·G_max, G_max being the maximum gray level.
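By way of illustration, a gradient-and-threshold pass of this kind could look as follows in Python; the specific gradient operator (sum of absolute differences to the four neighbours) is an assumption, since the exact formula of the original disclosure is not reproduced in this text.

```python
import numpy as np

def edge_region(img, T):
    """Return a boolean mask of the edge region E(u0, v0) of the reference image.

    img : 2-D gray-level image at the reference viewpoint
    T   : noise critical threshold, 0 <= T <= 0.25 * G_max
    """
    f = img.astype(float)
    g = np.zeros_like(f)
    # accumulate absolute differences to the four-neighbourhood of each pixel
    g[1:, :]  += np.abs(f[1:, :]  - f[:-1, :])
    g[:-1, :] += np.abs(f[:-1, :] - f[1:, :])
    g[:, 1:]  += np.abs(f[:, 1:]  - f[:, :-1])
    g[:, :-1] += np.abs(f[:, :-1] - f[:, 1:])
    return g > T
```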
In the edge region, for a fixed (v0, y_e) the points (u, x) form one group of characteristic lines passing through the point (u0, x_e); for a fixed (u0, x_e) the points (v, y) form another group of characteristic lines passing through the point (v0, y_e). Here (u0, v0, x_e, y_e) ∈ E(u0, v0) denotes a point of the edge region. For each such point, the two characteristic lines can be obtained iteratively, point by point, using the tabu search described below.
as shown in fig. 3, the tabu search process does not need to traverse the whole world, and after each point on the feature line is determined, a search range containing 3 to 5 pixels is locked at the next iteration position, so that the calculation time of the feature line can be greatly shortened.
After the characteristic lines are determined, their slopes S_ux(u0, v0, x_e, y_e) and S_vy(u0, v0, x_e, y_e) can be determined.
Finally, the edge-pixel spatial depth D(u0, v0, x_e, y_e) is computed from these slopes, where D0 is the normalized spatial depth.
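The exact depth formula of the original disclosure is not reproduced in this text; as an illustrative assumption, the slope of a characteristic line can be estimated by a least-squares fit through the matched positions, for example:

```python
import numpy as np

def feature_line_slope(points):
    """Least-squares slope of a characteristic line.

    points : list of (u, x) pairs, e.g. the output of match_feature_line() above.
    Returns dx/du, the image-position shift per unit viewpoint shift, which is
    monotonically related to the spatial depth of the edge pixel.
    """
    u = np.array([p[0] for p in points], dtype=float)
    x = np.array([p[1] for p in points], dtype=float)
    slope, _intercept = np.polyfit(u, x, deg=1)
    return slope
```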
(2) Performing region segmentation operation on image at reference viewpoint
The invention implements the region segmentation of the image at the reference viewpoint by quadtree decomposition and aggregation. First, quadtree decomposition is performed on the image at the reference viewpoint; the pixel uniformity within each decomposed region is required not to exceed T, the same threshold as in the first step.
Let R_i be the current rectangular region, and let
P(R_i) = 1 if max(R_i) − min(R_i) ≤ T, and P(R_i) = 0 otherwise,
where P(R_i) is the logical value that decides whether region R_i is split (no split if the value is 1, split if the value is 0), max(R_i) is the maximum pixel value of the current rectangular region, and min(R_i) is its minimum pixel value.
If P(R_i) = 0, R_i is divided into four mutually disjoint sub-regions whose union is R_i.
This is repeated until every sub-region can no longer be divided. Then adjacent decomposed regions R_i and R_j are tentatively aggregated: if max(R_i ∪ R_j) − min(R_i ∪ R_j) ≤ T, the two are merged.
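A compact sketch of the split phase and the pairwise merge test is given below, assuming a gray-level image and the max–min uniformity criterion; the minimum block size and the omission of the adjacency test are simplifications for illustration.

```python
import numpy as np

def quadtree_split(img, T, x0=0, y0=0, h=None, w=None, min_size=2):
    """Recursively split img into rectangles whose gray-level range is <= T.
    Returns a list of (x0, y0, h, w) leaf regions."""
    if h is None:
        h, w = img.shape
    block = img[x0:x0 + h, y0:y0 + w].astype(float)
    if block.max() - block.min() <= T or min(h, w) <= min_size:
        return [(x0, y0, h, w)]
    h2, w2 = h // 2, w // 2
    leaves = []
    for dx, dh in ((0, h2), (h2, h - h2)):
        for dy, dw in ((0, w2), (w2, w - w2)):
            leaves += quadtree_split(img, T, x0 + dx, y0 + dy, dh, dw, min_size)
    return leaves

def can_merge(img, T, r1, r2):
    """Merge test for two rectangles: the union must still satisfy
    max(Ri U Rj) - min(Ri U Rj) <= T (adjacency test omitted for brevity)."""
    b1 = img[r1[0]:r1[0] + r1[2], r1[1]:r1[1] + r1[3]].astype(float)
    b2 = img[r2[0]:r2[0] + r2[2], r2[1]:r2[1] + r2[3]].astype(float)
    return max(b1.max(), b2.max()) - min(b1.min(), b2.min()) <= T
```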
(3) Depth filling of pixels in central part of segmented region
(3-1) First, the spatial depth of the edge components of each segmented region is corrected.
For any one of the merged segmented regions R_i, the edge depth is retrieved in the counter-clockwise direction to obtain an edge position array C_i and an edge depth series D_i, where i = 1, 2, .... The difference d_i of the series D_i is then computed.
The corrected difference d_i' is obtained by replacing any difference component whose magnitude exceeds the spatial depth variation threshold ΔD_max by that threshold. Taking the minimum value of D_i as the reference value, the corrected spatial depth D_i' is obtained by integrating the corrected differences d_i' from this minimum value.
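The differentiate–clamp–reintegrate correction can be sketched as follows; anchoring the re-integrated series at the minimum of the original series is an assumption drawn from the text, while the clamping rule follows claim 9.

```python
import numpy as np

def correct_boundary_depth(D, delta_max):
    """Correct the ordered boundary depth series D of one region.

    D         : 1-D array of spatial depths along the region boundary (counter-clockwise order)
    delta_max : spatial-depth variation threshold (delta D_max)
    """
    D = np.asarray(D, dtype=float)
    d = np.diff(D)                                   # step 3-2: difference operation
    d = np.clip(d, -delta_max, delta_max)            # components beyond the threshold are held at it
    D_corr = np.concatenate(([0.0], np.cumsum(d)))   # step 3-3: integration operation
    D_corr += D.min() - D_corr.min()                 # anchor to the minimum of the original series
    return D_corr
```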
(3-2) The spatial depth of the non-edge part of each segmented region is then filled in according to the corrected spatial depth of the region edge.
Filling is first done in the x direction. For any point (x, y) in a segmented region, the left edge point (x_left, y) and the right edge point (x_right, y) with the same y coordinate are found, and the spatial depth at (x, y) is linearly interpolated from the corrected spatial depths at the region edge. The interpolated x-direction spatial depth is
D_x(u0, v0, x, y) = D'(u0, v0, x_left, y) + (x − x_left) / (x_right − x_left) · (D'(u0, v0, x_right, y) − D'(u0, v0, x_left, y)),
where D'(u0, v0, x_right, y) is the corrected spatial depth at the right boundary of the region corresponding to pixel (u0, v0, x, y), and D'(u0, v0, x_left, y) is the corrected spatial depth at the corresponding left boundary.
For data in the same row in the region, only the left and right edges need to be retrieved once, so that the time complexity of calculation can be greatly reduced.
Similarly, the y-direction spatial depth obtained after interpolation in the y direction is
D_y(u0, v0, x, y) = D'(u0, v0, x, y_up) + (y − y_up) / (y_down − y_up) · (D'(u0, v0, x, y_down) − D'(u0, v0, x, y_up)),
where D'(u0, v0, x, y_up) is the corrected spatial depth at the upper boundary of the region corresponding to pixel (u0, v0, x, y), and D'(u0, v0, x, y_down) is the corrected spatial depth at the corresponding lower boundary.
The final interpolation result D(u0, v0, x, y) is the average of D_x(u0, v0, x, y) and D_y(u0, v0, x, y), i.e.
D(u0, v0, x, y) = (D_x(u0, v0, x, y) + D_y(u0, v0, x, y)) / 2.
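An illustrative sketch of the row-wise and column-wise linear interpolation and the final averaging is shown below, representing a segmented region by a boolean mask; the mask representation and the use of the first and last masked pixel of each row or column as the region edge are assumptions.

```python
import numpy as np

def fill_region_depth(depth, mask):
    """Fill the interior of one segmented region by linear interpolation.

    depth : 2-D array holding the corrected spatial depths on the region boundary
            (interior values are overwritten)
    mask  : boolean 2-D array, True inside the region
    Returns the average of the x- and y-direction interpolation results.
    """
    Dx = depth.astype(float).copy()
    Dy = depth.astype(float).copy()
    # x direction: for each row, the left and right edges are retrieved only once
    for y in range(depth.shape[0]):
        xs = np.flatnonzero(mask[y])
        if xs.size < 2:
            continue
        x_left, x_right = xs[0], xs[-1]
        t = (xs - x_left) / max(x_right - x_left, 1)
        Dx[y, xs] = Dx[y, x_left] + t * (Dx[y, x_right] - Dx[y, x_left])
    # y direction: the same operation along columns
    for x in range(depth.shape[1]):
        ys = np.flatnonzero(mask[:, x])
        if ys.size < 2:
            continue
        y_up, y_down = ys[0], ys[-1]
        t = (ys - y_up) / max(y_down - y_up, 1)
        Dy[ys, x] = Dy[y_up, x] + t * (Dy[y_down, x] - Dy[y_up, x])
    return np.where(mask, 0.5 * (Dx + Dy), depth)
```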
The embodiment of the invention has wide applicability: it does not depend on a specific type of light field acquisition device, and the spatial depth can be extracted by the invention whenever light field data conforming to the description of fig. 2 are acquired.
The invention has the following advantages:
(1) The invention does not need to designate a central viewpoint, whereas the prior art mostly depends on a central viewpoint to extract the spatial depth. Unlike the prior art, the spatial depth is extracted with respect to a reference viewpoint, and the reference viewpoint can be located at the center or at one side.
(2) The operation time of the invention is short. Assuming that the data size of the image at the reference viewpoint is N and the total number of viewpoints is a constant, the time complexities of steps 1, 2 and 3 are O(N), O(N log N) and O(N), respectively, so the total time complexity is T(N) = O(2N + N log N) = O(N log N). The invention therefore has a short operation time and practical operability.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only the most preferred embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions, equivalents, etc. made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (9)

1. A spatial depth extraction method based on light field information is characterized by comprising the following steps:
step 1, selecting a reference viewpoint from four-dimensional light field data, and calculating the spatial depth of image edge components at the reference viewpoint;
step 2, performing region segmentation operation on the image at the reference viewpoint, and dividing the image into a plurality of regions according to color or brightness uniformity;
step 3, correcting the space depth of the edge of each divided region, and performing interpolation operation on the space depth of the central part of the region according to the corrected space depth;
the specific steps of step 1 are as follows:
step 1-1, acquiring light field information, decomposing the four-dimensional light field information into images of all viewpoints, and performing gradient operation on the images at the reference viewpoints so as to extract edge components of the images at the reference viewpoints;
step 1-2, calculating the slope of the spatial depth characteristic line of the image edge component at the reference viewpoint by combining the imaging information of other viewpoints in the same row and column as the reference viewpoint;
and 1-3, calculating the spatial depth of the image edge component at the reference viewpoint according to the slope of the spatial depth characteristic line.
2. The light-field-information-based spatial depth extraction method of claim 1, wherein 2K-1 viewpoints are used in total: K viewpoints are located on one straight line, the other K viewpoints are located on another straight line, and the intersection point of the two straight lines is the reference viewpoint.
3. The spatial depth extraction method based on light field information as claimed in claim 1, wherein in step 1-2, for the edge position with gradient value greater than the noise critical threshold, the corresponding original pixel and adjacent pixel are searched and matched according to the view point; finding a group of matched positions in X and Y viewpoint directions to form a characteristic line of the spatial depth information;
t is more than or equal to 0 and less than or equal to 0.25G in noise critical threshold selectionmax,GmaxIs the maximum gray level.
4. The method for extracting spatial depth based on light field information according to claim 1, wherein in step 1-2, the method for calculating the spatial depth feature line first adopts tabu search to narrow the search range of the points on the feature line, finds the point with the smallest difference from the determined adjacent point on the feature line in the search range, adds the point to the feature line point column, and then further determines the search range of the next step through a new point column set.
5. The light-field-information-based spatial depth extraction method according to claim 1, wherein in step 2, the boundary between the regions subjected to the region segmentation operation is an image edge component at the reference viewpoint obtained in step 1.
6. The spatial depth extraction method based on light field information as claimed in claim 1, wherein in step 2, the region segmentation operation adopts quadtree decomposition and aggregation; the criterion for quadtree decomposition and aggregation is whether the uniformity of the pixels in the region exceeds a critical value; for gray-level images, the uniformity is the maximum difference of pixel gray levels; for color images, the uniformity is the maximum difference between a pixel color and the average color.
7. The light-field-information-based spatial depth extraction method according to claim 1, wherein in step 3, the spatial depth of each region boundary is corrected based on the spatial depth of the image edge component at the reference viewpoint obtained in step 1 and the region segmentation obtained in step 2;
and according to the corrected space depth of the region boundary, performing space depth interpolation operation on the non-boundary region to obtain the space depth of the whole region.
8. The spatial depth extraction method based on light field information according to claim 1, wherein the specific steps of step 3 are as follows:
step 3-1, traversing the boundary of each obtained area, sequentially recording the positions of boundary pixels, and sequentially recording corresponding spatial depth values according to the result of the step 1;
step 3-2, performing differential operation on the spatial depth values of the region boundaries arranged in sequence;
and 3-3, performing integral operation on the result of the differential operation.
9. The light-field-information-based spatial depth extraction method according to claim 8, wherein in step 3-3, a difference component exceeding a threshold is calculated as a threshold when performing the integration operation.
CN201610578644.8A 2016-07-18 2016-07-18 Spatial depth extraction method based on light field information Active CN106257537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610578644.8A CN106257537B (en) 2016-07-18 2016-07-18 Spatial depth extraction method based on light field information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610578644.8A CN106257537B (en) 2016-07-18 2016-07-18 Spatial depth extraction method based on light field information

Publications (2)

Publication Number Publication Date
CN106257537A CN106257537A (en) 2016-12-28
CN106257537B true CN106257537B (en) 2019-04-09

Family

ID=57713781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610578644.8A Active CN106257537B (en) 2016-07-18 2016-07-18 Spatial depth extraction method based on light field information

Country Status (1)

Country Link
CN (1) CN106257537B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991637B (en) * 2017-02-28 2019-12-17 浙江大学 Method for realizing multi-resolution light field decomposition by utilizing GPU (graphics processing Unit) parallel computation
CN107135388A (en) * 2017-05-27 2017-09-05 东南大学 A kind of depth extraction method of light field image
CN107330930B (en) * 2017-06-27 2020-11-03 晋江市潮波光电科技有限公司 Three-dimensional image depth information extraction method
CN108846473B (en) * 2018-04-10 2022-03-01 杭州电子科技大学 Light field depth estimation method based on direction and scale self-adaptive convolutional neural network
CN109360235B (en) * 2018-09-29 2022-07-19 中国航空工业集团公司上海航空测控技术研究所 Hybrid depth estimation method based on light field data
CN110662014B (en) * 2019-09-25 2020-10-09 江南大学 Light field camera four-dimensional data large depth-of-field three-dimensional display method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851089A (en) * 2015-04-28 2015-08-19 中国人民解放军国防科学技术大学 Static scene foreground segmentation method and device based on three-dimensional light field
CN104966289A (en) * 2015-06-12 2015-10-07 北京工业大学 Depth estimation method based on 4D light field
CN105023275A (en) * 2015-07-14 2015-11-04 清华大学 Super-resolution light field acquisition device and three-dimensional reconstruction method thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851089A (en) * 2015-04-28 2015-08-19 中国人民解放军国防科学技术大学 Static scene foreground segmentation method and device based on three-dimensional light field
CN104966289A (en) * 2015-06-12 2015-10-07 北京工业大学 Depth estimation method based on 4D light field
CN105023275A (en) * 2015-07-14 2015-11-04 清华大学 Super-resolution light field acquisition device and three-dimensional reconstruction method thereof

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Accurate Depth Map Estimation from a Lenslet Light Field Camera; Hae-Gon Jeon et al.; 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2015-10-15; pp. 1547-1555
Variational Light Field Analysis for Disparity Estimation and Super-Resolution; Sven Wanner; IEEE Transactions on Pattern Analysis and Machine Intelligence; 2014-03-31; vol. 36, no. 3; pp. 606-619
Research on depth information acquisition technology based on a light field camera; Zhao Xingrong; China Master's Theses Full-text Database, Information Science and Technology; 2014-08-15 (no. 8); I138-1456

Also Published As

Publication number Publication date
CN106257537A (en) 2016-12-28

Similar Documents

Publication Publication Date Title
CN106257537B (en) Spatial depth extraction method based on light field information
Kim et al. Scene reconstruction from high spatio-angular resolution light fields.
Jeon et al. Accurate depth map estimation from a lenslet light field camera
US10699476B2 (en) Generating a merged, fused three-dimensional point cloud based on captured images of a scene
CN107862698B (en) Light field foreground segmentation method and device based on K mean cluster
CN108074218B (en) Image super-resolution method and device based on light field acquisition device
US9497437B2 (en) Digital refocusing method
US9092890B2 (en) Occlusion-aware reconstruction of three-dimensional scenes from light field images
US10136116B2 (en) Object segmentation from light field data
CN105023275B (en) Super-resolution optical field acquisition device and its three-dimensional rebuilding method
KR100953076B1 (en) Multi-view matching method and device using foreground/background separation
Yang et al. All-in-focus synthetic aperture imaging
US9569853B2 (en) Processing of light fields by transforming to scale and depth space
US9818199B2 (en) Method and apparatus for estimating depth of focused plenoptic data
Li et al. Epi-based oriented relation networks for light field depth estimation
Liu et al. High quality depth map estimation of object surface from light-field images
Vu et al. Efficient hybrid tree-based stereo matching with applications to postcapture image refocusing
EP3216006B1 (en) An image processing apparatus and method
CN110443228B (en) Pedestrian matching method and device, electronic equipment and storage medium
Zhao et al. Double propagation stereo matching for urban 3-d reconstruction from satellite imagery
CN107578419B (en) Stereo image segmentation method based on consistency contour extraction
CN111630569B (en) Binocular matching method, visual imaging device and device with storage function
CN115239886A (en) Remote sensing UAV-MVS image point cloud data processing method, device, equipment and medium
Alazawi et al. Adaptive depth map estimation from 3D integral image
Tran et al. Variational disparity estimation framework for plenoptic images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant