CN103793911A - Scene depth obtaining method based on integration image technology - Google Patents

Scene depth obtaining method based on integration image technology

Info

Publication number
CN103793911A
CN103793911A (application CN201410035541.8A)
Authority
CN
China
Prior art keywords
depth
view
image
obtaining
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410035541.8A
Other languages
Chinese (zh)
Inventor
伍春洪
杨岸夫
张默
张怡
李成前
柳强
尤佳
杨淑华
陈静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology Beijing USTB
Original Assignee
University of Science and Technology Beijing USTB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology Beijing USTB filed Critical University of Science and Technology Beijing USTB
Priority to CN201410035541.8A priority Critical patent/CN103793911A/en
Publication of CN103793911A publication Critical patent/CN103793911A/en
Pending legal-status Critical Current

Landscapes

  • Studio Devices (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a scene depth acquisition method based on integral imaging technology. The method comprises the following steps: (1) obtaining an integral image; (2) extracting views; (3) carrying out disparity analysis and depth calculation on the views; and (4) carrying out multi-baseline matching and calculation. The method is a passive-vision depth acquisition method: it needs no special light source and can obtain depth information under natural light. Compared with traditional binocular machine vision equipment, the system is compact and small in structure, convenient to image, and requires no calibration or rectification between different cameras. Its depth of field is larger than that of a traditional camera with the same aperture, and with the auxiliary adjustment of the main lens a better imaging effect can be achieved for far-field scenes. A multi-baseline algorithm is adopted, which further improves the reliability and stability of the obtained depth information.

Description

A scene depth acquisition method based on integral imaging technology
Technical field
The present invention relates to a scene depth acquisition method based on integral imaging technology, and belongs to the vision measurement branch of the field of optics.
Background technology
Acquisition of three-dimensional scene depth information has wide applications in industrial measurement, security monitoring, pattern recognition, 3D display and other fields. Non-contact acquisition of scene depth information can be divided into two large classes: passive ranging and active ranging. Active ranging means that the vision system actively emits energy toward the scene and computes the depth information of the scene from the received reflected energy; ultrasonic ranging, radar imaging, structured light, Moiré techniques and holographic interferometry all belong to active ranging. Active ranging generally has the advantages of high measurement accuracy, strong anti-interference capability and good real-time performance, but because it needs special light sources and a controlled environment, its range of application is restricted. Passive ranging methods, by contrast, can complete depth acquisition under natural illumination without any special light source, and therefore find wide application.
Binocular stereo vision is the most common passive ranging method. Its basic principle is similar to that of human-eye ranging. In the system the optical axes of the two cameras are parallel, as shown in Fig. 1: Z_l and Z_r denote the optical axes of the left and right cameras respectively, f denotes the camera focal length, and b, the distance between the origins of the two image coordinate systems, is called the baseline. D is the distance from an object point P to the imaging plane; P_l and P_r are the imaging points of P in the left and right images, with x coordinates x_l and x_r respectively, and d = x_r − x_l is the disparity of P between the two images. From the triangle relation,

D = bf / d    (Formula 1)

Here b and f are obtained by camera calibration, and the disparity is obtained by stereo matching. Therefore, once the camera parameters are known, the distance of an object point from the camera plane can be determined by computing its disparity between the left and right images, realizing the measurement of target range.
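As an illustration with assumed values (not taken from the patent): with a baseline b = 60 mm, focal length f = 25 mm and a measured disparity d = 0.5 mm, Formula 1 gives D = (60 mm × 25 mm) / 0.5 mm = 3 m.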
Formula 1 holds when the optical axes of the two cameras are parallel and the focal lengths of the two cameras are identical. When the optical axes are not parallel or the focal lengths of the two cameras differ, the left and right views must first undergo a spatial geometric transformation according to the camera parameters. In practice, therefore, the parameters of both cameras must generally be known and the left and right image pairs must be rectified accordingly, which is inconvenient in use. Even with calibration completed, the accuracy of the depth information obtained by binocular stereo vision depends on the baseline length and on the accuracy and reliability of the disparity obtained by stereo matching. Its quantization error is

ΔD = D²·Δd / (bf)

where Δd is the disparity quantization step, so measurement accuracy improves as the baseline increases and degrades as the object depth increases.
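For a sense of scale, under assumed values b = 50 mm, f = 35 mm and a one-pixel disparity step Δd = 7 μm, a point at D = 2 m carries a quantization error of about ΔD = (2 m)² × 7×10⁻⁶ m / (0.05 m × 0.035 m) ≈ 16 mm, and four times that at 4 m.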
Integral imaging (Integral Imaging) originates from the work of Lippmann in 1908. The technique uses a thin sheet composed of a microlens array to record a three-dimensional spatial scene; because each microlens records part of the object space from a different direction, the parallax information of any point in space is spread over the whole recording film through the elemental images. When the developed film is placed behind a microlens array sheet with the same parameters, the original three-dimensional scene can be reproduced.
Limited by theoretical research and by optoelectronic fabrication processes, early integral imaging technology received little attention. In 1994, Davies, McCormick and colleagues in Britain designed an integral imaging system built from a two-stage transmitting lens screen ["Design and analysis of an image transfer system using microlens arrays", Optical Engineering, 33(11)]. This system overcame problems of the integral imaging technique proposed by Lippmann, such as the depth inversion of the reproduced three-dimensional scene relative to the original and crosstalk between elemental images. In 1998, Arai, Okano and colleagues at the Japanese broadcaster NHK further replaced the transmission lens array with a multimode graded-index optical fiber array, successfully realized real-time display of three-dimensional color images, and verified the feasibility of three-dimensional television based on integral imaging ["Gradient-index lens-array method based on real-time integral photography for three-dimensional images", Appl. Opt. 37(11)]. Since then, integral imaging technology has attracted attention in the field of three-dimensional display.
In 1991, Adelson et al. gave a formal description of the light in space and proposed the theory of the "plenoptic function" ["The Plenoptic Function and the Elements of Early Vision", Computational Models of Visual Processing, MIT Press, Cambridge, 1991]. The three-dimensional properties of an object can be acquired and expressed through the distribution and variation of light expressed by the plenoptic function. The plenoptic function can be regarded as an intermediary connecting the "light field" distribution of three-dimensional space with two-dimensional images; each traditional image acquisition means just records one specific dimension of the light field, i.e. a subset or slice of the plenoptic function.
Whereas a traditional two-dimensional image is the information formed by perspective projection of the spatial scene toward one particular viewpoint, integral imaging records the light field information at the position of the microlens array and is thus a form of light field imaging. Specifically, integral imaging records not only the light intensity at the microlens array position but also the directional information at the corresponding positions. The hand-held light field camera of Ng [Ng R et al., "Light field photography with a hand-held plenoptic camera", Tech Rep CSTR: Stanford Computer Science Tech Report CSTR, 2005] and the light field microscope of Levoy ["Light field microscopy", SIGGRAPH 2006: 924-934] are both based on integral imaging technology. The light field camera of Ng mainly uses digital refocusing to solve the image defocus problem, enabling a camera to shoot first and focus afterwards. The light field microscope of Levoy uses a single exposure to obtain multiple viewing angles and multiple sets of image focal planes, thereby obtaining micrographs with a large depth of field.
The present invention proposes a method that obtains scene depth information, on the basis of a light field camera built with integral imaging technology, through view extraction and a multi-baseline stereo matching process. The method may first be applied in medicine, in imaging systems that require small lenses, such as endoscopes; the added depth information not only helps in observing internal structures but also has potential applications in further image analysis, understanding and pattern recognition. The depth of field of this camera is larger than that of a traditional camera with the same aperture, and with the auxiliary adjustment of the main lens it can also image far-field scenes well, so it can be used in fields such as traffic control, intelligent monitoring, remote sensing and measurement, and can also be applied in portable consumer products such as mobile phones and digital cameras.
Summary of the invention
Although integral imaging records the spatial scene in a manner similar to multi-view vision, unlike the lens arrays adopted in traditional multi-view vision, what the integral imaging recording process uses is a large number of very small microlenses. Consequently the resolution of the elemental image under each lens is very low, and attempting to obtain depth information directly from the parallax between the elemental images of individual lenses with traditional multi-view methods becomes almost impossible.
The present invention proposes a method that, taking a light field camera as its basis, obtains the depth information of a scene through view extraction and multi-baseline matching. The method needs no special light source and no calibration when acquiring depth information; under the same conditions, the multi-baseline stereo matching process can effectively improve the accuracy and reliability of matching during disparity acquisition and remove the influence of periodic textures in the scene on matching. A depth acquisition device based on this method also has the advantages of a simple and compact structure, convenient imaging, flexible use, and a large depth of field under equal imaging conditions.
The present invention comprises several parts: acquisition of the integral image, "view" extraction, disparity analysis and depth calculation, and multi-baseline matching and calculation; the block diagram of the scheme is shown in Fig. 2.
1. Acquisition of the integral image
The integral image can be obtained with a light field camera device of the principle shown in Fig. 3, composed of an optical main lens group (part 2 in Fig. 3), a microlens array (part 3 in Fig. 3) and a photosensitive element (part 4 in Fig. 3).
The main difference between this light field camera and a traditional camera is that a microlens array (part 3 in Fig. 3) is added in front of the photosensitive element. The main lens group performs the optical transfer in the imaging system; its function and working mode are identical to those of a traditional camera, and the focal length of the main lens group is adjustable. The microlens array can be arranged at the imaging surface of the main lens group, and the plane of the photosensitive element is located on the focal plane of the microlens array.
2. " view " abstracting method.
The point of same position under different lenticules is extracted and forms a width width new " view ".
If there are in the horizontal direction 4 pixels in the integrated image being obtained by digital camera below single lenticule, in the time only considering the extracting of horizontal direction view, can extract 4 width views, Fig. 4.
" view " forming by this methods of sampling, comprises the record along the parallel projection of a certain specific direction to original spatial scene.One width " view " is similar to the image of taking from some special angles with traditional camera.Different " view ", the image of taking corresponding to different angles.
3. Disparity analysis and depth calculation
Disparity is the positional difference between the image points of the same object point in two different "views". The corresponding points can be found by search in the views using various matching methods.
The object depth can be calculated from the inter-view disparity and the camera parameters:

D = αφf·d / b    (Formula 2)

In Formula 2, α is the magnification coefficient of the main lens, φ and f are respectively the aperture and the focal length of a microlens in the microlens array, b is the sampling interval between the different views, d denotes the disparity of the object point between the different views, and D is the calculated depth of the object point.

According to Formula 2, if the parameters of the microlens array and the sampling interval of the corresponding two views are known, the depth of any point in space can be obtained by solving for its disparity between the corresponding views.
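As an illustration — assuming the form of Formula 2 reconstructed above, with hypothetical parameter names — this computation is a one-line helper:

```python
def depth_from_disparity(d: float, alpha: float, phi: float, f: float, b: float) -> float:
    """Formula 2: with parallel-projection views, depth is linear in the disparity d."""
    return alpha * phi * f * d / b
```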
4. Multi-baseline matching and calculation
In matching, each feature point in one image should correspond to one and only one feature point in the other image; this property is called the uniqueness constraint. In practice, because most feature points are not very distinctive, and especially when repetitive texture appears, matching ambiguity often arises: a single feature point in one image corresponds to several candidate points in the other image. In such cases, owing to recording noise and other factors, the true corresponding point may be drowned out by false corresponding points.
Because the integral image simultaneously contains, within a single exposure, the information of "views" from multiple different directions, "view pairs" with several baselines of different lengths can be adopted to eliminate the ambiguity of corresponding points in matching. The essence of the multi-baseline matching method is to accumulate the depth correlations computed from several views with different baseline lengths and then make the matching judgment, as shown in Fig. 5. The multi-baseline method can effectively reduce false matches caused by periodic texture in the scene.
Brief description of the drawings
Fig. 1 Principle diagram of binocular vision ranging
Fig. 2 Block diagram of the overall scheme
Fig. 3 Schematic diagram of the light field camera based on a microlens array, where: 1, object; 2, main lens group; 3, microlens array; 4, photosensitive element
Fig. 4 Schematic diagram of extracting views from the integral image by sampling
Fig. 5 Principle diagram of multi-baseline matching, where (a)-(g) are the matching results of view pairs with different baseline lengths, and (h) is the result of multi-baseline matching
Embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in more detail below with reference to specific embodiments and accompanying drawings 1-5.
1. Insert a microlens array at the imaging-surface position of a traditional camera, and move the new imaging surface (the photosensitive element) backward onto the focal plane of the microlens array. The microlens array may also be replaced by a lenticular (cylindrical lens) grating sheet.
2. Form the integral image and extract "views" from the record by sampling. The pixels at the same position under different microlenses are extracted to form one "view". If the whole image is 6000*4000 pixels with 10*10 pixels under each microlens, 100 "views" can be extracted, each of size 600*400. If a lenticular grating sheet is used instead, the "views" need only be sampled in the horizontal direction by the method above: if the whole image is 6000*4000 pixels with 10 pixels per lenticule in the horizontal direction, 10 "views" can be extracted, each of size 600*4000. A minimal sketch of this sampling follows.
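The following numpy sketch illustrates this sampling; the array shapes follow the numbers above, while the function name and the placeholder image are assumptions for illustration, not part of the patent.

```python
import numpy as np

def extract_views(integral_image: np.ndarray, ny: int, nx: int) -> np.ndarray:
    """Extract ny*nx 'views' from an integral image in which each
    microlens covers an ny x nx block of pixels.

    View (i, j) collects the pixel at position (i, j) under every
    microlens, i.e. a parallel projection of the scene along one direction.
    """
    H, W = integral_image.shape[:2]
    assert H % ny == 0 and W % nx == 0, "image must contain whole lenslet blocks"
    # Each view has shape (H // ny, W // nx): one pixel per microlens.
    return np.stack(
        [[integral_image[i::ny, j::nx] for j in range(nx)] for i in range(ny)]
    )

# Illustrative usage with the numbers from the embodiment: a 4000 x 6000
# integral image with 10 x 10 pixels per microlens yields 100 views,
# each of 400 x 600 pixels.
img = np.zeros((4000, 6000), dtype=np.uint8)  # placeholder for a recorded integral image
views = extract_views(img, ny=10, nx=10)
print(views.shape)  # (10, 10, 400, 600)
```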
3. For a specified object point, find the disparity between two "views" by search and estimate the depth.
The cost function for the matching search can take many forms, such as SSD (sum of squared differences), SAD (sum of absolute differences) and CC (cross-correlation) functions:

SSD: score(d) = SSD(d) = Σ_{x,y∈w} [I₁(x,y) − I₂(x+d,y)]²    (Formula 3)

SAD: score(d) = SAD(d) = Σ_{x,y∈w} |I₁(x,y) − I₂(x+d,y)|    (Formula 4)

CC: score(d) = −CC(d) = −Σ_{x,y∈w} [I₁(x,y)·I₂(x+d,y)] / √(Σ_{x,y∈w} I₁²(x,y) · Σ_{x,y∈w} I₂²(x+d,y))    (Formula 5)

I₁ and I₂ denote the left and right views respectively; (x, y) are the coordinates of the point to be matched; I₁(x,y) and I₂(x,y) are the gray values of the corresponding points; and w is a small window centered on the point to be matched. The choice of window size depends on the image size and the texture complexity of the image content; 3*3, 5*5, 7*7 and 9*9 are common choices. d is a candidate disparity value of the point to be matched.
The estimated disparity is the position where the score function attains its minimum:

d* = argmin_{d∈R} score(d)    (Formula 6)

R is the search range of the matching. It is generally determined by the possible depth range of the object point, and at most does not exceed the image. When a lenticular grating is selected, the search range is confined to the horizontal direction; if a microlens array is used, the same search is also performed in the vertical direction.
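A minimal runnable sketch of this SSD search (Formulas 3 and 6), assuming two grayscale views stored as numpy arrays; the window size, search range and test data are illustrative assumptions.

```python
import numpy as np

def ssd_disparity(view1: np.ndarray, view2: np.ndarray, x: int, y: int,
                  half_w: int, search: range) -> int:
    """Estimate the disparity of point (x, y) between two views by
    minimizing the SSD cost (Formula 3) over the search range R (Formula 6)."""
    patch1 = view1[y - half_w:y + half_w + 1,
                   x - half_w:x + half_w + 1].astype(np.float64)
    best_d, best_score = search.start, np.inf
    for d in search:
        patch2 = view2[y - half_w:y + half_w + 1,
                       x + d - half_w:x + d + half_w + 1].astype(np.float64)
        score = np.sum((patch1 - patch2) ** 2)  # SSD over the window w
        if score < best_score:
            best_d, best_score = d, score
    return best_d

# Illustrative check: view2 is view1 shifted right by 3 pixels, so the
# recovered disparity at an interior point should be 3.
rng = np.random.default_rng(0)
v1 = rng.random((400, 600))
v2 = np.roll(v1, 3, axis=1)
print(ssd_disparity(v1, v2, x=300, y=200, half_w=2, search=range(0, 21)))  # 3
```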
The object depth can then be estimated by Formula 2 of the scheme. Here φ and f are fixed values, α is related to the imaging magnification of the main lens, and b is determined by the sampling positions of the two views.
4. Depth calculation by the multi-baseline matching method
A more reliable object depth can be calculated by the multi-baseline matching method. Substituting the relation between depth D and disparity d from Formula 2 into the multi-baseline accumulation function yields a score function for multi-baseline depth calculation suited to the integral image:

d_i(D) = b_i·D / (αφf)    (Formula 7)

SSD_i(D) = Σ_{x,y∈w} [I₁(x,y) − I_i(x + d_i(D), y)]²    (Formula 8)

score(D) = Σ_{i=2}^{N} SSD_i(D)    (Formula 9)

N is the number of extracted "views". The finally obtained measured depth value is the position D* at which the score function attains its minimum:

D* = argmin_{D∈R′} score(D)    (Formula 10)
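The sketch below illustrates this accumulation (Formulas 7-10) under the same assumptions as the previous snippets; the function signature and depth grid are illustrative, and the disparity model follows the reconstructed Formula 2.

```python
import numpy as np

def multibaseline_depth(views, baselines, x, y, half_w,
                        depth_grid, alpha, phi, f):
    """Estimate the depth at (x, y) by accumulating SSD costs over several
    view pairs with different baseline lengths (Formulas 7-10).

    views[0] is the reference view; views[i] (i >= 1) is separated from it
    by the sampling interval baselines[i]."""
    ref = views[0][y - half_w:y + half_w + 1,
                   x - half_w:x + half_w + 1].astype(np.float64)
    scores = np.zeros(len(depth_grid))
    for k, D in enumerate(depth_grid):
        for view, b in zip(views[1:], baselines[1:]):
            d = int(round(b * D / (alpha * phi * f)))  # Formula 7: disparity implied by depth D
            x2 = x + d
            if x2 - half_w < 0 or x2 + half_w + 1 > view.shape[1]:
                scores[k] = np.inf  # hypothesis leaves the image; discard it
                break
            patch = view[y - half_w:y + half_w + 1,
                         x2 - half_w:x2 + half_w + 1].astype(np.float64)
            scores[k] += np.sum((ref - patch) ** 2)  # Formulas 8 and 9: accumulate SSD
    return depth_grid[int(np.argmin(scores))]        # Formula 10: minimizing depth D*
```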
Advantages and positive effects:
The invention discloses a method that uses a light field camera built with integral imaging technology to obtain scene depth information.
Main advantages:
This method is a passive-vision depth acquisition method. It needs neither special light sources such as infrared, ultrasonic or structured light, nor a controlled environment.
Compared with traditional binocular machine vision, the system is compact and small in structure, imaging is convenient, and no calibration or rectification between different cameras is required.
The depth of field of this camera is larger than that of a traditional camera with the same lens aperture, and with the assistance of main-lens focus adjustment it also achieves a better imaging effect for far-field scenes.
The "views" obtained by this particular sampling mode correspond to parallel-projection records from different directions, so the quantization error of their depth calculation is a fixed value independent of the depth (from d = bD/(αφf), a disparity uncertainty Δd maps to ΔD = αφf·Δd/b, which does not depend on D). Unlike conventional stereo matching, where the quantization error is proportional to the square of the depth, this can extend the measurable depth range to a certain extent.
Adopting multiple baselines can further improve the reliability of stereo matching, eliminate the influence of periodic textures in the scene on matching, and improve the reliability and stability of the obtained depth information.
Scope of application:
This method may first be applied in medicine, in imaging systems that require small lenses, such as endoscopes. The added depth information not only aids intuitive observation of internal structures but also has potential applications in further image analysis, understanding and pattern recognition. It can also be applied in portable consumer products such as mobile phones and digital cameras, and in wide-ranging applications such as traffic control, intelligent monitoring and remote sensing measurement.
The above is only a preferred embodiment of the present invention. It should be pointed out that those of ordinary skill in the art may also make several improvements and modifications without departing from the technical principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (1)

1. A scene depth acquisition method based on integral imaging technology, the method comprising the following steps:
(1) obtaining an integral image
An integral image is obtained with a light field camera device comprising, arranged in order from the object side to the image side, an optical main lens group, a microlens array and a photosensitive element;
(2) extracting views
The pixels at the same position under different microlenses are extracted to form several views;
(3) performing disparity analysis and depth calculation on the views
The positional difference between the image points of the same object point in two different views is found; this difference is the disparity, and after analyzing it the depth is determined by the following formula:

D = αφf·d / b

where α is the magnification coefficient of the main lens, φ and f are respectively the aperture and the focal length of a microlens in the microlens array, b is the sampling interval between the different views, d denotes the disparity of the object point between the different views, and D is the calculated depth of the object point;
(4) multi-baseline matching and calculation
The depth correlations computed from several views with different baseline lengths are accumulated and the matching judgment is then made; the concrete steps are: the relation between depth D and disparity d in the above formula is substituted into the multi-baseline accumulation function to obtain a score function for multi-baseline depth calculation suited to the integral image, and the finally obtained measured depth value is the position at which the score function attains its minimum.
CN201410035541.8A 2014-01-24 2014-01-24 Scene depth obtaining method based on integration image technology Pending CN103793911A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410035541.8A CN103793911A (en) 2014-01-24 2014-01-24 Scene depth obtaining method based on integration image technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410035541.8A CN103793911A (en) 2014-01-24 2014-01-24 Scene depth obtaining method based on integration image technology

Publications (1)

Publication Number Publication Date
CN103793911A true CN103793911A (en) 2014-05-14

Family

ID=50669534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410035541.8A Pending CN103793911A (en) 2014-01-24 2014-01-24 Scene depth obtaining method based on integration image technology

Country Status (1)

Country Link
CN (1) CN103793911A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006093365A1 (en) * 2005-03-02 2006-09-08 Seoul National University Industry Foundation Three-dimensional/ two-dimensional convertible display device
US20080309669A1 (en) * 2007-06-18 2008-12-18 Samsung Electronics Co., Ltd. Method and apparatus for generating elemental image in integral imaging

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
伍春洪 et al.: "一种基于Integral Imaging和多基线立体匹配算法的深度测量方法" (A depth measurement method based on Integral Imaging and a multi-baseline stereo matching algorithm), 《电子学报》 (Acta Electronica Sinica) *
徐晶: "基于微透镜阵列的集成成像和光场成像研究" (Research on integral imaging and light field imaging based on microlens arrays), 《万方学位论文数据库》 (Wanfang Dissertation Database) *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050662B (en) * 2014-05-30 2017-04-12 清华大学深圳研究生院 Method for directly obtaining depth image through light field camera one-time imaging
CN104101331A (en) * 2014-07-24 2014-10-15 合肥工业大学 Method used for measuring pose of non-cooperative target based on complete light field camera
CN104101331B (en) * 2014-07-24 2016-03-09 合肥工业大学 Based on the noncooperative target pose measurement of all-optical field camera
WO2016106694A1 (en) * 2014-12-31 2016-07-07 SZ DJI Technology Co., Ltd. System and method for adjusting a baseline of an imaging system with microlens array
US10582188B2 (en) 2014-12-31 2020-03-03 SZ DJI Technology Co., Ltd. System and method for adjusting a baseline of an imaging system with microlens array
CN105023249B (en) * 2015-06-26 2017-11-17 清华大学深圳研究生院 Bloom image repair method and device based on light field
CN105023249A (en) * 2015-06-26 2015-11-04 清华大学深圳研究生院 Highlight image restoration method and device based on optical field
CN106500629A (en) * 2016-11-29 2017-03-15 深圳大学 A kind of microscopic three-dimensional measurement apparatus and system
CN106500629B (en) * 2016-11-29 2022-09-27 深圳大学 Microscopic three-dimensional measuring device and system
CN110462686B (en) * 2017-02-06 2023-08-11 弗托斯传感与算法公司 Apparatus and method for obtaining depth information from a scene
CN110462686A (en) * 2017-02-06 2019-11-15 弗托斯传感与算法公司 For obtaining the device and method of depth information from scene
CN106908016A (en) * 2017-03-06 2017-06-30 中国科学院光电技术研究所 Concave cavity mirror curvature radius measuring method based on light field camera
CN107135388A (en) * 2017-05-27 2017-09-05 东南大学 A kind of depth extraction method of light field image
CN107330930A (en) * 2017-06-27 2017-11-07 晋江市潮波光电科技有限公司 Depth of 3 D picture information extracting method
CN107330930B (en) * 2017-06-27 2020-11-03 晋江市潮波光电科技有限公司 Three-dimensional image depth information extraction method
WO2019061819A1 (en) * 2017-09-27 2019-04-04 深圳市绎立锐光科技开发有限公司 Endoscope system and light source apparatus
CN109549614A (en) * 2017-09-27 2019-04-02 深圳市绎立锐光科技开发有限公司 Endoscopic system and light supply apparatus
CN110119829A (en) * 2018-02-07 2019-08-13 长沙行深智能科技有限公司 The distribution method based on binocular measurement article volume identification space for spatially-variable cabinet
CN110119829B (en) * 2018-02-07 2023-05-16 长沙行深智能科技有限公司 Method for distributing volume identification space of articles based on binocular measurement for space variable cabinet
CN108596965B (en) * 2018-03-16 2021-06-04 天津大学 Light field image depth estimation method
CN108596965A (en) * 2018-03-16 2018-09-28 天津大学 A kind of light field image depth estimation method
CN108846473B (en) * 2018-04-10 2022-03-01 杭州电子科技大学 Light field depth estimation method based on direction and scale self-adaptive convolutional neural network
CN108846473A (en) * 2018-04-10 2018-11-20 杭州电子科技大学 Light field depth estimation method based on direction and dimension self-adaption convolutional neural networks
CN111551920A (en) * 2020-04-16 2020-08-18 重庆大学 Three-dimensional target real-time measurement system and method based on target detection and binocular matching
CN113587895A (en) * 2021-07-30 2021-11-02 杭州三坛医疗科技有限公司 Binocular distance measuring method and device
CN113587895B (en) * 2021-07-30 2023-06-30 杭州三坛医疗科技有限公司 Binocular distance measuring method and device
CN114643925A (en) * 2022-05-19 2022-06-21 江阴市洪腾机械有限公司 Automobile awning structure driving platform with awning
CN114643925B (en) * 2022-05-19 2022-07-29 江阴市洪腾机械有限公司 Automobile awning structure driving platform with awning

Similar Documents

Publication Publication Date Title
CN103793911A (en) Scene depth obtaining method based on integration image technology
CN110036410B (en) Apparatus and method for obtaining distance information from view
CN110462686B (en) Apparatus and method for obtaining depth information from a scene
US20190114796A1 (en) Distance estimation method based on handheld light field camera
EP3144880B1 (en) A method and an apparatus for generating data representative of a light field
JP5014979B2 (en) 3D information acquisition and display system for personal electronic devices
Hahne et al. Baseline and triangulation geometry in a standard plenoptic camera
CN107635129B (en) Three-dimensional trinocular camera device and depth fusion method
CN103279982B (en) The speckle three-dimensional rebuilding method of the quick high depth resolution of robust
US10715711B2 (en) Adaptive three-dimensional imaging system and methods and uses thereof
CN102494609A (en) Three-dimensional photographing process based on laser probe array and device utilizing same
CN105282443A (en) Method for imaging full-field-depth panoramic image
CN104050662A (en) Method for directly obtaining depth image through light field camera one-time imaging
US20210377432A1 (en) Information processing apparatus, information processing method, program, and interchangeable lens
CN102997891A (en) Device and method for measuring scene depth
CN105744138A (en) Quick focusing method and electronic equipment
CN103365063A (en) Three-dimensional image shooting method and device
WO2020024079A1 (en) Image recognition system
CN105222717A (en) A kind of subject matter length measurement method and device
CN114359406A (en) Calibration of auto-focusing binocular camera, 3D vision and depth point cloud calculation method
KR20180054737A (en) Apparatus and method for generating data representing a pixel beam
Fachada et al. A calibration method for subaperture views of plenoptic 2.0 camera arrays
KR20180098565A (en) Method and apparatus for generating data representing a pixel beam
CN103873850A (en) Multi-view-point image reconstruction method and device based on integration imaging
Li et al. Metric three-dimensional reconstruction model from a light field and its calibration

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140514

RJ01 Rejection of invention patent application after publication