WO2020244273A1 - Dual-camera three-dimensional stereo imaging system and processing method (双摄像机三维立体成像***和处理方法) - Google Patents

Dual-camera three-dimensional stereo imaging system and processing method

Info

Publication number
WO2020244273A1
WO2020244273A1 PCT/CN2020/079099
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
lens
camera lens
resolution
Prior art date
Application number
PCT/CN2020/079099
Other languages
English (en)
French (fr)
Inventor
李应樵
陈增源
Original Assignee
万维科研有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 万维科研有限公司
Publication of WO2020244273A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/243Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance

Definitions

  • the invention belongs to the field of stereo imaging, and particularly relates to a dual-camera three-dimensional imaging system and processing method based on light field technology.
  • the system and software attached to the camera will combine the two two-dimensional images into a three-dimensional image, or synthesize two segments of two-dimensional video into a three-dimensional video.
  • however, these two solutions may suffer degraded three-dimensional image and video quality caused by unsynchronized two-dimensional images or videos, or by external factors such as ambient lighting conditions.
  • More advanced imaging equipment such as the light field camera, also known as the plenoptic camera, uses a microlens array to capture the light field image of a scene in a single shot; the depth information of the scene can be extracted computationally to create a depth map and convert the two-dimensional image into a three-dimensional image.
  • the main disadvantages of this kind of light field camera are that the image resolution drops significantly, the parallax angle is small, and it is not well suited to shooting video.
  • the latest design adds a reflection unit to capture multi-angle images of the target object; because the parallax angle is larger, it can produce a clearer depth map and three-dimensional image after processing and is also suitable for shooting video, but this attempt still fails to solve the problem of the resolution drop.
  • the purpose of the present invention is to provide a dual-camera three-dimensional imaging system and processing method that improves the resolution of three-dimensional video.
  • the imaging system has a wide range of applications.
  • besides accurate depth information, the present invention can also obtain high-quality video for three-dimensional image analysis.
  • the present invention provides a dual-camera three-dimensional imaging system, characterized in that it comprises: a light field imaging part for obtaining a first image and a high-resolution imaging part for obtaining a second image; the light field imaging part includes a first imaging part, a first camera lens, and a second camera or camera lens.
  • the first camera lens and the second camera or camera lens are located at the rear and front of the lens part respectively, and an entrance pupil plane and matching device is placed between the two.
  • the entrance pupil plane and matching device can adapt to the different focal lengths of the second camera or camera lens, and an internal reflection unit is formed between the first camera lens and the entrance pupil plane and matching device.
  • the captured first image is decomposed and refracted into a plurality of secondary images with different angular offsets.
  • the high-resolution imaging part further includes a second imaging part and a third camera lens, and at least one central axis adjustment device capable of adjusting the dual lens formed by the first camera lens and the second camera or camera lens and the single lens of the third camera lens; the central axis adjustment device keeps the axes of the dual lens and the single lens parallel.
  • the light field imaging part and the high-resolution imaging part are configured so that the third camera lens obtains a second image vertically aligned with the front view among the plurality of secondary images, and the plurality of secondary images and the second image are output simultaneously.
  • the light field imaging part and the high-resolution imaging part are as close together as possible, and their centers lie on the same vertical plane.
  • the angular offset range of the plurality of secondary images with different angular offsets is 10-20 degrees.
  • the angular offset of the front view among the plurality of secondary images is 0 degrees.
  • the first imaging part further includes a first image sensor and a fly-eye lens that captures a first image; the fly-eye lens transmits the captured first image to the first image sensor; and the second imaging part further includes a second image sensor; the second image obtained by the third camera lens is transmitted to the second image sensor.
  • the fly-eye lens is an array of multiple microlenses, and the radius, thickness, and array pitch of each microlens are related to the size of the first image sensor.
  • the aperture and focal length of the first camera lens and the second camera or camera lens are adjustable, and the second camera or camera lens and the third camera lens are replaceable lenses, and The aperture of the second camera or camera lens is larger than the size of the internal reflection unit.
  • the entrance pupil plane and matching device is a pupil lens.
  • the diameter of the pupil lens is larger than the diameter of the internal reflection unit.
  • the incident light of the light field image is allowed to be refracted within the internal reflection unit.
  • each of the secondary images has subtle differences in the scene, and the size of the internal reflection unit and the focal length of each secondary image are calculated based on the following equations (1) and (2):
  • FOV is the field of view of the second camera or camera lens
  • n is the refractive index of the internal reflection unit
  • r is the number of internal reflections
  • Z is the size of the internal reflection unit
  • f_lens is the focal length of the second camera or camera lens
  • f_sub is the focal length of the secondary image.
  • the present invention also provides a dual-camera three-dimensional imaging processing method, the steps of which are: obtain the original depth map data of the first image through the light field imaging part; correct the original depth map data; use an edge-directed or directional rendering method to obtain a high-resolution depth map produced by interpolation; at the same time, obtain a second image with the high-resolution imaging part, and use a data model, combined with the second image as reference data, to correct the original depth map data of the first image until the best interpolated high-resolution depth map is obtained.
  • the three-dimensional imaging system and processing method provided by the present invention can deliver higher-resolution two-dimensional and three-dimensional video while, compared with a light field camera using a high-resolution image sensor, the cost increase is very limited; in addition, because the system does not affect the function of the light field camera part, the information obtained by the light field camera itself can still be used to calculate object depth and build a depth map.
  • Figure 1 is a perspective view of the three-dimensional imaging system of the present invention.
  • Figure 2 is a structural diagram of the three-dimensional imaging system of the present invention.
  • FIG. 3 is a schematic diagram of the first image 120 obtained by the three-dimensional imaging system of the present invention.
  • FIG. 4 is a schematic diagram after the three-dimensional imaging system according to the present invention performs normalization processing on the obtained first image 120.
  • FIG. 5 is a flowchart of processing the second image 130 by the three-dimensional imaging system of the present invention.
  • Fig. 6 is a flow chart of obtaining a target image by the three-dimensional imaging system of the present invention.
  • Figure 1 is a perspective view of the three-dimensional imaging system of the present invention.
  • the three-dimensional imaging system of the present invention is composed of a light field imaging part 100 that obtains a first image 120 (not shown in FIG. 1) and a high-resolution imaging part 140 that obtains a second image 130 (not shown in FIG. 1).
  • the light field imaging part 100 can adopt the light field camera of Chinese patent application 201711080588.6, which includes a first imaging part 110, a first camera lens 101, and a second camera or camera lens 103, where the first camera lens 101 is the rear camera lens, with adjustable aperture and focal length.
  • the second camera or camera lens 103 is the front camera or camera lens, and the front and rear lenses can adjust the focal length of the camera.
  • the entrance pupil plane and the matching device 109 may be a pupil lens, and between the pupil lens 109 and the first camera lens 101 is an internal reflection unit 102.
  • the high-resolution imaging part 140 and the light field imaging part 100 are integrated and fixed together.
  • the high-resolution imaging part 140 includes a second imaging part 116.
  • through the central axis adjustment device 118, the lens central axis 112a (see FIG. 2) of the third camera lens 117 of the high-resolution imaging part 140 and the lens central axis 112b (see FIG. 2) of the first camera lens 101 and the second camera lens 103 in the light field imaging part 100 are kept parallel.
  • FIG. 2 is a structural diagram of the dual-camera three-dimensional imaging system of the present invention.
  • the light field imaging part 100 of the three-dimensional imaging system includes a first imaging part 110 and a lens part 111, where the first imaging part 110 includes a first image sensor 104 and a fly-eye lens 105; the first image sensor 104 is an image sensor of relatively high imaging quality, and the fly-eye lens 105 is formed by a combination of a series of small lenses that capture information about an image from different angles, such as light field image information, so that three-dimensional information can be extracted to identify specific objects.
  • the fly-eye lens 105 is composed of a micro lens array and is designed to not only capture a light field image, but also generate a depth map.
  • the fly-eye lens 105 serves the first image sensor 104, so it is related to the parameters of the first image sensor 104.
  • each microlens of the fly-eye lens 105 has a radius of 0.5 millimeters and a thickness of 0.9 micrometers, and the array pitch of the microlenses is 60 micrometers.
  • relative to the first image sensor 104, the size of the fly-eye lens is scalable. In one embodiment, an Advanced Photo System type-C (APS-C) image sensor measuring 25 mm × 17 mm is used; in another embodiment, a full-frame image sensor measuring 37 mm × 25 mm is used.
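For a sense of scale, the microlens counts implied by the stated 60-micrometre pitch and the two sensor formats can be estimated with a short calculation (a rough sketch; the actual array layout is set by the lens design, and the function name here is illustrative):

```python
# Estimate how many microlenses tile the fly-eye lens for the two sensor
# formats mentioned (APS-C 25 mm x 17 mm, full-frame 37 mm x 25 mm),
# assuming the stated 60-micrometre array pitch.

PITCH_MM = 0.060  # 60 micrometres per microlens

def microlens_grid(width_mm, height_mm, pitch_mm=PITCH_MM):
    """Return (columns, rows) of whole microlenses covering the sensor."""
    return int(width_mm // pitch_mm), int(height_mm // pitch_mm)

print(microlens_grid(25.0, 17.0))  # APS-C: (416, 283)
print(microlens_grid(37.0, 25.0))  # full frame: (616, 416)
```

Either way the array runs to hundreds of microlenses per side, which is why the per-view resolution drop discussed later matters.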
  • the lens part 111 is detachably connected with the first imaging part 110.
  • the pupil lens 109 may be a single lens, which has a condensing effect and can compress the information received by the second camera or camera lens 103.
  • a first imaging process is performed at the second camera or camera lens 103; as the second camera or camera lens 103 is exchanged or replaced, the imaging angle differs.
  • the first camera lens 101 is a short-focus lens or a macro lens, which is fixed on a housing (not shown in FIG. 2).
  • the design of the first camera lens 101 determines the size of the imaging system of the present invention.
  • a secondary imaging process is performed at the first camera lens 101.
  • the entrance pupil plane and the matching device 109 are designed to correct light rays.
  • the internal reflection unit 102 decomposes and reflects the image to be captured into a multi-angle image of independent secondary images with different angular offsets.
  • the internal reflection unit 102 is designed to provide multiple virtual images at different viewing angles.
  • the size and ratio of the internal reflection unit 102 are the determining factors of the number of reflections and the reflection image ratio, and images of different angles are produced.
  • the secondary image produced by each reflection has a subtle difference in the scene, and the target image has a slight offset.
  • the size of the internal reflection unit 102 and the focal length of each secondary image can be calculated based on the following equations (1) and (2):
  • FOV is the field of view of the second camera or camera lens
  • n is the refractive index of the internal reflection unit
  • r is the number of internal reflections
  • X, Y, Z are the dimensions of the internal reflection unit, which are width, height, and length respectively;
  • f_lens is the focal length of the second camera or camera lens
  • f_sub is the focal length of the secondary image.
  • the size of the internal reflection unit 102 can match the size of the first image sensor 104; in one embodiment it can be 24 mm (width) x 36 mm (height) x 95 mm (length), that is to say, the ratio of the unit is about 2:3:8.
  • the pupil lens 109 is used to match the size of the secondary image to the size of the internal reflection unit 102 so that reflection occurs correctly inside the internal reflection unit 102. To achieve this, the diameter of the pupil lens 109 should be larger than the internal reflection unit 102. In one of the embodiments, the pupil lens 109 has a diameter of approximately 50 mm and a focal length of 50 mm. As long as the aperture of the second camera or camera lens 103 is larger than the size of the internal reflection unit 102, the second camera or camera lens 103 can be replaced by any camera or camera lens.
  • the high-resolution imaging section 140 includes a second imaging section 116, a second image sensor 119, and a third camera lens 117.
  • the adjustment device 118 for aligning the central axes of the dual lens formed by the first camera lens 101 and the second camera or camera lens 103 and the single lens of the third camera lens 117 is located outside the high-resolution imaging part 140 and is independent of the light field imaging part 100 and the high-resolution imaging part 140; adjusting the adjustment device 118 makes the axis 112b of the first camera lens 101 and the second camera or camera lens 103 parallel to the axis 112a of the third camera lens 117.
  • the second image sensor 119 can be a sensor with the same or different specifications as the first image sensor 104, but the resolution of the second image sensor 119 should be at least 1/9 of the resolution of the first image sensor 104 to achieve the goal of improving the light field video resolution.
  • FIG. 3 is a schematic diagram of the first image 120 obtained by the light field imaging part 100 of the three-dimensional imaging system of the present invention.
  • the internal reflection unit 102 in the light field imaging part 100 decomposes the captured first image 120, that is, the light field image or video frame, and reflects it into multiple secondary images or video frames with different angular offsets, for example 9 secondary images or video frames, which are acquired through a fly-eye lens by the first image sensor 104 of the first imaging part 110.
  • the secondary image 1 in the middle of the 9 secondary images or video frames is the front view of the scene, and the remaining 8 secondary images 2-9 or video frames are secondary images or video frames offset by specific angles.
  • on average, each image or video frame has only 1/9 or less of the resolution of the first image sensor 104.
  • before the scene depth map is generated, the 9 secondary images or video frames are segmented and each secondary image is preprocessed.
  • FIG. 4 is a schematic diagram of the first image 120 obtained by the light field imaging part 100 of the three-dimensional imaging system according to the present invention after normalization processing. Normalize each secondary image through the following equation (3):
  • each secondary image is an independent original compound eye image.
  • preprocessing uses image-processing techniques including but not limited to image noise removal; decoding with synthetic aperture technology then yields the light field information in the original compound-eye images, and digital refocusing technology can be used to generate in-focus secondary images.
  • the synthetic aperture image can be digitally refocused using the following principles:
  • I′(x′,y′)=∫∫L(u,v,kx′+(1-k)u,ky′+(1-k)v)dudv (7)
  • I and I′ denote the coordinate systems of the primary and secondary imaging surfaces;
  • L and L′ denote the energy of the primary and secondary imaging surfaces.
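In discrete form, the refocusing integral of equation (7) amounts to a shift-and-add over the sub-views: each secondary image is translated in proportion to its view offset and the results are averaged. A minimal sketch, assuming integer pixel shifts and `np.roll` wrap-around at the borders for simplicity (the function and the offset parametrization are illustrative, not the patent's implementation):

```python
import numpy as np

def refocus(subviews, offsets, k):
    """Discrete shift-and-add version of the refocusing integral (7).

    subviews: 2D arrays (the secondary images);
    offsets:  (du, dv) view positions relative to the centre view;
    k:        refocus factor from equation (7).
    Each view is shifted by (1-k)/k times its offset (rounded to whole
    pixels; np.roll wraps at the borders) and the shifted views are
    averaged to synthesise the refocused image.
    """
    acc = np.zeros_like(subviews[0], dtype=float)
    for img, (du, dv) in zip(subviews, offsets):
        sx = int(round((1 - k) / k * du))
        sy = int(round((1 - k) / k * dv))
        acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return acc / len(subviews)
```

With k = 1 the views are not shifted at all, which corresponds to leaving the focal plane where it was captured.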
  • the semi-global matching (SGM) algorithm is one of the commonly used binocular stereo vision matching algorithms, providing good disparity results at a practical computation speed.
  • D denotes the disparity map;
  • p and q are pixels in the image;
  • C(p, D_p) denotes the cost of pixel p when its disparity value is D_p;
  • N_p denotes the pixels adjacent to pixel p, usually 8;
  • P1 and P2 are penalty coefficients: P1 applies when the disparity difference between pixel p and a neighboring pixel equals 1, and P2 applies when that difference is greater than 1;
  • T[.] is a function that returns 1 if its argument is true and 0 otherwise.
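The cost function these terms define can be evaluated directly for a candidate disparity map; a brute-force sketch (illustrative only — it scores a given map rather than performing SGM's path aggregation, and the default penalty values are arbitrary):

```python
import numpy as np

def sgm_energy(D, C, P1=10, P2=120):
    """Evaluate the SGM-style cost for a disparity map D.

    D: 2D integer array of disparities; C: 3D array C[y, x, d] of
    per-pixel matching costs; P1/P2: smoothness penalties.
    The neighbourhood is the 8 surrounding pixels, as in the text:
    a neighbour differing by exactly 1 disparity adds P1, a neighbour
    differing by more than 1 adds P2.
    """
    h, w = D.shape
    total = 0.0
    for y in range(h):
        for x in range(w):
            total += C[y, x, D[y, x]]          # data term C(p, D_p)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    qy, qx = y + dy, x + dx
                    if 0 <= qy < h and 0 <= qx < w:
                        diff = abs(int(D[y, x]) - int(D[qy, qx]))
                        if diff == 1:
                            total += P1        # small disparity step
                        elif diff > 1:
                            total += P2        # large disparity jump
    return total
```

Minimising this energy over candidate disparity maps is what the 8-direction cost aggregation described below approximates efficiently.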
  • the final conversion between the disparity value and the depth value can use the following formula:
  • d_p denotes the depth value of a pixel;
  • f is the normalized focal length;
  • b is the baseline distance between the two secondary images;
  • D_p denotes the disparity value of the current pixel.
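Under these definitions the conversion is the standard stereo triangulation relation, depth = f·b / disparity; a one-line sketch:

```python
def disparity_to_depth(f, b, disparity):
    """Disparity-to-depth conversion consistent with the variables above:
    d_p = f (normalized focal length) x b (baseline) / D_p (disparity)."""
    return f * b / disparity

# Example: focal length 50, baseline 2, disparity 4 -> depth 25.
print(disparity_to_depth(50.0, 2.0, 4.0))
```

Note the inverse relationship: small disparities map to large depths, so disparity quantization errors hurt most for distant scene points.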
  • the second image 130 obtained by the second imaging part 116 is a 2D image or video frame that fully captures the information of the photographed object, so the resolution of the second image is not reduced; for the same reason, the second image 130 does not require normalization or refocusing.
  • each secondary image has only 1/9 or less of the sensor resolution.
  • for example, the resolution of each secondary image is 1280x720 pixels; if the scene depth map and light field video were generated directly, their resolution would likewise be limited to 1280x720 pixels. Therefore, first use the secondary images 1-9 to build a 1280x720 scene depth map, then, with reference to the 3840x2160-pixel high-resolution second image 130 obtained by the second image sensor 119, use an edge-directed interpolation algorithm to raise the resolution of the depth map to 3840x2160.
  • the formula used for edge-directed interpolation is as follows:
  • m and n are the low-resolution and high-resolution image grids before and after interpolation
  • y[n] denotes the depth map generated after interpolation
  • S and R denote the data model of the second image 130 and the operator of the edge-directed rendering step, respectively
  • λ is the gain of the correction process
  • k is the iteration index.
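The iterative correction these symbols describe can be sketched as a simple fixed-gain update. The exact operators S and R come from equations (10)-(12), which are not reproduced in this text, so this sketch collapses them into a residual against a reference map derived from the second image — an assumption made purely for illustration:

```python
import numpy as np

def refine_depth(y0, reference, gain=0.5, iters=10):
    """Fixed-gain iterative correction in the spirit of the text.

    y0:        interpolated high-resolution depth map (initial y[n]);
    reference: data derived from the high-resolution second image,
               assumed here to already be in depth units (a stand-in
               for the S/R operators of equations (10)-(12));
    gain:      the correction gain (lambda);
    iters:     number of iterations (k).
    Each pass nudges the estimate towards the reference by gain times
    the residual, converging geometrically.
    """
    y = np.asarray(y0, dtype=float).copy()
    for _ in range(iters):
        y += gain * (reference - y)
    return y
```

With gain 0.5 the residual halves each iteration, so a handful of iterations already brings the estimate close to the reference.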
  • the accuracy of the correction step of the interpolation calculation is sufficient for generating a 3D image.
  • the high-resolution second image 130, that is, a 2D image or video frame, is combined with the resolution-enhanced depth map to generate a high-resolution 2D+Z 3D image or video format output to the display, which can greatly raise the resolution of the light field image or light field video to 3840x2160 pixels.
  • Fig. 5 is a flowchart of the three-dimensional imaging system of the present invention processing a target image.
  • in step 501, the original depth map data of the first image 120 are obtained through the first imaging part 100; in step 502, the original depth map data are corrected; in step 503, an edge-directed or directional rendering method is applied, and in step 504 the high-resolution depth map produced by interpolation is obtained; in step 505, the data model of the second image is applied to the second image obtained in step 506, which serves as reference data, and the original depth map data of the first image 120 are corrected until the best interpolated high-resolution depth map is obtained.
  • Fig. 6 is a flow chart of obtaining a target image by the three-dimensional imaging system of the present invention.
  • in step 601, the first image sensor 104 acquires a first image 120 containing 9 secondary images or video frames; in step 602, the 9 secondary images or video frames are segmented and each secondary image is normalized.
  • in step 603, image noise removal is applied to each secondary image; in step 604, synthetic aperture technology is used to decode the light field information acquired from the 9 secondary images, and digital refocusing technology is then used to generate in-focus images.
  • in step 605, the 9 in-focus secondary images are used to build a lower-resolution scene depth map, combined with the second image, the high-resolution 2D image or video frame obtained by the second image sensor in step 608; in step 606, with reference to the second image, an edge-directed or edge-oriented interpolation algorithm raises the resolution of the depth map; in step 607, the resolution-enhanced depth map and the second image are combined to generate a high-resolution 3D image or video frame.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The present invention discloses a dual-camera three-dimensional stereo imaging system and processing method. The light field imaging part includes a first imaging part, a first camera lens, and a second camera or camera lens; the first camera lens and the second camera or camera lens are located at the rear and front of the lens part respectively, with an entrance pupil plane and matching device placed between them; an internal reflection unit is formed between the first camera lens and the entrance pupil plane and matching device. The high-resolution imaging part further includes a second imaging part and a third camera lens. The light field imaging part and the high-resolution imaging part are configured so that the third camera lens obtains a second image vertically aligned with the front view among the plurality of secondary images, and the plurality of secondary images and the second image are output simultaneously. Besides acquiring accurate depth information, the present invention can also acquire high-resolution video for analyzing three-dimensional images.

Description

Dual-camera three-dimensional stereo imaging system and processing method — Technical Field
The present invention belongs to the field of stereo imaging, and particularly relates to a dual-camera three-dimensional stereo imaging system and processing method based on light field technology.
Background Art
In the prior art there are many camera designs for shooting three-dimensional images and video. The most common solution arranges two camera modules of identical specification linearly at a fixed separation, for example about 60 to 65 mm, imitating the stereoscopic vision of the human eyes. The image sensors of the two camera modules record their respective two-dimensional images or videos; by processing the two 2D images or two 2D video segments with software, a depth map can be built and then converted into a 3D image or 3D video. Another solution is to shoot directly with a stereo camera. Inside the body of a stereo camera, two image sensors record 2D images or video from the two lens groups of the camera lens, after which the system and software attached to the camera combine the two 2D images into a 3D image, or synthesize the two 2D video segments into 3D video. However, both solutions can suffer degraded 3D image and video quality caused by unsynchronized 2D images or video, or by external factors such as ambient lighting conditions.
More advanced imaging equipment such as the light field camera, also known as the plenoptic camera, uses a microlens array to capture the light field image of a scene in a single shot; the depth information of the scene can then be extracted computationally to build a depth map and convert the 2D image into a 3D image. The main drawbacks of this kind of light field camera are that image resolution drops significantly, the parallax angle is small, and it is not well suited to shooting video. The latest design adds a reflection unit to capture multi-angle images of the target object; because the parallax angle is larger, it can produce a clearer depth map and 3D stereo image after processing and is also suitable for shooting video, but this attempt still fails to solve the problem of the resolution drop.
Summary of the Invention
The purpose of the present invention is to provide a dual-camera three-dimensional stereo imaging system and processing method that improves the resolution of 3D video. The imaging system has wide applications in many fields, such as medical care, biotechnology research, industrial equipment manufacturing, and quality inspection of semiconductor products. Besides acquiring accurate depth information, the present invention can also acquire high-quality video for 3D image analysis.
The present invention provides a dual-camera three-dimensional stereo imaging system, characterized by comprising: a light field imaging part that obtains a first image and a high-resolution imaging part that obtains a second image; wherein the light field imaging part includes a first imaging part, a first camera lens, and a second camera or camera lens; the first camera lens and the second camera or camera lens are located at the rear and front of the lens part respectively, with an entrance pupil plane and matching device placed between them; the entrance pupil plane and matching device can adapt to the different focal lengths of the second camera or camera lens; an internal reflection unit is formed between the first camera lens and the entrance pupil plane and matching device, the internal reflection unit being used to decompose and refract the captured first image into a plurality of secondary images with different angular offsets; the high-resolution imaging part further includes a second imaging part and a third camera lens, and at least one central axis adjustment device capable of adjusting the dual lens formed by the first camera lens and the second camera or camera lens and the single lens of the third camera lens, the central axis adjustment device keeping the axes of the dual lens and the single lens parallel; the light field imaging part and the high-resolution imaging part are configured so that the third camera lens obtains a second image vertically aligned with the front view among the plurality of secondary images, and the plurality of secondary images and the second image are output simultaneously.
In one aspect of the present invention, the light field imaging part and the high-resolution imaging part are as close together as possible, and their centers lie on the same vertical plane.
In one aspect of the present invention, the angular offsets of the plurality of secondary images with different angular offsets range from 10 to 20 degrees.
In one aspect of the present invention, the angular offset of the front view among the plurality of secondary images is 0 degrees. The first imaging part further includes a first image sensor and a fly-eye lens that captures the first image; the fly-eye lens transmits the captured first image to the first image sensor; and the second imaging part further includes a second image sensor; the second image obtained by the third camera lens is transmitted to the second image sensor.
In one aspect of the present invention, the fly-eye lens is an array of multiple microlenses, and the radius, thickness, and array pitch of each microlens are related to the size of the first image sensor.
In one aspect of the present invention, the aperture and focal length of the first camera lens and of the second camera or camera lens are adjustable, the second camera or camera lens and the third camera lens are replaceable lenses, and the aperture of the second camera or camera lens is larger than the size of the internal reflection unit.
In one aspect of the present invention, the entrance pupil plane and matching device is a pupil lens whose diameter is larger than the diameter of the internal reflection unit and which allows the incident light of the light field image to be refracted in the internal reflection unit.
In one aspect of the present invention, each secondary image shows the scene with subtle differences; the size of the internal reflection unit and the focal length of each secondary image are calculated based on the following equations (1) and (2):
Figure PCTCN2020079099-appb-000001
Figure PCTCN2020079099-appb-000002
where FOV is the field of view of the second camera or camera lens;
n is the refractive index of the internal reflection unit;
r is the number of internal reflections;
Z is the size of the internal reflection unit;
f_lens is the focal length of the second camera or camera lens;
f_sub is the focal length of the secondary image.
The present invention also provides a dual-camera three-dimensional imaging processing method, the steps of which are: obtain the original depth map data of the first image through the light field imaging part; correct the original depth map data; use an edge-directed or directional rendering method to obtain a high-resolution depth map produced by interpolation; at the same time, obtain a second image with the high-resolution imaging part, and use a data model, combined with the second image as reference data, to correct the original depth map data of the first image until the best interpolated high-resolution depth map is obtained.
The three-dimensional stereo imaging system and processing method provided by the present invention can deliver higher-resolution 2D and 3D video while, compared with a light field camera using a high-resolution image sensor, the cost increase is very limited; moreover, because the system of the present invention does not affect the function of the light field camera part, the information obtained by the light field camera itself can still be used to calculate object depth and build a depth map.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly introduced below. Obviously, the drawings described below are only some examples of the present invention; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without inventive effort.
Figure 1 is a perspective view of the three-dimensional imaging system of the present invention.
Figure 2 is a structural diagram of the three-dimensional imaging system of the present invention.
Figure 3 is a schematic diagram of the first image 120 obtained by the three-dimensional imaging system of the present invention.
Figure 4 is a schematic diagram of the first image 120 obtained by the three-dimensional imaging system of the present invention after normalization processing.
Figure 5 is a flowchart of the three-dimensional imaging system of the present invention processing the second image 130.
Figure 6 is a flowchart of obtaining a target image by the three-dimensional imaging system of the present invention.
Detailed Description of the Embodiments
Specific embodiments of the present invention are now described with reference to the corresponding drawings. However, the present invention can be implemented in many different forms and should not be construed as limited to the embodiments presented here. These embodiments are provided only so that the present invention can be thorough and complete, and can fully convey the scope of the present invention to those skilled in the art. The wording used in the detailed description of the embodiments illustrated in the drawings should not limit the present invention.
Figure 1 is a perspective view of the three-dimensional imaging system of the present invention. The three-dimensional imaging system of the present invention consists of a light field imaging part 100 that obtains a first image 120 (not shown in Figure 1) and a high-resolution imaging part 140 that obtains a second image 130 (not shown in Figure 1). The light field imaging part 100 can adopt the light field camera of Chinese patent application 201711080588.6, which includes a first imaging part 110, a first camera lens 101, and a second camera or camera lens 103, where the first camera lens 101 is the rear camera lens, with adjustable aperture and focal length. The second camera or camera lens 103 is the front camera or camera lens; the front and rear lenses can adjust the focal length of the camera. Between the first camera lens 101 and the second camera or camera lens 103 is the entrance pupil plane and matching device 109, which may be a pupil lens; between the pupil lens 109 and the first camera lens 101 is the internal reflection unit 102. The high-resolution imaging part 140 and the light field imaging part 100 are integrated and fixed together. The high-resolution imaging part 140 includes a second imaging part 116; through the central axis adjustment device 118, the lens central axis 112a (see Figure 2) of the third camera lens 117 of the high-resolution imaging part 140 and the lens central axis 112b (see Figure 2) of the first camera lens 101 and the second camera lens 103 in the light field imaging part 100 are kept parallel.
Figure 2 is a structural diagram of the dual-camera three-dimensional imaging system of the present invention. The light field imaging part 100 of the three-dimensional imaging system includes a first imaging part 110 and a lens part 111, where the first imaging part 110 includes a first image sensor 104 and a fly-eye lens 105. The first image sensor 104 is an image sensor of relatively high imaging quality; the fly-eye lens 105 is formed by a combination of a series of small lenses that capture information about an image from different angles, such as light field image information, so that three-dimensional information can be extracted to identify specific objects. The fly-eye lens 105 consists of a microlens array and is designed not only to capture a light field image but also to generate a depth map. Moreover, the fly-eye lens 105 serves the first image sensor 104, so its parameters are related to those of the first image sensor 104. For example, each microlens of the fly-eye lens 105 has a radius of 0.5 mm and a thickness of 0.9 micrometers, and the array pitch of the microlenses is 60 micrometers. Relative to the first image sensor 104, the size of the fly-eye lens is scalable. In one embodiment, an Advanced Photo System type-C (APS-C) image sensor measuring 25 mm x 17 mm is used; in another embodiment, a full-frame image sensor measuring 37 mm x 25 mm is used.
The lens part 111 is detachably connected to the first imaging part 110. The pupil lens 109 may be a single lens element with a condensing effect, able to compress the information received by the second camera or camera lens 103. A first imaging process takes place at the second camera or camera lens 103; as the second camera or camera lens 103 is exchanged or replaced, the imaging angle differs. The first camera lens 101 is a short-focus lens or a macro lens fixed on a housing (not shown in Figure 2); the design of the first camera lens 101 determines the size of the imaging system of the present invention. A secondary imaging process takes place at the first camera lens 101. The entrance pupil plane and matching device 109 is designed to correct light rays. Between the entrance pupil plane and matching device 109 and the first lens 101 is the internal reflection unit 102; this internal reflection unit 102 decomposes and reflects the image to be captured into a multi-angle image of independent secondary images with different angular offsets. The internal reflection unit 102 is designed to provide multiple virtual images at different viewing angles. The size and proportions of the internal reflection unit 102 determine the number of reflections and the reflected-image ratio, producing images from different angles. The secondary image produced by each reflection shows the scene with subtle differences, and the target image is slightly offset. The size of the internal reflection unit 102 and the focal length of each secondary image can be calculated based on the following equations (1) and (2):
Figure PCTCN2020079099-appb-000003
Figure PCTCN2020079099-appb-000004
where FOV is the field of view of the second camera or camera lens;
n is the refractive index of the internal reflection unit;
r is the number of internal reflections;
X, Y, Z are the dimensions of the internal reflection unit: width, height, and length respectively;
f_lens is the focal length of the second camera or camera lens;
f_sub is the focal length of the secondary image.
The size of the internal reflection unit 102 can match the size of the first image sensor 104; in one embodiment it can be 24 mm (width) x 36 mm (height) x 95 mm (length), that is to say, the ratio of the unit is about 2:3:8. The pupil lens 109 is used to match the size of the secondary image to the size of the internal reflection unit 102 so that reflection occurs correctly inside the internal reflection unit 102. To achieve this, the diameter of the pupil lens 109 should be larger than the internal reflection unit 102. In one embodiment the pupil lens 109 has a diameter of about 50 mm and a focal length of 50 mm. As long as the aperture of the second camera or camera lens 103 is larger than the size of the internal reflection unit 102, the second camera or camera lens 103 is designed to be replaceable by any camera or camera lens.
The high-resolution imaging part 140 includes a second imaging part 116, a second image sensor 119, and a third camera lens 117. The adjustment device 118 for aligning the central axes of the dual lens formed by the first camera lens 101 and the second camera or camera lens 103 and the single lens of the third camera lens 117 is located outside the high-resolution imaging part 140 and is independent of the light field imaging part 100 and the high-resolution imaging part 140; adjusting the adjustment device 118 makes the axis 112b of the first camera lens 101 and the second camera or camera lens 103 parallel to the axis 112a of the third camera lens 117. The second image sensor 119 may be a sensor of the same or different specification as the first image sensor 104, but the resolution of the second image sensor 119 should be at least 1/9 of the resolution of the first image sensor 104 in order to achieve the purpose of improving the light field video resolution of the three-dimensional imaging system of the present invention.
Figure 3 is a schematic diagram of the first image 120 obtained by the light field imaging part 100 of the three-dimensional imaging system of the present invention. The internal reflection unit 102 in the light field imaging part 100 decomposes the captured first image 120, i.e., the light field image or video frame, and reflects it into multiple secondary images or video frames with different angular offsets, for example 9 secondary images or video frames, which are acquired through the fly-eye lens by the first image sensor 104 of the first imaging part 110. The secondary image ① in the middle of the 9 secondary images or video frames is the front view of the photographed scene; the remaining 8 secondary images ②-⑨ or video frames are secondary images or video frames offset by specific angles. On average, each image or video frame has only 1/9 or less of the resolution of the first image sensor 104. Before the scene depth map is generated, the 9 secondary images or video frames are segmented and each secondary image is preprocessed.
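The segmentation step described above — cutting the 3x3 mosaic of the first image into its 9 secondary views — can be sketched as follows (assuming the mosaic divides evenly into a 3x3 grid; names are illustrative):

```python
import numpy as np

def split_nine(first_image):
    """Split the first image (the 3x3 mosaic of secondary views) into
    9 sub-images, ordered row by row. The centre element (index 4) then
    corresponds to the front view of the scene."""
    h, w = first_image.shape[:2]
    sh, sw = h // 3, w // 3
    return [first_image[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw]
            for r in range(3) for c in range(3)]
```

Each returned sub-image carries 1/9 of the sensor's pixels, which is exactly the resolution penalty the second (high-resolution) camera is introduced to compensate for.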
Figure 4 is a schematic diagram of the first image 120 obtained by the light field imaging part 100 of the three-dimensional imaging system of the present invention after normalization processing. Each secondary image is normalized through the following equation (3):
Figure PCTCN2020079099-appb-000005
where I_n (n = 1, 2, …, 9) denotes the image before normalization; I′_n (n = 1, 2, …, 9) denotes the image after normalization; mirror(I_m, left, right, up, down) (m = 1, 2, 3, 4) denotes mirroring the image to the left, right, up, and down; rotate(I_k, π) (k = 6, …, 9) denotes image rotation.
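The per-view corrections of equation (3) — keep the front view, flip the mirrored views back, rotate the rotated views by π — can be sketched as a lookup of operations per view. Which operation applies to which view index is defined by the reflection geometry of Figure 4, so the assignment is passed in explicitly here rather than assumed:

```python
import numpy as np

FLIP_LR, FLIP_UD, ROT_PI, KEEP = "flip_lr", "flip_ud", "rot_pi", "keep"

def normalize_subviews(subs, ops):
    """Apply the per-view correction of equation (3): each secondary
    image is either kept (front view), mirrored back left/right or
    up/down, or rotated by pi, depending on which internal reflection
    produced it. `ops` lists the operation for each view in order."""
    table = {FLIP_LR: lambda s: np.flip(s, axis=1),  # undo L/R mirror
             FLIP_UD: lambda s: np.flip(s, axis=0),  # undo U/D mirror
             ROT_PI:  lambda s: np.rot90(s, 2),      # undo pi rotation
             KEEP:    lambda s: s}                   # front view
    return [table[op](s) for s, op in zip(subs, ops)]
```

Because mirroring and a π rotation are involutions, applying the same operation that the reflection introduced restores each view's original orientation.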
After normalization, the offset of the scene within each secondary image can be identified, and each secondary image is an independent original compound-eye image. The next step is preprocessing with image-processing techniques including but not limited to image noise removal, followed by decoding with synthetic aperture technology, which yields the light field information in the original compound-eye images; digital refocusing technology then produces in-focus secondary images. The synthetic aperture images can be digitally refocused using the following principles:
Figure PCTCN2020079099-appb-000006
Figure PCTCN2020079099-appb-000007
L′(u,v,x′,y′)=L(u,v,kx′+(1-k)u,ky′+(1-k)v)   (6)
I′(x′,y′)=∫∫L(u,v,kx′+(1-k)u,ky′+(1-k)v)dudv   (7)
where I and I′ denote the coordinate systems of the primary and secondary imaging surfaces;
L and L′ denote the energy of the primary and secondary imaging surfaces.
After obtaining the 9 in-focus secondary images, two secondary images can be selected, for example secondary images ② and ③, and a binocular stereo matching algorithm is used to compute a disparity map, from which a lower-resolution scene depth map is then built. The semi-global matching (SGM) algorithm is one of the commonly used binocular stereo matching algorithms, offering good disparity results at a practical computation speed. A cost function related to the disparity map is set up:
Figure PCTCN2020079099-appb-000008
where D denotes the disparity map;
p and q are pixels in the image;
C(p, D_p) denotes the cost of pixel p when its disparity value is D_p;
N_p denotes the pixels adjacent to pixel p, usually 8;
P1 and P2 are penalty coefficients: P1 applies when the disparity difference between pixel p and a neighboring pixel equals 1, and P2 applies when that difference is greater than 1;
T[.] is a function that returns 1 if its argument is true and 0 otherwise.
For each pixel of the image, compute the cost of each candidate disparity value and accumulate the costs along 8 directions; after accumulation, the disparity value with the lowest cost becomes the final disparity of that pixel. Computing this pixel by pixel yields the whole disparity map.
Finally, the conversion between disparity values and depth values can use the following formula:
Figure PCTCN2020079099-appb-000009
where d_p denotes the depth value of a pixel;
f is the normalized focal length;
b is the baseline distance between the two secondary images;
D_p denotes the disparity value of the current pixel.
The second image 130 obtained by the second imaging part 116 is a 2D image or video frame that fully captures the information of the photographed object, so the resolution of the second image is not reduced; for the same reason, the second image 130 does not require normalization, refocusing, or similar processing.
Take the case where the first image sensor 104 and the second image sensor 119 are both 4K image sensors, i.e., both are 3840x2160 pixels. Assuming the 9 secondary images or video frames above are the same size, on average each secondary image has only 1/9 or less of the sensor resolution, for example 1280x720 pixels per secondary image. If the scene depth map and light field video were generated directly, their resolution would likewise be limited to 1280x720 pixels. Therefore, first use the secondary images ①-⑨ to build a 1280x720 scene depth map; then, with reference to the 3840x2160-pixel high-resolution second image 130 obtained by the second image sensor 119, use an edge-directed interpolation algorithm to raise the resolution of the depth map to 3840x2160. The formulas used for edge-directed interpolation are as follows:
Figure PCTCN2020079099-appb-000010
Figure PCTCN2020079099-appb-000011
Figure PCTCN2020079099-appb-000012
where m and n are the low-resolution and high-resolution image grids before and after interpolation;
y[n] denotes the depth map produced after interpolation;
x[m] and
Figure PCTCN2020079099-appb-000013
denote the original depth map and the corrected map, respectively;
Figure PCTCN2020079099-appb-000014
denotes the reference data of the second image 130;
S and R denote the data model of the second image 130 and the operator of the edge-directed rendering step, respectively;
λ is the gain of the correction process;
k is the iteration index.
Because the high-resolution second image 130 serves as the basis, the accuracy of the correction step in the interpolation calculation is sufficient for generating the 3D image. Combining the high-resolution second image 130, i.e., the 2D image or video frame, with the resolution-enhanced depth map then produces a high-resolution 2D+Z 3D image or video format output to the display, which can greatly raise the resolution of the light field image or light field video to 3840x2160 pixels.
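The 2D+Z packaging mentioned above can be sketched as packing the colour frame and the normalized depth map side by side — one common 2D+Z layout; actual autostereoscopic display formats vary, and the names here are illustrative:

```python
import numpy as np

def pack_2d_plus_z(color, depth):
    """Pack a colour frame and its depth map side by side into a single
    2D+Z frame. The depth map is normalised to 8 bits and replicated
    across three channels so both halves share one pixel format."""
    d = depth.astype(float)
    d = (255.0 * (d - d.min()) / max(d.max() - d.min(), 1e-9))
    z = np.repeat(d.astype(np.uint8)[:, :, None], 3, axis=2)
    return np.concatenate([color, z], axis=1)
```

For the 3840x2160 example in the text, the packed frame would be 7680x2160; displays that consume 2D+Z split it back into the two halves.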
Figure 5 is a flowchart of the three-dimensional imaging system of the present invention processing a target image. In step 501, the original depth map data of the first image 120 are obtained through the first imaging part 100; in step 502, the original depth map data are corrected; in step 503, an edge-directed or directional rendering method is applied, and in step 504 the high-resolution depth map produced by interpolation is obtained; in step 505, the data model of the second image is applied to the second image obtained in step 506, which serves as reference data, and the original depth map data of the first image 120 are corrected until the best interpolated high-resolution depth map is obtained.
Figure 6 is a flowchart of obtaining a target image by the three-dimensional imaging system of the present invention. In step 601, the first image sensor 104 acquires the first image 120 containing 9 secondary images or video frames; in step 602, the 9 secondary images or video frames are segmented and each secondary image is normalized; in step 603, image noise removal is applied to each secondary image; in step 604, synthetic aperture technology is used to decode the light field information acquired from the 9 secondary images, and digital refocusing technology is then used to produce in-focus images; in step 605, the 9 in-focus secondary images are used to build a lower-resolution scene depth map, combined with the second image, the high-resolution 2D image or video frame obtained by the second image sensor in step 608; in step 606, with reference to the second image, an edge-directed or edge-oriented interpolation algorithm raises the resolution of the depth map; in step 607, the resolution-enhanced depth map and the second image are combined to produce a high-resolution 3D image or video frame.
The above is intended only to illustrate the technical solution of the present invention; any person of ordinary skill in the art may modify and alter the above embodiments without departing from the spirit and scope of the present invention. Therefore, the scope of protection of the present invention shall be determined by the claims. The present invention has been explained above with examples; however, other embodiments than those described above are equally possible within the scope of this disclosure. The different features and steps of the present invention may be combined in ways other than those described. The scope of the present invention is limited only by the appended claims. More generally, those of ordinary skill in the art will readily appreciate that all parameters, dimensions, materials, and configurations described here are for exemplary purposes, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings of the present invention are used.

Claims (18)

  1. A dual-camera three-dimensional stereo imaging system, characterized by comprising:
    a light field imaging part that obtains a first image and a high-resolution imaging part that obtains a second image;
    wherein the light field imaging part includes a first imaging part, a first camera lens, and a second camera or camera lens; the first camera lens and the second camera or camera lens are located at the rear and front of the lens part respectively, with an entrance pupil plane and matching device placed between them; the entrance pupil plane and matching device can adapt to the different focal lengths of the second camera or camera lens; an internal reflection unit is formed between the first camera lens and the entrance pupil plane and matching device, the internal reflection unit being used to decompose and refract the captured first image into a plurality of secondary images with different angular offsets;
    the high-resolution imaging part further includes a second imaging part and a third camera lens, and at least one central axis adjustment device capable of adjusting the dual lens formed by the first camera lens and the second camera or camera lens and the single lens of the third camera lens, the central axis adjustment device keeping the axes of the dual lens and the single lens parallel;
    the light field imaging part and the high-resolution imaging part are configured so that the third camera lens obtains a second image vertically aligned with the front view among the plurality of secondary images, and the plurality of secondary images and the second image are output simultaneously.
  2. The system of claim 1, wherein the light-field camera portion and the high-resolution camera portion are as close to each other as possible, and their centers lie in the same vertical plane.
  3. The system of claim 1, wherein the angular offsets of the plurality of sub-images with different angular offsets range from 10 to 20 degrees.
  4. The system of claim 3, wherein the angular offset of the front view among the plurality of sub-images is 0 degrees.
  5. The system of any one of claims 1-4, characterized in that:
    the first imaging portion further comprises a first image sensor and a fly-eye lens that captures the first image; the fly-eye lens transmits the captured first image to the first image sensor; and
    the second imaging portion further comprises a second image sensor; the second image obtained by the third camera lens is transmitted to the second image sensor.
  6. The system of claim 5, characterized in that:
    the fly-eye lens is an array of multiple microlenses, the radius, thickness, and array pitch of each microlens being related to the size of the first image sensor.
  7. The system of any one of claims 1-4 and 6, characterized in that:
    the aperture and focal length of the first camera lens and of the second camera or still-camera lens are adjustable, the second camera or still-camera lens and the third camera lens are replaceable lenses, and the aperture of the second camera or still-camera lens is larger than the size of the internal reflection unit.
  8. The system of any one of claims 1-4 and 6, characterized in that:
    the entrance pupil plane and matching device is a pupil lens, the diameter of the pupil lens is larger than the diameter of the internal reflection unit, and the pupil lens allows the incident light rays of the light-field image to be refracted within the internal reflection unit.
  9. The system of any one of claims 1-4 and 6, characterized in that:
    each of the sub-images shows the scene with slight differences, and the size of the internal reflection unit and the focal length of each sub-image are calculated from the following equations (1) and (2):
    Figure PCTCN2020079099-appb-100001
    Figure PCTCN2020079099-appb-100002
    where FOV is the field of view of the second camera or still-camera lens;
    n is the refractive index of the internal reflection unit;
    r is the number of internal reflections;
    Z is the size of the internal reflection unit;
    f_lens is the focal length of the second camera or still-camera lens;
    f_sub is the focal length of the sub-image.
  10. A dual-camera three-dimensional imaging processing method, comprising the steps of:
    obtaining original depth map data of a first image through a light-field camera portion;
    correcting the original depth map data;
    obtaining an interpolated high-resolution depth map by means of edge-directed or directional rendering;
    simultaneously obtaining a second image with a high-resolution camera portion, and, using a data model, correcting the original depth map data of the first image against the second image serving as reference data, until the optimal interpolated high-resolution depth map is obtained.
  11. The processing method of claim 10, wherein:
    the light-field camera portion comprises a first imaging portion, a first camera lens, and a second camera or still-camera lens; the first camera lens and the second camera or still-camera lens are located at the rear and the front of the lens portion, respectively, with an entrance pupil plane and matching device placed between them, the entrance pupil plane and matching device being adaptable to different focal lengths of the second camera or still-camera lens; an internal reflection unit is formed between the first camera lens and the entrance pupil plane and matching device, the internal reflection unit being used to decompose and refract the captured first image into a plurality of sub-images with different angular offsets; the high-resolution camera portion further comprises a second imaging portion and a third camera lens, as well as at least one central-axis adjustment device capable of adjusting the dual lenses formed by the first camera lens and the second camera or still-camera lens and the single lens formed by the third camera lens, the central-axis adjustment device keeping the axes of the dual lenses and the single lens parallel; the light-field camera portion and the high-resolution camera portion are configured such that the third camera lens obtains a second image whose vertical orientation coincides with that of the front view among the plurality of sub-images, and the plurality of sub-images and the second image are output simultaneously.
  12. The processing method of claim 11, wherein:
    the first image consists of nine sub-images or video frames captured by the first image sensor;
    the nine sub-images or video frames are segmented, and each sub-image is normalized;
    image noise is removed from each sub-image; the light-field information captured in the nine sub-images is decoded using synthetic aperture techniques, and digital refocusing is then used to produce in-focus images;
    a lower-resolution scene depth map is built from the nine in-focus sub-images; this is combined with the second image, a high-resolution 2D image or video frame obtained by the second image sensor;
    with the second image as reference, an edge-directed or edge-oriented interpolation algorithm raises the resolution of the depth map; and the upsampled depth map and the second image are combined to produce a high-resolution 3D image or video frame.
  13. The processing method of claim 12, wherein each sub-image is normalized by the following equation:
    Figure PCTCN2020079099-appb-100003
    where I_n (n=1,2,...,9) denotes an image before normalization; I′_n (n=1,2,...,9) denotes the image after normalization; mirror(I_m, left, right, up, down) (m=1,2,3,4) denotes mirroring an image to the left, right, up, or down; rotate(I_k, π) (k=6,...,9) denotes image rotation.
  14. The processing method of claim 12, wherein digital refocusing is applied to the synthetic aperture images according to the following principles:
    Figure PCTCN2020079099-appb-100004
    Figure PCTCN2020079099-appb-100005
    L′(u,v,x′,y′) = L(u, v, kx′+(1-k)u, ky′+(1-k)v)      (6)
    I′(x′,y′) = ∫∫ L(u, v, kx′+(1-k)u, ky′+(1-k)v) du dv      (7)
    where I and I′ denote the coordinate systems of the primary and secondary imaging planes;
    L and L′ denote the energy on the primary and secondary imaging planes.
  15. The processing method of claim 12, wherein a binocular stereo matching algorithm is used to compute a disparity map, from which the lower-resolution scene depth map is then built.
  16. The processing method of claim 15, wherein the binocular stereo matching algorithm is a Semi-Global Matching algorithm, which establishes the following cost function over the disparity map:
    E(D) = Σ_p ( C(p, D_p) + Σ_{q∈N_p} P1·T[|D_p - D_q| = 1] + Σ_{q∈N_p} P2·T[|D_p - D_q| > 1] )
    where D denotes the disparity map;
    p and q are pixels in the image;
    C(p, D_p) denotes the cost of the current pixel when its disparity value is D_p;
    N_p denotes the pixels adjacent to pixel p, usually 8 of them;
    P1 and P2 are penalty coefficients: P1 applies when the disparity values of pixel p and an adjacent pixel differ by exactly 1, and P2 applies when they differ by more than 1;
    T[.] is a function that returns 1 if its argument is true and 0 otherwise;
    for each pixel of the image, the minimum cost of assigning it each candidate disparity value is computed and the costs along 8 directions are accumulated; the disparity value with the lowest accumulated cost is taken as that pixel's final disparity value, and computing this for every pixel in turn yields the entire disparity map.
  17. The processing method of claim 12, wherein the conversion between disparity values and depth values may use the following formula:
    d_p = f·b / D_p
    where d_p denotes the depth value of a given pixel;
    f is the normalized focal length;
    b is the baseline distance between two sub-images;
    D_p denotes the disparity value of the current pixel.
  18. The processing method of claim 10, wherein the edge-directed interpolation uses the following formulas:
    Figure PCTCN2020079099-appb-100008
    Figure PCTCN2020079099-appb-100009
    Figure PCTCN2020079099-appb-100010
    where m and n index the low-resolution and high-resolution image grids before and after interpolation;
    y[n] denotes the depth map produced by interpolation;
    x[m] and
    Figure PCTCN2020079099-appb-100011
    denote the original depth map and the corrected map, respectively;
    Figure PCTCN2020079099-appb-100012
    denotes the reference data of the second image 130;
    S and R denote the data model of the second image 130 and the operator of the edge-directed rendering step, respectively;
    λ is the gain of the correction process;
    k is the iteration index.
PCT/CN2020/079099 2019-06-04 2020-03-13 Dual-camera three-dimensional stereoscopic imaging system and processing method WO2020244273A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910481518.4 2019-06-04
CN201910481518.4A CN112040214A (zh) 2019-06-04 2019-06-04 Dual-camera three-dimensional stereoscopic imaging system and processing method

Publications (1)

Publication Number Publication Date
WO2020244273A1 true WO2020244273A1 (zh) 2020-12-10

Family

ID=73576536

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/079099 WO2020244273A1 (zh) 2019-06-04 2020-03-13 双摄像机三维立体成像***和处理方法

Country Status (2)

Country Link
CN (1) CN112040214A (zh)
WO (1) WO2020244273A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150381965A1 (en) * 2014-06-27 2015-12-31 Qualcomm Incorporated Systems and methods for depth map extraction using a hybrid algorithm
CN106651938A (zh) * 2017-01-17 2017-05-10 湖南优象科技有限公司 一种融合高分辨率彩色图像的深度图增强方法
CN107689050A (zh) * 2017-08-15 2018-02-13 武汉科技大学 一种基于彩色图像边缘引导的深度图像上采样方法
CN107991838A (zh) * 2017-11-06 2018-05-04 万维科研有限公司 自适应三维立体成像***
CN108805921A (zh) * 2018-04-09 2018-11-13 深圳奥比中光科技有限公司 图像获取***及方法
CN109074661A (zh) * 2017-12-28 2018-12-21 深圳市大疆创新科技有限公司 图像处理方法和设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102595171B (zh) * 2012-02-03 2014-05-14 浙江工商大学 一种多通道空时编码孔径的动态光场成像方法和成像***
CN102663712B (zh) * 2012-04-16 2014-09-17 天津大学 基于飞行时间tof相机的深度计算成像方法
CN106780383B (zh) * 2016-12-13 2019-05-24 长春理工大学 Tof相机的深度图像增强方法

Also Published As

Publication number Publication date
CN112040214A (zh) 2020-12-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20817940; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20817940; Country of ref document: EP; Kind code of ref document: A1)