WO2019014845A1 - Method for synthesizing a light field based on a prism - Google Patents

Method for synthesizing a light field based on a prism

Info

Publication number
WO2019014845A1
WO2019014845A1 PCT/CN2017/093348 CN2017093348W WO2019014845A1 WO 2019014845 A1 WO2019014845 A1 WO 2019014845A1 CN 2017093348 W CN2017093348 W CN 2017093348W WO 2019014845 A1 WO2019014845 A1 WO 2019014845A1
Authority
WO
WIPO (PCT)
Prior art keywords
projection
camera
image information
spatial
same
Prior art date
Application number
PCT/CN2017/093348
Other languages
English (en)
French (fr)
Inventor
李乔
Original Assignee
辛特科技有限公司
李乔
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 辛特科技有限公司, 李乔 filed Critical 辛特科技有限公司
Priority to PCT/CN2017/093348 priority Critical patent/WO2019014845A1/zh
Priority to CN201780093188.0A priority patent/CN111194430B/zh
Publication of WO2019014845A1 publication Critical patent/WO2019014845A1/zh

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B37/00Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • G03B37/04Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe with cameras or projectors providing touching or overlapping fields of view
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation

Definitions

  • the invention relates to the technical field of light field synthesis, in particular to a method for synthesizing a light field based on a prism.
  • 3D equipment based on the "polarization" principle cannot solve the vertigo problem that arises during use.
  • in a natural environment, left-right parallax and the eye's focusing system corroborate each other, so that the brain knows the two functions are working in concert.
  • when viewing "polarization"-based 3D images, the focusing system is not engaged; the brain's two distance-sensing systems then disagree with what is observed in a natural environment, and this discrepancy makes the brain very uncomfortable, producing a sense of vertigo.
  • in order to solve the vertigo problem in 3D video, the industry has introduced solutions based on light field theory; a representative company in the 3D capture field is Lytro.
  • Lytro's light field camera uses a microlens array to record the position and direction of individual light rays, but the lens-array approach currently has several major drawbacks: pixel loss is large, so a 40-megapixel light field photo yields a normal output resolution of only 2450×1634 (about 4 megapixels); and it is slow, because a single photo records as much as 50 MB of data, so the camera's storage and processing cannot keep up when shooting too quickly, and each picture takes several seconds to load.
  • a representative company in the 3D display field is Magic Leap, whose solution is likewise based on light field theory; however, it uses fiber-optic scanning to realize the light field display, and because the rotation, angle, and light emission of the optical fiber must all be controlled, the approach involves certain control difficulties.
  • in addition, the multi-focus display method proposed by Magic Leap uses an eye-tracking system to detect the eye's point of gaze and then re-renders the picture, adjusting the image projected to the eye; since only an image of one depth is projected at a time, it is difficult to restore the entire light field completely, and difficult to perform light field restoration from different spatial angles.
  • the plurality of projection groups in the projection wall respectively play images of different viewing angles, each projection head in the same projection group simultaneously plays images of different spatial depths at the same viewing angle, and the images played by the plurality of projection heads in the same projection group are combined into a spatial depth image of the same viewing-angle space region;
  • the isosceles right angle prisms are arranged in the same direction.
  • each projection head in the same projection group plays an image of a different spatial depth at the same viewing angle and projects it onto one right-angle face of the prism.
  • the complete spatial image information is collected by:
  • a1) a plurality of cameras are arrayed into a camera group, a plurality of camera groups are arrayed into a camera wall, and the cameras within the same camera group have different focal lengths;
  • the cameras send the collected image information to the image processing computer, and the image processing computer performs denoising and image information verification on the collected image information to obtain the complete spatial image information.
  • the spatial image information collected by each of the camera groups is in one-to-one correspondence with the spatial image played by each of the projection groups.
  • the camera wall is formed by arraying a plurality of the camera groups on a planar or spherical base.
  • the unfocused portion of the acquired image information is removed by a denoising process.
  • the invention provides a method for synthesizing a light field based on a prism, in which the collection of image information of different viewing angles and different spatial depths captured by the arrayed camera wall is played back, region by region, by a plurality of projection heads at different spatial depths; the entire light field can thus be restored quickly and completely, and light field restoration at different viewing angles and different depths of the spatial region can be realized.
  • FIG. 1a to 1b are schematic views showing a light field collecting device of the present invention
  • FIG. 3 is a schematic view of the camera wall of the present invention collecting images of different viewing angles
  • FIG. 4 is a schematic diagram showing an image of a corresponding spatial region of a camera group in an embodiment of the present invention
  • FIG. 5 is a schematic diagram showing the same camera group collecting different spatial depth images according to the present invention.
  • Figure 6 is a flow chart showing the synthesis of a light field based on a prism according to the present invention.
  • FIG. 7 is a schematic view showing the projection wall of the present invention playing different perspective images
  • Figure 8 is a schematic diagram showing the projection head of the present invention playing an image having different spatial depths onto a synthetic light field on a prism.
  • a light field collecting device according to an embodiment of the invention includes a convex spherical base, a camera wall 100 mounted on the outer side of the spherical base, and an image processing computer.
  • a spherical base is used, and the camera wall 100 is mounted on the outer side 100b of the spherical base to enable the camera wall 100 to collect image information of the spatial area in all directions.
  • the base for mounting the camera wall 100 can be planar or spherical. This embodiment preferably employs a spherical base.
  • the camera wall 100 includes a plurality of arrayed camera groups 110, each camera group 110 includes a plurality of arrayed cameras 111, the cameras 111 in the same camera group 110 are closely arranged so that each camera can acquire the complete image information of the same viewing-angle space region, and the cameras 111 within the same camera group 110 have different focal lengths.
  • the cameras 111 transmit data to the image processing computer on the inner side 100a of the spherical base; specifically, the data transmission may be over a wired connection or wireless.
  • the plurality of camera groups 110 of the camera wall 100 are used to collect image information of different viewing angles.
  • the plurality of cameras 111 in the same camera group 110 are used to collect image information of different spatial depths at the same viewing angle, and the image processing computer performs denoising and image information verification on the collected image information.
  • the method for collecting the complete spatial image information disclosed in the present invention includes:
  • S101: acquiring image information, in which a plurality of light field cameras are arrayed into a camera group, a plurality of camera groups are arrayed into a camera wall, the cameras in the same camera group have different focal lengths, and the camera wall is formed by arraying the camera groups on the outer side of the convex spherical base to capture image information of the spatial region.
  • the multiple camera groups of the camera wall collect image information of different viewing angles, and the multiple cameras in the same camera group collect image information of different spatial depths at the same viewing angle.
  • the camera wall of the present invention collects images of different viewing angles.
  • the camera wall 100 is mounted on the outer side of the spherical base.
  • Different camera groups 110 collect image information of different viewing angles.
  • the adjacent three camera groups respectively collect image information of the space area A, the space area B, and the space area C.
  • the spatial area acquired between adjacent camera sets should have overlapping portions to ensure the integrity of the acquired spatial image information.
  • FIG. 4 is a schematic diagram showing an image of a corresponding spatial region of a camera group according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of capturing a different spatial depth image of the same camera group according to the present invention.
  • the plurality of cameras in the same camera group simultaneously collect image information of different spatial depths.
  • the plurality of cameras in the same camera group use different image distances to acquire image information of different spatial depths at the same focal length.
  • multiple cameras in the same camera group can acquire image information of different spatial depths with different focal lengths and the same image distance.
  • taking the spatial region A corresponding to the mth camera group as an example, the image information of different spatial depths collected from spatial region A includes a first image (puppy) 201, a second image (tree) 202, and a third image (sun) 203; the first image (puppy) 201 is closest to the camera wall, the second image (tree) 202 is next, and the third image (sun) 203 is farthest from the camera wall.
  • the cameras arrayed in the mth camera group respectively collect image information of different spatial depths; according to the light field collection method disclosed by the invention, the cameras of the same camera group use different focal lengths, so image information at any spatial depth is always focused and imaged in some camera.
  • the first camera in the mth camera group focuses the first image (puppy) 201, and the first image (puppy) 201 is clearly imaged in the image information captured by the first camera.
  • the second image 202 (tree) and the third image (sun) 203 are blurred.
  • the second image (tree) 202 is clearly imaged, and the first image 201 (puppy) and the third image (sun) 203 are blurred;
  • in the image information captured by the nth camera, the third image (sun) 203 is clearly imaged, and the first image (puppy) 201 and the second image (tree) 202 are blurred.
  • each image in the embodiment itself spans different spatial depths, and multiple cameras respectively acquire image information at the different spatial depths of the same image.
  • for example, within the first image (puppy) 201, the puppy's eyes are closer to the camera wall and the puppy's tail is farther from it, so cameras with different focal lengths respectively collect the spatial depth image information of the first image (puppy) 201.
  • after collection by the multiple cameras, the complete spatial-depth image information of spatial region A is collected.
  • S102: image information denoising, in which each camera of the camera wall sends its collected image information to the image processing computer; since each camera focuses only on image information at a certain spatial depth, the image information collected by each camera has one and only one focus point, and the other, unfocused portions are removed by the denoising process.
  • the denoising may be performed according to methods known to those skilled in the art; preferably, it is performed by image matting.
  • S103: image information verification, in which the image information collected by each camera is verified after denoising, thereby ensuring that the image information collected by each camera has one and only one focus point.
  • S104: spatial image assembly, in which the image information collected by the multiple cameras in the same camera group is combined into region-A image information with complete spatial depth, and the multiple camera groups in the camera wall combine the image information of different viewing angles into spatial images with complete spatial depth, achieving complete light field acquisition.
  • Methods of synthesizing the light field include:
  • the projection wall acquires spatial image information with complete depth information: a plurality of projection heads are arrayed into a projection group, a plurality of projection groups are arrayed into a projection wall, a set of isosceles right-angle prisms is arranged in front of each projection group, and the projection wall acquires the complete spatial image information captured by the camera wall.
  • the projection wall of the present invention plays images of different viewing angles.
  • the projection wall 400 is an array of multiple projection groups 401.
  • the projection groups 401 in the projection wall 400 respectively play images of different viewing angles; for example, the image information of region A, region B, and region C collected by three adjacent camera groups is played as images A', B', and C' through three adjacent projection groups.
  • a set of isosceles right-angle prisms 402 is arranged in front of each projection group 401, and the isosceles right-angle prisms are arranged in the same direction.
  • the image information of the A region, the B region, and the C region is restored to the light fields A", B", and C" by the triangular prism group 402.
  • the plurality of projection heads in the same projection group form, through the prism group, the spatial depth image of the same viewing-angle space region; the invention is described below by taking the restoration of the spatial image information of region A as an example.
  • the projection heads of the present invention play images of different spatial depths onto the prism to synthesize the light field; the image information of different spatial depths collected from the spatial region A corresponding to the mth camera group includes the first image (puppy), the second image (tree), and the third image (sun).
  • an isosceles right-angle prism group 402 is arranged in front of the projection group 401 that plays the region-A image, and each projection head of that projection group simultaneously plays the images of different spatial depths at the same viewing angle, namely the first image (puppy) 201a, the second image (tree) 202a, and the third image (sun) 203a, projecting them onto a right-angle face of the isosceles right-angle prism; after reflection by the prism group 402, the spatial images of different depths of region A are restored into A".
  • S305: synthesizing the entire light field, in which the spatial depth images of different viewing angles played by the multiple projection groups are combined to synthesize the entire light field.
  • the invention provides a method for synthesizing a light field based on a prism, in which the collection of image information of different viewing angles and different spatial depths captured by the arrayed camera wall is played back, region by region, by multiple projection heads at different spatial depths; the entire light field can be restored completely and quickly.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

A method for synthesizing a light field based on a prism, comprising: a) arraying a plurality of projection heads into a projection group (401), arraying a plurality of projection groups (401) into a projection wall (400), arranging a set of isosceles right-angle prisms (402) in front of the projection wall (400), and having the projection wall (400) acquire the complete spatial image information collected by a camera wall (100); b) the plurality of projection groups (401) in the projection wall (400) respectively play images of different viewing angles, each projection head in the same projection group (401) simultaneously plays images of different spatial depths at the same viewing angle, and the images played by the plurality of projection heads in the same projection group (401) form a spatial depth image containing depth information at the same viewing angle; c) the spatial depth images of different viewing angles played by the plurality of projection groups (401) are combined to synthesize the entire light field. The entire light field can be restored quickly and completely, and light field restoration at different viewing angles and different depths of a spatial region can be realized.

Description

Method for synthesizing a light field based on a prism
Technical Field
The present invention relates to the technical field of light field synthesis, and in particular to a method for synthesizing a light field based on a prism.
Background Art
In 1839, the British scientist Wheatstone discovered a remarkable phenomenon: the distance between a person's two eyes is about 5 cm (the European average), so when looking at any object the two eyes view it from slightly different angles, i.e., there are two viewpoints. This subtle difference in viewing angle, transmitted through the retinas to the brain, allows the brain to judge how near or far objects are and produces a strong sense of depth. This is the "polarization" principle, and to date almost all 3D imaging technology has been developed on the basis of this principle.
However, 3D equipment based on the "polarization" principle cannot solve the vertigo problem that arises during use. In a natural environment, left-right parallax and the eye's focusing system corroborate each other, so that the brain knows the two functions are working in concert. When a user watches 3D images based on the "polarization" principle, the eye's focusing system is not engaged; the brain's two distance-sensing systems then disagree with what is observed in a natural environment, and this discrepancy makes the brain very uncomfortable, at which point a sense of vertigo arises.
To solve the vertigo problem in 3D video, the industry has introduced solutions based on light field theory. A representative company in the field of 3D capture is Lytro. Lytro's light field camera uses a microlens array to record the position and direction of individual light rays, but the lens-array approach currently has several major drawbacks: pixel loss is large, so a 40-megapixel light field photo produces a normal photo output resolution of only 2450×1634 (about 4 megapixels); and it is slow, because a single photo records as much as 50 MB of data, so the camera's storage and processing cannot keep up when shooting too quickly, and each picture takes several seconds to load.
In the field of 3D display, a representative solution is the light-field-theory-based solution made by Magic Leap, but that solution uses fiber-optic scanning technology to realize the light field display; because the rotation, angle, and light emission of the optical fiber must all be controlled, the approach involves certain control difficulties. In addition, the multi-focus display method proposed by Magic Leap uses an eye-tracking system to detect the eye's point of gaze and then re-render the picture, adjusting the image projected to the eye; since only an image of one depth is projected at a time, it is difficult to restore the entire light field completely, and difficult to perform light field restoration from different spatial angles.
Therefore, to solve the above problems, a prism-based method for synthesizing a light field that can quickly and completely restore the entire light field from different spatial angles is needed.
Summary of the Invention
The object of the present invention is to provide a method for synthesizing a light field based on a prism, the method comprising:
a) arraying a plurality of projection heads into a projection group, arraying a plurality of the projection groups into a projection wall, arranging a set of isosceles right-angle prisms in front of each projection group, and having the projection wall acquire the complete spatial image information collected by a camera wall;
b) the plurality of projection groups in the projection wall respectively play images of different viewing angles, each projection head in the same projection group simultaneously plays images of different spatial depths at the same viewing angle, and the images played by the plurality of projection heads in the same projection group are combined into a spatial depth image of the same viewing-angle space region;
c) the spatial depth images of different viewing angles played by the plurality of projection groups are combined to synthesize the entire light field.
Preferably, the isosceles right-angle prisms are arranged in the same direction.
Preferably, each projection head in the same projection group plays an image of a different spatial depth at the same viewing angle and projects it onto one right-angle face of the prism.
Preferably, the complete spatial image information is collected by the following method:
a1) arraying a plurality of cameras into a camera group and a plurality of the camera groups into a camera wall, wherein the cameras within the same camera group have different focal lengths;
a2) the plurality of camera groups of the camera wall collect image information of different viewing angles, and the plurality of cameras in the same camera group collect image information of different spatial depths at the same viewing angle;
a3) the cameras send the collected image information to an image processing computer, and the image processing computer performs denoising and image information verification on the collected image information to obtain the complete spatial image information.
Preferably, the spatial image information collected by each camera group corresponds one-to-one with the spatial image played by each projection group.
Preferably, the camera wall is formed by arraying a plurality of the camera groups on a planar or spherical base.
Preferably, the unfocused portions of the collected image information are removed by denoising.
In the prism-based light field synthesis method provided by the present invention, the collection of image information of different viewing angles and different spatial depths captured by the arrayed camera wall is played back, region by region, by a plurality of projection heads at different spatial depths; the entire light field can thus be restored quickly and completely, and light field restoration at different viewing angles and different depths of the spatial region can be realized.
It should be understood that both the foregoing general description and the following detailed description are exemplary and explanatory, and should not be taken as limiting what is claimed by the present invention.
Brief Description of the Drawings
With reference to the accompanying drawings, further objects, functions, and advantages of the present invention will be clarified by the following description of embodiments of the invention, in which:
FIGS. 1a-1b schematically show the light field collecting device of the present invention;
FIG. 2 shows a flow chart of the light field collection method of the present invention;
FIG. 3 is a schematic view of the camera wall of the present invention collecting images of different viewing angles;
FIG. 4 is a schematic view of the image of the spatial region corresponding to a camera group in an embodiment of the present invention;
FIG. 5 is a schematic view of the same camera group of the present invention collecting images of different spatial depths;
FIG. 6 shows a flow chart of prism-based light field synthesis according to the present invention;
FIG. 7 is a schematic view of the projection wall of the present invention playing images of different viewing angles;
FIG. 8 is a schematic view of the projection heads of the present invention playing images of different spatial depths onto the prisms to synthesize the light field.
Detailed Description of the Embodiments
The objects and functions of the present invention, and the methods used to achieve these objects and functions, will be clarified by reference to exemplary embodiments. However, the present invention is not limited to the exemplary embodiments disclosed below; it may be implemented in different forms. The substance of the specification is merely to help those skilled in the relevant art gain a comprehensive understanding of the specific details of the invention.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same reference numerals denote the same or similar components or the same or similar steps, unless otherwise indicated.
The content of the present invention is described in detail below through specific embodiments with reference to the drawings. To explain the light field restoration method of the invention clearly, the light field collection process of the invention is explained first. As shown in the schematic diagrams of the light field collecting device of the invention in FIGS. 1a-1b, a light field collecting device according to an embodiment of the invention includes a convex spherical base, a camera wall 100 mounted on the outer side of the spherical base, and an image processing computer. This embodiment uses a spherical base; mounting the camera wall 100 on the outer side 100b of the spherical base enables the camera wall 100 to collect image information of the spatial region in all directions. In some embodiments, the base on which the camera wall 100 is mounted may be planar or spherical; this embodiment preferably uses a spherical base.
The camera wall 100 includes a plurality of arrayed camera groups 110, and each camera group 110 includes a plurality of arrayed cameras 111. The cameras 111 within the same camera group 110 are closely arranged so that each camera can collect the complete image information of the same viewing-angle space region, and the cameras 111 within the same camera group 110 have different focal lengths. The cameras 111 transmit data to the image processing computer on the inner side 100a of the spherical base; specifically, the data transmission may be over a wired connection or wireless.
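As one way to picture this arrangement, the following minimal Python sketch lays out outward-facing camera-group positions on a convex spherical base. The uniform angular spacing, the radius value, and the name `spherical_camera_wall` are assumptions made for illustration; the patent does not specify a particular layout.

```python
import math

def spherical_camera_wall(radius=0.5, n_lat=6, n_lon=12):
    """Return outward-facing mounting positions and unit view directions for
    camera groups arrayed on a convex spherical base.
    Uniform angular spacing is an assumption; the patent does not fix a layout."""
    groups = []
    for i in range(1, n_lat + 1):              # skip the poles
        theta = math.pi * i / (n_lat + 1)      # polar angle
        for j in range(n_lon):
            phi = 2 * math.pi * j / n_lon      # azimuth
            n = (math.sin(theta) * math.cos(phi),
                 math.sin(theta) * math.sin(phi),
                 math.cos(theta))
            groups.append({"position": tuple(radius * c for c in n),
                           "view_direction": n})   # each group faces outward
    return groups

groups = spherical_camera_wall()
print(len(groups), groups[0])
```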
The camera groups 110 of the camera wall 100 are used to collect image information of different viewing angles, the cameras 111 in the same camera group 110 are used to collect image information of different spatial depths at the same viewing angle, and the image processing computer performs denoising and image information verification on the collected image information.
As shown in the flow chart of the light field collection method of the invention in FIG. 2, the method disclosed by the invention for collecting the complete spatial image information includes:
S101: collecting image information. A plurality of light field cameras are arrayed into a camera group, a plurality of camera groups are arrayed into a camera wall, the cameras within the same camera group have different focal lengths, and the camera wall is formed by arraying the camera groups on the outer side of the convex spherical base to collect image information of the spatial region.
The camera groups of the camera wall collect image information of different viewing angles, and the cameras in the same camera group collect image information of different spatial depths at the same viewing angle. As shown in FIG. 3, a schematic view of the camera wall of the invention collecting images of different viewing angles, the camera wall 100 is mounted on the outer side of the spherical base, and different camera groups 110 collect image information of different viewing angles. In this embodiment, three adjacent camera groups respectively collect the image information of spatial region A, spatial region B, and spatial region C. The spatial regions captured by adjacent camera groups should overlap, so as to ensure the completeness of the collected spatial image information.
The cameras within the same camera group are closely arranged so that each camera can collect the complete image information of the same viewing-angle space region. FIG. 4 is a schematic view of the image of the spatial region corresponding to a camera group in an embodiment of the invention, and FIG. 5 is a schematic view of the same camera group of the invention collecting images of different spatial depths. The cameras within the same camera group simultaneously collect image information of different spatial depths; in this embodiment, the cameras within the same camera group use different image distances with the same focal length to collect image information of different spatial depths. In some embodiments, the cameras within the same camera group may instead use different focal lengths with the same image distance to collect image information of different spatial depths.
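The choice just described, same focal length with different image distances or same image distance with different focal lengths, follows the standard thin-lens relation (a textbook identity, not stated in the patent), where f is the focal length, d_i the image distance, and d_o the in-focus object depth:

\[ \frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i} \qquad\Longrightarrow\qquad d_o = \frac{f\,d_i}{d_i - f} \]

Holding f fixed and varying d_i shifts the in-focus depth d_o from camera to camera, and holding d_i fixed and varying f does the same, which corresponds to the two alternatives above.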
This embodiment takes the spatial region A corresponding to the mth camera group as an example. The image information of different spatial depths collected from spatial region A includes a first image (puppy) 201, a second image (tree) 202, and a third image (sun) 203; the first image (puppy) 201 is closest to the camera wall, the second image (tree) 202 is next, and the third image (sun) 203 is farthest from the camera wall. The cameras arrayed in the mth camera group respectively collect image information of different spatial depths; according to the light field collection method disclosed by the invention, the cameras of the same camera group use different focal lengths, so image information at any spatial depth is always focused and imaged in some camera.
By way of example, the first camera in the mth camera group focuses on the first image (puppy) 201: in the image information captured by the first camera, the first image (puppy) 201 is clearly imaged, while the second image (tree) 202 and the third image (sun) 203 are blurred. Likewise, in the image information captured by the second camera, the second image (tree) 202 is clearly imaged, while the first image (puppy) 201 and the third image (sun) 203 are blurred; and in the image information captured by the nth camera, the third image (sun) 203 is clearly imaged, while the first image (puppy) 201 and the second image (tree) 202 are blurred. It should be understood that each image in the embodiment itself spans different spatial depths, and multiple cameras respectively acquire image information at the different spatial depths of the same image. For example, within the first image (puppy) 201, the puppy's eyes are closer to the camera wall and the puppy's tail is farther from it, so cameras with different focal lengths respectively collect the spatial depth image information of the first image (puppy) 201. After collection by the multiple cameras, the complete spatial-depth image information of spatial region A is obtained.
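A minimal Python sketch of this example, applying the thin-lens relation above under assumed values (the shared image distance and the depths of the puppy, tree, and sun are invented for illustration; the patent gives no optical parameters):

```python
def focal_length_for(depth_m, image_distance_m):
    """Thin-lens focal length that brings an object at depth_m into focus
    when the sensor sits image_distance_m behind the lens."""
    return depth_m * image_distance_m / (depth_m + image_distance_m)

# Illustrative depths for the three elements of region A; the patent gives no numbers.
scene = {
    "first image (puppy) 201": 4.0,      # metres from the camera wall
    "second image (tree) 202": 30.0,
    "third image (sun) 203": 1.5e11,     # effectively at infinity
}

d_i = 0.05  # shared image distance in metres (assumed value)
for element, depth in scene.items():
    f = focal_length_for(depth, d_i)
    print(f"{element}: depth {depth:g} m -> focal length {f * 1000:.4f} mm")
```

Each camera of the group therefore needs a slightly different focal length to bring its own depth into sharp focus, consistent with the statement that the cameras in the same group have different focal lengths.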
S102: denoising the image information. Each camera of the camera wall sends its collected image information to the image processing computer. Since each camera focuses only on image information at a certain spatial depth, the image information collected by each camera has one and only one focus point; the other, unfocused portions are subjected to denoising, and the denoising removes the unfocused portions of the collected image information. The denoising may be carried out by methods known to those skilled in the art; preferably, it is performed by image matting.
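The patent states only that the unfocused portions are removed, preferably by matting, without fixing an algorithm. The sketch below is one possible realization under that assumption, using a local sharpness measure to mask out the blurred regions of a single camera's image:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def keep_in_focus(gray, window=15, threshold=1e-3):
    """Keep only the in-focus portion of one camera's image.
    Using the local average of the squared Laplacian as the sharpness cue and a
    fixed threshold are assumptions made for this sketch."""
    sharpness = uniform_filter(laplace(gray.astype(np.float64)) ** 2, size=window)
    mask = sharpness > threshold              # True where this camera is in focus
    return np.where(mask, gray, 0.0), mask

# usage (hypothetical): layer_201, mask_201 = keep_in_focus(image_from_camera_1)
```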
S103: verifying the image information. After denoising, the image information collected by each camera is verified, ensuring that the image information collected by each camera has one and only one focus point.
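A minimal sketch of one way to perform this check, assuming that "one and only one focus point" can be read as one connected in-focus region in the mask produced by the denoising step (the patent does not define the verification procedure):

```python
from scipy.ndimage import label

def verify_single_focus(mask):
    """Check the condition stated in S103: the denoised image from one camera
    should contain one and only one focused region."""
    _, num_regions = label(mask)
    return num_regions == 1

# usage (hypothetical): assert verify_single_focus(mask_201)
```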
S104: assembling the spatial images. The image information collected by the cameras in the same camera group is combined into region-A image information with complete spatial depth, and the camera groups in the camera wall combine the collected image information of different viewing angles into spatial images with complete spatial depth, realizing complete light field acquisition.
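A minimal data-structure sketch of this assembly step; the record layout `(view_angle_index, depth_index, masked_image)` and the function name are assumptions for illustration:

```python
from collections import defaultdict

def assemble_light_field(focused_layers):
    """Arrange the verified single-focus layers as described in S104.
    `focused_layers` is an iterable of (view_angle_index, depth_index, masked_image)
    records. The inner dict for one view angle is the region image with complete
    spatial depth; the outer dict over all view angles is the complete capture."""
    light_field = defaultdict(dict)
    for view, depth, layer in focused_layers:
        light_field[view][depth] = layer      # one focused layer per (view, depth)
    return dict(light_field)
```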
The light field synthesis process of the prism-based light field synthesis method provided by the present invention is described below. As shown in the flow chart of prism-based light field synthesis of the invention in FIG. 6, a method for synthesizing a light field based on a prism includes:
S301: the projection wall acquires spatial image information with complete depth information. A plurality of projection heads are arrayed into a projection group, a plurality of projection groups are arrayed into a projection wall, a set of isosceles right-angle prisms is arranged in front of each projection group, and the projection wall acquires the complete spatial image information collected by the camera wall.
S302: playing images of different viewing angles. As shown in FIG. 7, a schematic view of the projection wall of the invention playing images of different viewing angles, the projection wall 400 is an array of multiple projection groups 401, and the projection groups 401 in the projection wall 400 respectively play images of different viewing angles; for example, the image information of region A, region B, and region C collected by three adjacent camera groups is played as images A', B', and C' through three adjacent projection groups.
S303: simultaneously playing images of different spatial depths at the same viewing angle. According to this embodiment of the invention, a set of isosceles right-angle prisms 402 is arranged in front of each projection group 401, and the isosceles right-angle prisms are arranged in the same direction. Through the prism group 402, the image information of region A, region B, and region C is restored into light fields A", B", and C".
S304: the projection heads in the same projection group form, through the prism group, the spatial depth image of the same viewing-angle space region. The invention is described below by taking the restoration of the spatial image information of region A as an example. As shown in FIG. 8, a schematic view of the projection heads of the invention playing images of different spatial depths onto the prisms to synthesize the light field, the image information of different spatial depths collected from the spatial region A corresponding to the mth camera group includes the first image (puppy), the second image (tree), and the third image (sun).
An isosceles right-angle prism group 402 is arranged in front of the projection group 401 that plays the region-A image. Each projection head of the projection group playing the region-A image simultaneously plays the images of different spatial depths at the same viewing angle, namely the first image (puppy) 201a, the second image (tree) 202a, and the third image (sun) 203a, projecting them onto a right-angle face of the isosceles right-angle prism; after reflection by the prism group 402, the spatial images of different depths of region A are restored into A".
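The patent does not detail the optical path inside the prism group. As a geometric illustration only, the sketch below applies the standard vector-reflection formula to a ray that enters through a right-angle face of a 45° isosceles right-angle prism and meets the hypotenuse; the 2-D simplification and all coordinates are assumptions:

```python
import numpy as np

def reflect(direction, normal):
    """Reflect a ray direction about a surface with normal `normal`."""
    d = np.asarray(direction, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# Hypotenuse of an isosceles right-angle prism inclined at 45 degrees to the
# projection axis: its unit normal lies along (1, 1)/sqrt(2) in this 2-D sketch.
hypotenuse_normal = np.array([1.0, 1.0]) / np.sqrt(2.0)

incoming = np.array([1.0, 0.0])   # ray from a projection head, entering a right-angle face head-on
outgoing = reflect(incoming, hypotenuse_normal)
print(outgoing)                   # -> [ 0. -1.]: the projected image is folded 90 degrees toward the viewer
```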
S305: synthesizing the entire light field. The spatial depth images of different viewing angles played by the multiple projection groups are combined to synthesize the entire light field.
In the prism-based light field synthesis method provided by the present invention, the collection of image information of different viewing angles and different spatial depths captured by the arrayed camera wall is played back, region by region, by multiple projection heads at different spatial depths, so that the entire light field can be restored quickly and completely.
Other embodiments of the invention will be readily apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the invention being defined by the claims.

Claims (7)

  1. A method for synthesizing a light field based on a prism, characterized in that the method comprises:
    a) arraying a plurality of projection heads into a projection group, arraying a plurality of the projection groups into a projection wall, arranging a set of isosceles right-angle prisms in front of each projection group, and having the projection wall acquire the complete spatial image information collected by a camera wall;
    b) the plurality of projection groups in the projection wall respectively play images of different viewing angles, each projection head in the same projection group simultaneously plays images of different spatial depths at the same viewing angle, and the images played by the plurality of projection heads in the same projection group form a spatial depth image of the same viewing-angle space region;
    c) the spatial depth images of different viewing angles played by the plurality of projection groups are combined to synthesize the entire light field.
  2. The method according to claim 1, characterized in that the isosceles right-angle prisms are arranged in the same direction.
  3. The method according to claim 1, characterized in that each projection head in the same projection group plays an image of a different spatial depth at the same viewing angle and projects it onto one right-angle face of the prism.
  4. The method according to claim 1, characterized in that the complete spatial image information is collected by the following method:
    a1) arraying a plurality of cameras into a camera group and a plurality of the camera groups into a camera wall, wherein the cameras within the same camera group have different focal lengths;
    a2) the plurality of camera groups of the camera wall collect image information of different viewing angles, and the plurality of cameras in the same camera group collect image information of different spatial depths at the same viewing angle;
    a3) the cameras send the collected image information to an image processing computer, and the image processing computer performs denoising and image information verification on the collected image information to obtain the complete spatial image information.
  5. The method according to claim 4, characterized in that the spatial image information collected by each camera group corresponds one-to-one with the spatial image played by each projection group.
  6. The method according to claim 4, characterized in that the camera wall is formed by arraying a plurality of the camera groups on a planar or spherical base.
  7. The method according to claim 4, characterized in that the unfocused portions of the collected image information are removed by denoising.
PCT/CN2017/093348 2017-07-18 2017-07-18 Method for synthesizing a light field based on a prism WO2019014845A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/093348 WO2019014845A1 (zh) 2017-07-18 2017-07-18 Method for synthesizing a light field based on a prism
CN201780093188.0A CN111194430B (zh) 2017-07-18 2017-07-18 Method for synthesizing a light field based on a prism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/093348 WO2019014845A1 (zh) 2017-07-18 2017-07-18 Method for synthesizing a light field based on a prism

Publications (1)

Publication Number Publication Date
WO2019014845A1 true WO2019014845A1 (zh) 2019-01-24

Family

ID=65014920

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/093348 WO2019014845A1 (zh) 2017-07-18 2017-07-18 Method for synthesizing a light field based on a prism

Country Status (2)

Country Link
CN (1) CN111194430B (zh)
WO (1) WO2019014845A1 (zh)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1983363B1 (en) * 2006-02-06 2020-01-15 Nippon Telegraph And Telephone Corporation 3-dimensional display device and image presentation method
TWI476449B (zh) * 2012-04-24 2015-03-11 Univ Minghsin Sci & Tech Naked-eye three-dimensional rear-projection display device
EP3088954A1 (en) * 2015-04-27 2016-11-02 Thomson Licensing Method and device for processing a lightfield content

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008012821A2 (en) * 2006-07-25 2008-01-31 Humaneyes Technologies Ltd. Computer graphics imaging
CN101644884A (zh) * 2009-07-13 2010-02-10 浙江大学 Stitched-field-of-view stereoscopic three-dimensional display device and method
CN201947404U (zh) * 2010-04-12 2011-08-24 范治江 Panoramic video real-time stitching display system
CN103986917A (zh) * 2014-06-03 2014-08-13 中科融通物联科技无锡有限公司 Multi-view thermal imaging monitoring system
CN104535182A (zh) * 2014-12-09 2015-04-22 中国科学院上海技术物理研究所 Infrared hyperspectral imaging system with object-space field-of-view stitching
CN104867140A (zh) * 2015-05-13 2015-08-26 中国科学院光电技术研究所 Large-field-of-view positioning system based on a bionic compound eye

Also Published As

Publication number Publication date
CN111194430B (zh) 2021-10-26
CN111194430A (zh) 2020-05-22

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17918273

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17918273

Country of ref document: EP

Kind code of ref document: A1