WO2015127847A1 - A depth map super-resolution processing method - Google Patents

A depth map super-resolution processing method

Info

Publication number
WO2015127847A1
WO2015127847A1 (PCT/CN2015/072180 · CN2015072180W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
resolution
depth
super
block
Prior art date
Application number
PCT/CN2015/072180
Other languages
English (en)
French (fr)
Inventor
张磊
李阳光
张永兵
Original Assignee
清华大学深圳研究生院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学深圳研究生院 filed Critical 清华大学深圳研究生院
Publication of WO2015127847A1 publication Critical patent/WO2015127847A1/zh
Priority to US15/216,332 priority Critical patent/US10115182B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/139Format conversion, e.g. of frame-rate or size
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Definitions

  • The present invention relates to the field of computer image processing, and in particular to a super-resolution processing method for depth maps based on image matching.
  • Super-resolution processing is one of the research hotspots in computer vision and image/video processing. It processes natural images of low resolution and little detail to generate high-resolution images containing more detail, and is thus a technique for increasing the resolution of an original image.
  • Super-resolution processing has been widely applied in high-definition video, image compression, medical imaging, video surveillance, satellite image analysis, and other fields, and has been studied extensively and intensively over the past 30 years.
  • A depth map contains the three-dimensional depth information of the objects in a scene and plays an important role in constructing 3D visual scenes.
  • A good high-resolution depth map allows the pixels of the corresponding color image to be projected into the three-dimensional scene with a clear and complete result, providing strong support for efficient, high-quality stereo scene construction. Obtaining high-quality, high-resolution depth maps is therefore of great significance in stereo vision.
  • Among existing methods for acquiring depth maps, laser depth scanning can obtain high-quality, high-resolution depth maps, but it places high demands on equipment and technique, making it costly; moreover, most scanners capture only one point at a time, so acquisition is very slow and hard to run in real time.
  • Depth cameras, such as time-of-flight (TOF) cameras, capture the scene directly and obtain depth maps quickly and in real time, but they can only produce low-resolution depth maps; further processing is needed to obtain a high-resolution depth map.
  • Among existing processing methods, applying super-resolution directly to the depth map cannot guarantee the quality of the resulting high-resolution depth map in actual scene rendering, so it has little practical value.
  • The technical problem addressed by the present invention is to remedy the above deficiencies of the prior art by providing a super-resolution processing method for depth maps in which the depth information of the resulting high-resolution depth map is more accurate.
  • A super-resolution processing method for a depth map comprises the following steps. First, images of the same scene are captured at a first position and a second position, yielding a first original image (S1) and a second original image (S2), and a low-resolution depth map (d) of the first original image (S1) is acquired. Then: 1) the low-resolution depth map (d) is divided into a plurality of depth image blocks; 2) each depth image block obtained in step 1) is processed as follows: 21) super-resolution processing is performed on the current block with a plurality of super-resolution methods, yielding a plurality of initial high-resolution depth image blocks with the same resolution as the first original image (S1); 22) the high-resolution depth image blocks obtained in step 21) are traversed and each is combined with the image block of the first original image (S1) corresponding to the current block; using image synthesis, and according to the relative positional relationship between the first position and the second position, a plurality of image blocks corresponding to the second original image (S2) are synthesized, defined as synthetic image blocks; 23) the synthetic image blocks obtained in step 22) are traversed, the degree of matching between each synthetic image block and the corresponding block of the second original image (S2) is computed, the best-matching synthetic image block is determined, and the high-resolution depth image block corresponding to it is taken as the final high-resolution depth image block of the current block; 3) according to the position of each depth image block in the low-resolution depth map (d), the high-resolution depth image blocks of all blocks are assembled into one image, the super-resolution result of the low-resolution depth map (d).
  • The super-resolution processing method of the present invention applies several existing super-resolution methods to each block of the depth map, combines each resulting high-resolution depth block with the corresponding block of the first original image to generate a synthetic image block corresponding to the second original image, matches each synthetic block against the known block of the second original image, and selects the sought high-resolution depth image block through the best-matching synthetic block.
  • Because the high-resolution depth map is determined by the degree of matching between synthetic and actual images, the determined depth map agrees more closely with the actual situation and its depth information is more accurate, making the resulting high-resolution depth map more practical and useful.
  • FIG. 1 is a flowchart of the depth map super-resolution processing method according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of the projection and re-imaging principle in the depth map super-resolution processing method according to an embodiment of the present invention.
  • The idea of the invention is to use research on super-resolution and depth-image-based rendering (DIBR) to verify, in reverse, the quality of the high-resolution depth map recovered by super-resolution, through the result of matching synthetic image blocks against original image blocks.
  • In this embodiment, the image is first divided into blocks, and each depth image block is restored to the same resolution as the corresponding color image using several existing super-resolution techniques. The restored depth image block and the corresponding image block are then projected into three-dimensional space, a new synthetic image block is obtained through a virtual camera viewing the three-dimensional scene, and the synthetic block is matched against the captured original image; the best-matching synthetic block identifies the desired high-resolution depth image block.
  • Performing this processing on every block of the low-resolution depth map yields a high-resolution depth image block for each block; finally, the blocks are assembled to obtain the super-resolved high-resolution depth map.
  • In this embodiment, a super-resolution processing method for a depth map is provided, which performs super-resolution processing on the low-resolution depth map of the first original image S1.
  • To acquire the low-resolution depth map, a depth camera such as (but not limited to) a time-of-flight (TOF) camera may capture the scene at the first position, directly yielding the low-resolution depth map of the first original image S1.
  • The multiple super-resolution processing methods include bicubic interpolation, new edge-directed interpolation, K-nearest-neighbor embedding, and sparse representation; each has its own characteristics, and all are applicable here.
  • For example, r existing super-resolution methods are applied to the current depth image block, each producing a high-resolution image with the same resolution as the first original image S1, yielding r corresponding high-resolution depth image blocks.
  • The resulting high-resolution depth image blocks are defined as a set ID, with ΔD denoting any one of them.
  • Image synthesis comprises the following steps: a) using a depth-image-based rendering (DIBR) method, the corresponding image block of the first original image S1 is projected into three-dimensional space by a reference camera according to the depth information of a high-resolution depth image block; b) with the center of the reference camera corresponding to the first position, the center of a virtual camera is set according to the relative position of the second position with respect to the first position, and the virtual camera images the three-dimensional scene obtained in step a) onto a two-dimensional plane, yielding a synthetic image block.
  • During re-imaging, the relative position of the reference and virtual cameras is set according to the relative positional relationship between the second and first positions. The second position corresponds to the second original image, so the synthesized block is an image block corresponding to the second original image.
  • FIG. 2 is a schematic diagram of the projection and re-imaging principle during image synthesis. The center of the reference camera is at point O, and the virtual camera is at point O1; the relative position of O1 with respect to O corresponds to the relative position of the second position with respect to the first.
  • The reference camera projects a pixel p1 of the image block of the first original image S1 into three-dimensional space, giving the point Pw; the virtual camera then re-images the point Pw onto a two-dimensional plane, giving the pixel p2.
  • When projecting in step a), the projection into three-dimensional space follows the equation:
  (Xw, Yw, Zw)^T = K1^-1 d1 p1
  • Here the center of the reference camera is the origin of the world coordinate system, i.e., its coordinates are (0, 0, 0)^T, and the viewing direction of the reference camera is the z-axis. p1 denotes the position of pixel p1 in the corresponding image block of the first original image S1, in homogeneous form, i.e., with the third component set to 1; for example, if the position of p1 in S1 is (x1, y1), then p1 in the equation is (x1, y1, 1). d1 is the depth of the corresponding pixel p1 in the high-resolution depth image block, K1 is the intrinsic parameter matrix of the reference camera, and (Xw, Yw, Zw) are the coordinates of the point obtained by projecting p1 into three-dimensional space, i.e., the coordinates of Pw in FIG. 2.
  • The DIBR method admits several ways of projecting a two-dimensional image into a three-dimensional scene; the above equation is only one example, and other projection methods are also applicable in step a).
  • When re-imaging onto the two-dimensional plane in step b), the imaging follows the equation:
  d2 p2 = K2 R2 Pw - K2 R2 C2
  • Here, as before, the center of the reference camera is the origin of the world coordinate system, i.e., its coordinates are (0, 0, 0)^T, and the viewing direction of the reference camera is the z-axis. C2 is the coordinate of the center of the virtual camera, R2 is the rotation matrix of the virtual camera, K2 is the intrinsic parameter matrix of the virtual camera, and Pw is the coordinate of the three-dimensional point obtained in step a). p2 and d2 are the position and depth of the corresponding pixel in the synthetic image block obtained by imaging onto the two-dimensional plane: converting the result on the right-hand side into the homogeneous form m(x, y, 1), p2 is (x, y) and d2 is the coefficient m.
  • After the image synthesis of step P22), a new synthetic image block has been generated for each high-resolution depth image block, forming the set IS.
  • The set IS is then traversed, the degree of matching between each synthetic image block and the corresponding block of the second original image S2 is computed, the best-matching synthetic block is determined, and the high-resolution depth image block corresponding to it is taken as the final high-resolution depth image block of the current block.
  • The degree of matching may be computed with (but is not limited to) a minimum mean square error (MMSE) matching method.
  • In this way, the first original image S1 is projected into three-dimensional space by the DIBR method using the generated depth information of equal resolution; the new synthetic image blocks obtained from this three-dimensional scene are then matched against the captured second original image, and the matching result serves as prior knowledge for depth-map super-resolution, yielding a reasonable and useful high-resolution depth image.
  • The final step assembles the high-resolution depth image blocks produced in step P2) into one complete image, the high-resolution depth map of the low-resolution depth map d.
  • Preferably, the assembly is followed by smoothing of the complete high-resolution depth image, mainly to handle any overlapping image regions; smoothing may use (but is not limited to) a simple averaging method.
  • Through the above steps, the super-resolution processing method of this embodiment finally obtains the processing result: the high-resolution depth map.
  • The method rests on the premise that the matching result between the new synthetic image block and the original image block is positively correlated with how well the processed high-resolution depth map matches the true high-resolution depth map; this determines which of the high-resolution depth image blocks produced by the various super-resolution methods is the most accurate, i.e., closest to the actual situation.
  • Compared with existing methods that apply super-resolution directly to the low-resolution depth map, the processing method of this embodiment produces a high-resolution depth map closer to the actual situation, with more accurate depth information and greater practical value.
  • Because each block of the low-resolution depth map is processed separately, depth image blocks with different features can each receive the super-resolution method best suited to their characteristics, ensuring that the multiple processing results include one closest to the actual situation.
  • The method thus exploits the features and advantages of multiple super-resolution methods, integrating their strengths into depth-image super-resolution and recovering a high-resolution depth map of practical significance and value.
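The block-wise select-by-matching pipeline described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `sr_methods`, `synthesize`, and `block_grid` are hypothetical stand-ins for the set of super-resolution methods, the DIBR synthesis of step 22), and the block division of step 1).

```python
import numpy as np

def super_resolve_depth(depth_lr, img1, img2, sr_methods, synthesize, block_grid):
    """Sketch of the block-wise pipeline (names are illustrative).

    depth_lr   : low-resolution depth map of img1
    img1, img2 : the two original images of the same scene
    sr_methods : list of functions, each upscaling a depth block to img1's resolution
    synthesize : function (depth_block_hr, img1_block) -> synthetic img2 block (DIBR)
    block_grid : iterable of (lr_slices, hr_slices) pairs locating each block
    """
    depth_hr = np.zeros(img1.shape[:2], dtype=np.float64)
    for lr_sl, hr_sl in block_grid:
        candidates = [sr(depth_lr[lr_sl]) for sr in sr_methods]       # step 21)
        synths = [synthesize(c, img1[hr_sl]) for c in candidates]     # step 22)
        errors = [np.mean((s - img2[hr_sl]) ** 2) for s in synths]    # step 23), MMSE
        depth_hr[hr_sl] = candidates[int(np.argmin(errors))]          # best match wins
    return depth_hr
```

With a trivial `synthesize` (identity) and two candidate methods, the block whose synthetic result best matches `img2` is the one kept.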

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention discloses a super-resolution processing method for a depth map. First, a first original image (S1), a second original image (S2), and a low-resolution depth map (d) of the first original image (S1) are acquired. Then: 1) the low-resolution depth map (d) is divided into a plurality of depth image blocks; 2) each depth image block obtained in step 1) is processed as follows: 21) super-resolution processing is performed on the current block with a plurality of super-resolution methods, obtaining a plurality of high-resolution depth image blocks; 22) new synthetic image blocks are obtained by image synthesis; 23) matching is evaluated to determine the final high-resolution depth image block; 3) according to the position of each depth image block in the low-resolution depth map (d), the high-resolution depth image blocks of all blocks are assembled into one image. With the super-resolution processing method of the present invention, the depth information of the resulting high-resolution depth map is more accurate.

Description

A depth map super-resolution processing method [Technical Field]
The present invention relates to the field of computer image processing, and in particular to a super-resolution processing method for depth maps based on image matching.
[Background Art]
Super-resolution processing is one of the current research hotspots in computer vision and image/video processing. It processes natural images of low resolution and little detail to generate high-resolution images containing more detail, and is thus a technique for increasing the resolution of an original image. Super-resolution has been widely applied in high-definition video, image compression, medical imaging, video surveillance, satellite image analysis, and other fields, and has been studied extensively and intensively over the past 30 years. A depth map contains the three-dimensional depth information of the objects in a scene and plays an important role in constructing 3D visual scenes. A good high-resolution depth map allows the pixels of the corresponding color image to be projected into the three-dimensional scene with a clear and complete result, providing strong support for efficient, high-quality stereo scene construction. Obtaining high-quality, high-resolution depth maps is therefore of great significance in stereo vision.
Among existing methods for acquiring depth maps, laser depth scanning can obtain high-quality, high-resolution depth maps, but it places high demands on equipment and technique, making it costly; moreover, most scanners capture only one point at a time, so acquisition is very slow and hard to run in real time. Depth cameras, such as time-of-flight (TOF) cameras, capture the scene directly and obtain depth maps quickly and in real time, but they can only produce low-resolution depth maps; further processing is needed to obtain a high-resolution depth map. Among existing processing methods, applying super-resolution directly to the depth map cannot guarantee the quality of the resulting high-resolution depth map in actual scene rendering, so it has little practical value.
[Summary of the Invention]
The technical problem addressed by the present invention is to remedy the above deficiencies of the prior art by providing a super-resolution processing method for depth maps in which the depth information of the resulting high-resolution depth map is more accurate.
The technical problem of the present invention is solved by the following technical solution:
A super-resolution processing method for a depth map, comprising the following steps. First, images of the same scene are captured at a first position and a second position, yielding a first original image (S1) and a second original image (S2); a low-resolution depth map (d) of the first original image (S1) is acquired. Then the following processing is performed: 1) the low-resolution depth map (d) is divided into a plurality of depth image blocks; 2) each depth image block obtained in step 1) is processed as follows: 21) super-resolution processing is performed on the current block with a plurality of super-resolution methods, yielding a plurality of initial high-resolution depth image blocks with the same resolution as the first original image (S1); 22) the high-resolution depth image blocks obtained in step 21) are traversed and each is combined with the image block of the first original image (S1) corresponding to the current block; using image synthesis, and according to the relative positional relationship between the first position and the second position, a plurality of image blocks corresponding to the second original image (S2) are synthesized, defined as synthetic image blocks; 23) the synthetic image blocks obtained in step 22) are traversed, the degree of matching between each synthetic image block and the corresponding block of the second original image (S2) is computed, the best-matching synthetic image block is determined, and the high-resolution depth image block corresponding to it is taken as the final high-resolution depth image block of the current block; 3) according to the position of each depth image block in the low-resolution depth map (d), the high-resolution depth image blocks of all blocks are assembled into one image, yielding the super-resolution result of the low-resolution depth map (d).
Compared with the prior art, the beneficial effects of the present invention are:
The super-resolution processing method of the present invention applies several existing super-resolution methods to each block of the depth map, combines each resulting high-resolution depth block with the corresponding block of the first original image to generate synthetic image blocks corresponding to the second original image, matches each synthetic block against the known second original image block, and obtains the sought high-resolution depth image block through the best-matching synthetic block. Because the high-resolution depth map is determined by the degree of matching between synthetic and actual images, the determined depth map agrees more closely with the actual situation; that is, its depth information is more accurate, so the resulting high-resolution depth map has greater practical significance and value.
[Brief Description of the Drawings]
FIG. 1 is a flowchart of the depth map super-resolution processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the projection and re-imaging principle in the depth map super-resolution processing method according to an embodiment of the present invention.
[Detailed Description]
The present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
The idea of the invention is to use research on super-resolution and depth-image-based rendering (DIBR) to verify, in reverse, the quality of the high-resolution depth map recovered by super-resolution, through the result of matching synthetic image blocks against original image blocks. In this embodiment, the image is first divided into blocks, and each depth image block is restored to the same resolution as the corresponding color image using several existing super-resolution techniques. The restored depth image block and the corresponding image block are then projected into three-dimensional space, a new synthetic image block is obtained through a virtual camera viewing the three-dimensional scene, and the synthetic block is matched against the captured original image; the best-matching synthetic block identifies the desired high-resolution depth image block. Performing this processing on every block of the low-resolution depth map yields a high-resolution depth image block for each block; finally, the blocks are assembled to obtain the super-resolved high-resolution depth map.
In this embodiment, a super-resolution processing method for a depth map is provided, which performs super-resolution processing on the low-resolution depth map of the first original image S1. Images of the same scene are captured at two different positions, a first position and a second position, yielding the first original image S1 and the second original image S2, and the low-resolution depth map d of the first original image S1 is acquired. To acquire the low-resolution depth map d, a depth camera such as (but not limited to) a time-of-flight (TOF) camera may capture the scene at the first position, directly yielding the low-resolution depth map d of the first original image S1. With these inputs obtained, processing proceeds with the steps shown in FIG. 1:
P1) Divide the low-resolution depth map d into a plurality of depth image blocks. Different regions of a depth map have different features (e.g., gradients), so the best-suited super-resolution method differs by region; the depth map is therefore processed block by block, and the most suitable super-resolution method is sought for each block. Block division can be implemented in many ways, all applicable to this embodiment, and is not detailed here.
P2) Process each depth image block as follows:
P21) Apply multiple super-resolution methods to the current block, obtaining a plurality of initial high-resolution depth image blocks with the same resolution as the first original image S1.
In this step, the existing super-resolution methods include bicubic interpolation, new edge-directed interpolation, K-nearest-neighbor embedding, and sparse representation, among others; each has its own characteristics, and all are applicable here. For example, r existing super-resolution methods are applied to the current depth image block, each producing a high-resolution image with the same resolution as the first original image S1, yielding r corresponding high-resolution depth image blocks. The resulting blocks are defined as a set ID, with ΔD denoting any one of them.
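Bicubic interpolation, the first candidate method named above, can be illustrated with SciPy's spline-based zoom. This is a stand-in sketch, not the patent's implementation: an order-3 spline approximates bicubic resampling, and the function name and `factor` argument are illustrative.

```python
import numpy as np
from scipy.ndimage import zoom

def bicubic_upscale(depth_block, factor):
    # One candidate super-resolution method: upscale a low-resolution
    # depth block by `factor` in each dimension using an order-3 spline
    # (a stand-in for bicubic interpolation).
    return zoom(depth_block.astype(np.float64), factor, order=3)
```

Each candidate method would be applied to the same block, producing one member of the set ID.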
P22) Obtain new synthetic image blocks by image synthesis. Specifically: traverse the set ID, combining each high-resolution depth image block with the image block of the first original image S1 corresponding to the current block; using image synthesis, and according to the relative positional relationship between the first and second positions, synthesize a plurality of image blocks corresponding to the second original image S2, defined as synthetic image blocks and forming the set IS.
In this step, image synthesis first projects into three-dimensional space and then generates the new image block from the three-dimensional scene. It comprises: a) using the depth-image-based rendering (DIBR) method, projecting the corresponding image block of the first original image S1 into three-dimensional space with a reference camera, according to the depth information of a high-resolution depth image block; b) with the center of the reference camera corresponding to the first position, setting the center of a virtual camera according to the relative position of the second position with respect to the first, and imaging the three-dimensional scene obtained in step a) onto a two-dimensional plane with the virtual camera, yielding a synthetic image block. During re-imaging, the relative position of the reference and virtual cameras is set according to the relative positional relationship between the second and first positions; the second position corresponds to the second original image, so the synthesized block corresponds to the second original image.
FIG. 2 is a schematic diagram of the projection and re-imaging principle during image synthesis. The center of the reference camera is at point O, and the virtual camera is at point O1; the relative position of O1 with respect to O corresponds to that of the second position with respect to the first. As shown by arrow A, depicting projection into three-dimensional space, the reference camera projects a pixel p1 of the image block of the first original image S1 into three-dimensional space, giving the point Pw. As shown by arrow B, depicting re-imaging onto the two-dimensional plane, the virtual camera restores the point Pw to the two-dimensional plane, giving the pixel p2.
In this embodiment, the projection in step a) follows the equation:
(Xw, Yw, Zw)^T = K1^-1 d1 p1
where the center of the reference camera is the origin of the world coordinate system, i.e., its coordinates are (0, 0, 0)^T, and the viewing direction of the reference camera is the z-axis. p1 denotes the position of pixel p1 in the corresponding image block of the first original image S1, in homogeneous form, i.e., with the third component set to 1; for example, if the position of p1 in S1 is (x1, y1), then p1 in the equation is (x1, y1, 1). d1 is the depth of the corresponding pixel p1 in the high-resolution depth image block, K1 is the intrinsic parameter matrix of the reference camera, and (Xw, Yw, Zw) are the coordinates of the projected point in three-dimensional space, i.e., the coordinates of Pw in FIG. 2. Of course, the DIBR method admits several ways of projecting a two-dimensional image into a three-dimensional scene; the above equation is only one example, and other projection methods are also applicable in step a).
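The projection equation of step a) can be sketched directly. This is a minimal numeric illustration; the function name is an assumption, and K1 values in practice come from camera calibration.

```python
import numpy as np

def project_to_3d(p1_xy, d1, K1):
    """Project a pixel into 3D via (Xw, Yw, Zw)^T = K1^-1 d1 p1.

    The reference camera sits at the world origin looking along +z,
    as in the text; K1 is its 3x3 intrinsic parameter matrix.
    """
    p1 = np.array([p1_xy[0], p1_xy[1], 1.0])  # homogeneous pixel coordinate
    return np.linalg.inv(K1) @ (d1 * p1)      # world point Pw = (Xw, Yw, Zw)
```

With K1 the identity, a pixel at (x1, y1) with depth d1 maps to (d1·x1, d1·y1, d1), so Zw equals the depth, as expected.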
Specifically, the re-imaging onto the two-dimensional plane in step b) follows the equation:
d2 p2 = K2 R2 Pw - K2 R2 C2
where, as above, the center of the reference camera is the origin of the world coordinate system with coordinates (0, 0, 0)^T and viewing direction along the z-axis. C2 is the coordinate of the center of the virtual camera, R2 the rotation matrix of the virtual camera, K2 the intrinsic parameter matrix of the virtual camera, and Pw the coordinate of the three-dimensional point obtained in step a); p2 and d2 are the position and depth of the corresponding pixel in the resulting synthetic image block. Converting the result on the right-hand side into the homogeneous form m(x, y, 1), p2 is (x, y) and d2 is the coefficient m. In these equations, the relative position of the second position (where the second original image S2 is captured) with respect to the first determines the values of C2 and R2, so the re-imaged block corresponds to the second original image S2. Likewise, there are several ways to restore a two-dimensional image from a three-dimensional scene; the above equation is only one example, and other methods are also applicable in step b).
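The re-imaging equation of step b) can likewise be sketched, including the conversion of the right-hand side m(x, y, 1) into p2 and d2 described in the text. The function name is an assumption; K2, R2, and C2 would come from the virtual camera setup.

```python
import numpy as np

def image_to_virtual_camera(Pw, K2, R2, C2):
    """Re-image a world point via d2 p2 = K2 R2 Pw - K2 R2 C2.

    Returns the pixel position p2 = (x, y) and depth d2, obtained by
    reading the right-hand side as the homogeneous form m * (x, y, 1).
    """
    v = K2 @ R2 @ (Pw - C2)   # = m * (x, y, 1)
    m = v[2]                  # d2 is the homogeneous scale factor m
    return v[:2] / m, m       # p2 = (x, y), d2 = m
```

With K2 = R2 = identity and C2 at the origin, a world point (X, Y, Z) images to p2 = (X/Z, Y/Z) with d2 = Z.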
After the image synthesis of step P22), a new synthetic image block has been generated for each high-resolution depth image block, forming the set IS.
P23) Matching: determine the final high-resolution depth image block. Specifically: traverse the set IS, compute the degree of matching between each synthetic image block and the corresponding block of the second original image S2, determine the best-matching synthetic image block, and take the high-resolution depth image block corresponding to it as the final high-resolution depth image block of the current block.
In this step, the matching result between the new synthetic blocks and the original block decides which of the high-resolution blocks obtained in step P21) is the super-resolution result closest to the actual situation: if synthetic block ΔS matches the original block best, the high-resolution block corresponding to ΔS is taken as the result closest to the actual situation. The degree of matching may be computed with (but is not limited to) a minimum mean square error (MMSE) matching method.
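The minimum-mean-square-error matching named here can be sketched as follows; the function name is illustrative, and any other matching criterion could be substituted as the text allows.

```python
import numpy as np

def best_match_index(synth_blocks, reference_block):
    # Minimum mean square error matching: pick the synthetic block
    # closest to the corresponding block of the second original image S2.
    mse = [np.mean((b.astype(np.float64) - reference_block) ** 2)
           for b in synth_blocks]
    return int(np.argmin(mse))
```

The returned index selects, within the set IS, the synthetic block ΔS whose corresponding high-resolution depth block becomes the final result for the current block.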
Through steps P22) and P23), the first original image S1 is projected into three-dimensional space by the DIBR method using the generated depth information of equal resolution; the new synthetic image blocks obtained from this three-dimensional scene are matched against the captured second original image, and the matching result serves as prior knowledge for depth-map super-resolution, yielding a reasonable and useful high-resolution depth image.
P3) According to the position of each depth image block in the low-resolution depth map d, assemble the high-resolution depth image blocks of all blocks into one image, obtaining the super-resolution result of the low-resolution depth map d.
This step assembles the high-resolution depth image blocks produced in step P2) into one complete image, the high-resolution depth map of d. Preferably, assembly is followed by smoothing of the complete high-resolution depth image, mainly to handle any overlapping image regions; smoothing may use (but is not limited to) a simple averaging method.
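The assembly with mean smoothing of overlap regions can be sketched as follows. The function name and the (row, col) block-placement convention are assumptions; the averaging of overlaps is the simple mean method mentioned in the text.

```python
import numpy as np

def assemble_with_averaging(blocks, positions, out_shape):
    """Stitch high-resolution depth blocks into one map, averaging
    any overlapping regions (simple mean smoothing).

    blocks    : list of 2-D arrays
    positions : list of (row, col) top-left corners, one per block
    """
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for blk, (r, c) in zip(blocks, positions):
        h, w = blk.shape
        acc[r:r + h, c:c + w] += blk
        cnt[r:r + h, c:c + w] += 1
    cnt[cnt == 0] = 1   # avoid division by zero in uncovered areas
    return acc / cnt
```

Pixels covered by one block keep their value; pixels in an overlap take the mean of the contributing blocks.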
Through the above steps, the super-resolution processing method of this embodiment finally obtains the processing result: the high-resolution depth map. The method rests on the premise that the matching result between the new synthetic image block and the original image block is positively correlated with how well the processed high-resolution depth map matches the true high-resolution depth map, which determines which of the blocks produced by the various super-resolution methods is the most accurate, i.e., closest to the actual situation. Compared with existing methods that apply super-resolution directly to the low-resolution depth map, this method produces a high-resolution depth map closer to the actual situation, with more accurate depth information and greater practical significance and value. Moreover, by processing each block of the low-resolution depth map separately with the strengths and characteristics of different super-resolution methods, it ensures that depth image blocks with different features each receive the method best suited to their own characteristics, and that the multiple processing results include one closest to the actual situation. The method thus fully exploits the features and advantages of multiple super-resolution methods, integrating their strengths into depth-image super-resolution and recovering a high-resolution depth map of practical significance and value.
The above is a further detailed description of the present invention in conjunction with specific preferred embodiments; the specific implementation of the invention is not limited to these descriptions. Those of ordinary skill in the art may make several substitutions or obvious variations without departing from the concept of the invention, with the same performance or use, and all such variations shall be deemed to fall within the protection scope of the present invention.

Claims (6)

  1. A super-resolution processing method for a depth map, characterized by comprising the following steps: first, capturing images of the same scene at a first position and a second position to obtain a first original image (S1) and a second original image (S2) respectively, and acquiring a low-resolution depth map (d) of the first original image (S1); then performing the following processing:
    1) dividing the low-resolution depth map (d) into a plurality of depth image blocks;
    2) processing each depth image block obtained in step 1) as follows:
    21) performing super-resolution processing on the current block with a plurality of super-resolution methods, obtaining a plurality of initial high-resolution depth image blocks with the same resolution as the first original image (S1);
    22) traversing the high-resolution depth image blocks obtained in step 21), combining each with the image block of the first original image (S1) corresponding to the current block, and, using image synthesis and the relative positional relationship between the first position and the second position, synthesizing a plurality of image blocks corresponding to the second original image (S2), defined as synthetic image blocks;
    23) traversing the synthetic image blocks obtained in step 22), computing the degree of matching between each synthetic image block and the corresponding block of the second original image (S2), determining the best-matching synthetic image block, and taking the high-resolution depth image block corresponding to the best-matching synthetic image block as the final high-resolution depth image block of the current block;
    3) according to the position of each depth image block in the low-resolution depth map (d), assembling the high-resolution depth image blocks of all blocks into one image, obtaining the super-resolution result of the low-resolution depth map (d).
  2. The super-resolution processing method of claim 1, characterized in that the image synthesis in step 22) comprises: a) using a depth-image-based rendering method, projecting the corresponding image block of the first original image (S1) into three-dimensional space with a reference camera according to the depth information of a high-resolution depth image block; b) with the center of the reference camera corresponding to the first position, setting the center of a virtual camera according to the relative position of the second position with respect to the first position, and imaging the three-dimensional scene obtained in step a) onto a two-dimensional plane with the virtual camera, obtaining a synthetic image block.
  3. The super-resolution processing method of claim 2, characterized in that the projection in step a) follows the equation:
    (Xw, Yw, Zw)^T = K1^-1 d1 p1
    where the center of the reference camera is the origin of the world coordinate system and the viewing direction of the reference camera is the z-axis; p1 denotes the position of pixel p1 in the corresponding image block of the first original image (S1), in homogeneous form; d1 is the depth of the corresponding pixel p1 in the high-resolution depth image block; K1 is the intrinsic parameter matrix of the reference camera; and (Xw, Yw, Zw) are the coordinates of the point obtained by projecting p1 into three-dimensional space.
  4. The super-resolution processing method of claim 2, characterized in that the imaging onto the two-dimensional plane in step b) follows the equation:
    d2 p2 = K2 R2 Pw - K2 R2 C2
    where the center of the reference camera is the origin of the world coordinate system and the viewing direction of the reference camera is the z-axis; C2 is the coordinate of the center of the virtual camera, R2 the rotation matrix of the virtual camera, K2 the intrinsic parameter matrix of the virtual camera, and Pw the coordinate of the three-dimensional point obtained in step a); p2 and d2 are the position and depth of the corresponding pixel in the resulting synthetic image block; converting the result on the right-hand side into the homogeneous form m(x, y, 1), p2 is (x, y) and d2 is the coefficient m.
  5. The super-resolution processing method of claim 1, characterized in that, after the assembly into one image in step 3), the method further comprises smoothing the image.
  6. The super-resolution processing method of claim 1, characterized in that the low-resolution depth map (d) is acquired by capturing the scene at the first position with a depth camera, thereby obtaining the low-resolution depth map (d).
PCT/CN2015/072180 2014-02-25 2015-02-03 A depth map super-resolution processing method WO2015127847A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/216,332 US10115182B2 (en) 2014-02-25 2016-07-21 Depth map super-resolution processing method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410065631.1A 2014-02-25 2014-02-25 A depth map super-resolution processing method
CN201410065631.1 2014-02-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/216,332 Continuation US10115182B2 (en) 2014-02-25 2016-07-21 Depth map super-resolution processing method

Publications (1)

Publication Number Publication Date
WO2015127847A1 (zh)

Family

ID=50707406

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/072180 WO2015127847A1 (zh) 2014-02-25 2015-02-03 A depth map super-resolution processing method

Country Status (3)

Country Link
US (1) US10115182B2 (zh)
CN (1) CN103810685B (zh)
WO (1) WO2015127847A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599400A (zh) * 2019-08-19 2019-12-20 西安理工大学 An EPI-based light field image super-resolution method
WO2022007895A1 (zh) * 2020-07-10 2022-01-13 华为技术有限公司 Super-resolution implementation method and apparatus for image frames
CN113986168A (zh) * 2021-10-14 2022-01-28 深圳Tcl新技术有限公司 An image display method, apparatus, device, and readable storage medium

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810685B (zh) 2014-02-25 2016-05-25 清华大学深圳研究生院 A depth map super-resolution processing method
KR101882931B1 (ko) * 2014-07-10 2018-07-30 삼성전자주식회사 Multi-view image display apparatus and disparity measurement method thereof
DE102014011821A1 (de) * 2014-08-08 2016-02-11 Cargometer Gmbh Device and method for determining the volume of an object moved by an industrial truck
CN104935909B (zh) * 2015-05-14 2017-02-22 清华大学深圳研究生院 A multi-image super-resolution method based on depth information
CN104899830B (zh) * 2015-05-29 2017-09-29 清华大学深圳研究生院 An image super-resolution method
CN104867106B (zh) * 2015-05-29 2017-09-15 清华大学深圳研究生院 A depth map super-resolution method
CN105427253B (zh) * 2015-11-06 2019-03-29 北京航空航天大学 Multi-view RGB-D image super-resolution method based on non-local regression and total variation
JP2017099616A (ja) * 2015-12-01 2017-06-08 ソニー株式会社 Surgical control device, surgical control method, program, and surgical system
CN105869115B (zh) * 2016-03-25 2019-02-22 浙江大学 A depth image super-resolution method based on Kinect 2.0
CN106548449A (zh) * 2016-09-18 2017-03-29 北京市商汤科技开发有限公司 Method, apparatus, and *** for generating a super-resolution depth map
US10451714B2 (en) 2016-12-06 2019-10-22 Sony Corporation Optical micromesh for computerized devices
US10536684B2 (en) 2016-12-07 2020-01-14 Sony Corporation Color noise reduction in 3D depth map
US10178370B2 (en) 2016-12-19 2019-01-08 Sony Corporation Using multiple cameras to stitch a consolidated 3D depth map
US10181089B2 (en) 2016-12-19 2019-01-15 Sony Corporation Using pattern recognition to reduce noise in a 3D map
US10495735B2 (en) 2017-02-14 2019-12-03 Sony Corporation Using micro mirrors to improve the field of view of a 3D depth map
US10795022B2 (en) 2017-03-02 2020-10-06 Sony Corporation 3D depth map
US10979687B2 (en) * 2017-04-03 2021-04-13 Sony Corporation Using super imposition to render a 3D depth map
US11238559B2 (en) * 2017-04-21 2022-02-01 Semiconductor Energy Laboratory Co., Ltd. Image processing method and image receiving apparatus
CN107392852B (zh) * 2017-07-10 2020-07-07 深圳大学 Super-resolution reconstruction method, apparatus, device, and storage medium for depth images
WO2019061064A1 (zh) * 2017-09-27 2019-04-04 深圳市大疆创新科技有限公司 Image processing method and device
US10484667B2 (en) 2017-10-31 2019-11-19 Sony Corporation Generating 3D depth map using parallax
CN110349083A (zh) * 2018-04-08 2019-10-18 清华大学 An image super-resolution method and apparatus based on depth camera rotation
US10549186B2 (en) 2018-06-26 2020-02-04 Sony Interactive Entertainment Inc. Multipoint SLAM capture
CN109191554B (zh) * 2018-09-04 2021-01-01 清华-伯克利深圳学院筹备办公室 A super-resolution image reconstruction method, apparatus, terminal, and storage medium
US11055866B2 (en) 2018-10-29 2021-07-06 Samsung Electronics Co., Ltd System and method for disparity estimation using cameras with different fields of view
CN110428366B (zh) * 2019-07-26 2023-10-13 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111242087B (zh) * 2020-01-21 2024-06-07 华为技术有限公司 Object recognition method and apparatus
US11503266B2 (en) 2020-03-06 2022-11-15 Samsung Electronics Co., Ltd. Super-resolution depth map generation for multi-camera or other environments
CN111724303A (zh) * 2020-05-12 2020-09-29 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) A super-resolution image processing method and *** adaptive to image type
US11869167B2 (en) 2020-07-14 2024-01-09 Htc Corporation Method for transmitting reduced depth information and electronic device
CN112188183B (zh) * 2020-09-30 2023-01-17 绍兴埃瓦科技有限公司 Binocular stereo matching method
CN112907443B (zh) * 2021-02-05 2023-06-16 深圳市优象计算技术有限公司 A video super-resolution reconstruction method and *** for satellite cameras
CN112804512B (zh) * 2021-04-13 2021-06-29 深圳阜时科技有限公司 3D depth imaging method, master control device, and 3D imaging apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101344646A (zh) * 2008-06-12 2009-01-14 上海海事大学 A high-resolution image imaging method
CN102831581A (zh) * 2012-07-27 2012-12-19 中山大学 A super-resolution image reconstruction method
CN103279933A (zh) * 2013-06-07 2013-09-04 重庆大学 A single-image super-resolution reconstruction method based on a two-layer model
CN103810685A (zh) * 2014-02-25 2014-05-21 清华大学深圳研究生院 A depth map super-resolution processing method

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8665258B2 (en) * 2008-09-16 2014-03-04 Adobe Systems Incorporated Generating a depth map based on a single image
US20120056982A1 (en) * 2010-09-08 2012-03-08 Microsoft Corporation Depth camera based on structured light and stereo vision
DE102011080180B4 (de) * 2011-08-01 2013-05-02 Sirona Dental Systems Gmbh Method for registering multiple three-dimensional recordings of a dental object
US8660362B2 (en) * 2011-11-21 2014-02-25 Microsoft Corporation Combined depth filtering and super resolution
CN102722863B (zh) * 2012-04-16 2014-05-21 Tianjin University Method for super-resolution reconstruction of depth maps using an autoregressive model
CN102663712B (zh) * 2012-04-16 2014-09-17 Tianjin University Depth computational imaging method based on a time-of-flight (TOF) camera
US8923622B2 (en) * 2012-12-10 2014-12-30 Symbol Technologies, Inc. Orientation compensation using a mobile device camera and a reference marker
US9052746B2 (en) * 2013-02-15 2015-06-09 Microsoft Technology Licensing, Llc User center-of-mass and mass distribution extraction using depth images
CN103218776B (zh) * 2013-03-07 2016-06-22 Tianjin University Non-local depth map super-resolution reconstruction method based on minimum spanning trees
US9417058B1 (en) * 2014-10-24 2016-08-16 Matrox Electronic Systems Ltd. Detecting a position of a sheet of light in an image

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599400A (zh) * 2019-08-19 2019-12-20 Xi'an University of Technology EPI-based light field image super-resolution method
CN110599400B (zh) * 2019-08-19 2022-10-04 Xi'an University of Technology EPI-based light field image super-resolution method
WO2022007895A1 (zh) * 2020-07-10 2022-01-13 Huawei Technologies Co., Ltd. Method and apparatus for implementing super-resolution of image frames
CN113986168A (zh) * 2021-10-14 2022-01-28 Shenzhen TCL New Technology Co., Ltd. Image display method, apparatus, device, and readable storage medium

Also Published As

Publication number Publication date
CN103810685B (zh) 2016-05-25
US20160328828A1 (en) 2016-11-10
US10115182B2 (en) 2018-10-30
CN103810685A (zh) 2014-05-21

Similar Documents

Publication Publication Date Title
WO2015127847A1 (zh) Super-resolution processing method for depth maps
US11012620B2 (en) Panoramic image generation method and device
JP5739584B2 (ja) Three-dimensional image synthesis apparatus and method for visualizing vehicle surroundings
US10469828B2 (en) Three-dimensional dense structure from motion with stereo vision
WO2019219012A1 (zh) Three-dimensional reconstruction method and apparatus combining rigid motion and non-rigid deformation
JP4828506B2 (ja) Virtual viewpoint image generation apparatus, program, and recording medium
JP2002524937A (ja) Method and apparatus for synthesizing a high-resolution image using a high-resolution camera and a low-resolution camera
JP2003526829A (ja) Image processing method and apparatus
CN103345736A (zh) Virtual viewpoint rendering method
CN105262949A (zh) Multi-functional real-time panoramic video stitching method
CN110211169B (zh) Narrow-baseline disparity reconstruction method based on multi-scale superpixels and phase correlation
TW201426638A (zh) Three-dimensional sensing method and three-dimensional sensing device
WO2018032841A1 (zh) Method, device, and system for rendering three-dimensional images
KR20120072146A (ko) Apparatus and method for generating stereoscopic images using a panoramic image
CN110738731A (zh) 3D reconstruction method and system for binocular vision
US20230394834A1 (en) Method, system and computer readable media for object detection coverage estimation
JP2023505891A (ja) Method for measuring the topography of an environment
TW200828182A (en) Method of utilizing multi-view images to solve occlusion problem for photorealistic model reconstruction
KR20190044439A (ko) Depth map stitching method for stereo images
CN108090877A (zh) RGB-D camera depth image inpainting method based on image sequences
WO2020184174A1 (ja) Image processing apparatus and image processing method
Knorr et al. Stereoscopic 3D from 2D video with super-resolution capability
EP4148658A1 (en) Method and system for super-resolution reconstruction of heterogeneous stereoscopic images
CN107103620B (zh) Depth extraction method for multi-light coded cameras based on spatial sampling under independent camera viewpoints
CN104463958A (zh) Three-dimensional super-resolution method based on disparity map fusion

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 15755543

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 15755543

Country of ref document: EP

Kind code of ref document: A1