CN106791773A - A novel view synthesis method based on depth images - Google Patents

A novel view synthesis method based on depth images

Info

Publication number
CN106791773A
CN106791773A
Authority
CN
China
Prior art keywords
pixel
image
new viewpoint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611251733.8A
Other languages
Chinese (zh)
Other versions
CN106791773B (en)
Inventor
冯远静
黄良鹏
李佳镜
陈丰
徐泽楠
叶家盛
陈稳舟
李定邦
汪泽南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201611251733.8A priority Critical patent/CN106791773B/en
Publication of CN106791773A publication Critical patent/CN106791773A/en
Application granted granted Critical
Publication of CN106791773B publication Critical patent/CN106791773B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Image Generation (AREA)

Abstract

A novel view synthesis method based on depth images: the texture maps and depth maps at the left and right reference viewpoints are subjected to a three-dimensional warp; the edges of objects in the left and right reference-viewpoint depth maps are searched, the edge pixels are warped to the new viewpoint, and the corresponding depth pixels at the new viewpoint are then erased; the resulting depth map is median-filtered, the filtered image is compared with the depth map obtained from the three-dimensional warp, and the changed pixels are marked; the marked pixels are back-projected onto the original reference viewpoint, and the pixel values of the original reference texture map are assigned to the pixels in the new-viewpoint image having the same coordinates as the marked pixels; the occluded regions of the resulting new-viewpoint image are then interpolated; finally, the remaining holes are repaired to obtain the final new-viewpoint image. The invention effectively eliminates holes and ghosting in the new-viewpoint image, the experimental results are good, and the generated new-viewpoint images suit human viewing.

Description

A novel view synthesis method based on depth images
Technical field
The present invention relates to the fields of image processing, numerical analysis, three-dimensional reconstruction and computer science, and in particular to a depth-map-based rendering method for synthesizing virtual images at new viewpoints from a binocular camera.
Background art
Novel view synthesis for a binocular camera is a technique that reconstructs the image at a new viewpoint from existing viewpoint images and the calibrated intrinsic and extrinsic camera parameters. Its main approach is to combine the calibration parameters with the depth information of the existing viewpoint images, project and re-project the corresponding texture maps, and so construct the image at the new viewpoint. Building the new viewpoint raises a number of problems: the synthesized image can exhibit cracks, holes, ghosting, occluded regions, incomplete objects and similar phenomena. These problems degrade the quality of the new-viewpoint image and seriously affect user interaction.
Summary of the invention
To overcome the various quality-degrading problems that arise in existing novel view synthesis, whose experimental results are poor and fail to meet human viewing requirements, the present invention proposes a depth-image-based novel view synthesis method that eliminates these artifacts, achieves good experimental results and effectively meets human viewing requirements.
The technical solution adopted in the present invention is as follows:
A novel view synthesis method based on depth images, the view synthesis method comprising the following steps:

(1) Apply a three-dimensional warp to the texture maps and depth maps at the left and right reference viewpoints. The warping process is as follows:

1.1) Project each pixel of the image storage coordinate system, combined with its depth information, into the world coordinate system:

$$P_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T = (K_i R_i)^{-1}(\lambda_i P_i + K_i R_i C_i), \quad i = l, r$$

where $P_{Wi} = \{P_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T \mid i = l, r\}$ denotes the three-dimensional coordinate, in the world coordinate system, of the pixel at the current position of the reference-view image, with $l$ and $r$ denoting the left and right views; $X_{Wi}$ and $Y_{Wi}$ are the horizontal and vertical coordinates in the world coordinate system; $Z_{Wi}$ is the depth, obtained from $d_i$, the gray value of the depth map at the current position, together with $MinZ_i$ and $MaxZ_i$, the minimum and maximum gray values in the depth image corresponding to the current frame; $K_i$ denotes the camera intrinsic matrix, in which $f_{xi}, f_{yi} \in \mathbb{R}$ are the focal lengths of the left and right cameras in the $x$ and $y$ directions and $(u_{0i}, v_{0i})$ are the coordinates of the principal points of the left and right cameras in the image storage coordinate system; $R_i \in \mathbb{R}^{3 \times 3}$ denotes the camera rotation matrix, $\mathbb{R}^{3 \times 3}$ being the space of real 3 × 3 matrices; $\lambda_i = F_{i31} X_{Wi} + F_{i32} Y_{Wi} + F_{i33} Z_{Wi} + F_{i34}$ is the homogeneous scale factor of the camera, $F_i$ being the camera projection matrix; $T_i \in \mathbb{R}^{3 \times 1}$ is the translation vector of the camera; $P_i = (u_i, v_i, 1)^T$ is the homogeneous image storage coordinate of the reference view, with $u_i, v_i \in \mathbb{R}$ the horizontal and vertical coordinates of the current pixel in the image storage coordinate system; and $C_i$ is the center coordinate of the camera;
Apply the same treatment to the corresponding depth map:

$$D_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T = (K_i R_i)^{-1}(\lambda_i D_i + K_i R_i C_i), \quad i = l, r$$

where $D_{Wi} = \{D_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T \mid i = l, r\}$ denotes the three-dimensional coordinate, in the world coordinate system, of the pixel at the current position of the reference-view depth map, and $D_i = (u_{0i}, v_{0i}, 1)^T$ is the homogeneous image storage coordinate of the reference-view depth map;
1.2) Project each point obtained from the first projection, combined with the intrinsic and extrinsic parameters of the virtual camera at the target new viewpoint and the depth information, into the image-plane coordinate system of the new viewpoint:

$$P_{Newi} = (K_{Newi} R_{Newi} P_{Wi} - K_{Newi} R_{Newi} C_{Newi}) / \lambda_{Newi}, \quad i = l, r$$

where $P_{Newi} = \{P_{Newi} = (u_{Newi}, v_{Newi}, 1) \mid i = l, r\}$ is the coordinate of the point corresponding to $P_i$ at the new viewpoint, with $u_{Newi}, v_{Newi} \in \mathbb{R}$ the horizontal and vertical coordinates of the current pixel in the image storage coordinate system, and $K_{Newi}$, $R_{Newi}$, $C_{Newi}$, $\lambda_{Newi}$ denote the intrinsic matrix, rotation matrix, center coordinate and homogeneous scale factor of the virtual camera at the new viewpoint;

The corresponding new-viewpoint depth map is expressed as:

$$D_{Newi} = (K_{Newi} R_{Newi} D_{Wi} - K_{Newi} R_{Newi} C_{Newi}) / \lambda_{Newi}, \quad i = l, r$$

where $D_{Newi} = \{D_{Newi} = (u_{Newi}, v_{Newi}, 1) \mid i = l, r\}$ is the coordinate of the point corresponding to $D_i$ at the new viewpoint;
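For concreteness, a minimal Python/NumPy sketch of the two projections of steps 1.1) and 1.2) follows. The gray-value-to-depth conversion is assumed to follow the common convention built from $MinZ_i$ and $MaxZ_i$, since the text does not reproduce the exact formula; all function and variable names are illustrative, not taken from the patent:

```python
import numpy as np

def gray_to_depth(d, min_z, max_z):
    # Assumed conversion from an 8-bit depth-map gray value d to metric depth,
    # using the minimum and maximum depths of the current frame.
    return 1.0 / ((d / 255.0) * (1.0 / min_z - 1.0 / max_z) + 1.0 / max_z)

def warp_pixel(u, v, d, K, R, C, K_new, R_new, C_new, min_z, max_z):
    """Warp pixel (u, v) of a reference view into the virtual view (steps 1.1-1.2)."""
    z = gray_to_depth(d, min_z, max_z)
    # Step 1.1: back-project into the world coordinate system; the depth z fixes
    # the homogeneous scale lambda in P_W = (K R)^{-1} (lambda P + K R C).
    p_cam = z * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    p_world = R.T @ p_cam + C            # R is assumed to map world to camera axes
    # Step 1.2: project into the virtual camera at the new viewpoint.
    p_h = K_new @ (R_new @ (p_world - C_new))
    lam_new = p_h[2]                     # homogeneous scale factor at the new view
    u_new, v_new = p_h[0] / lam_new, p_h[1] / lam_new
    return u_new, v_new, lam_new         # lam_new also serves as the new-view depth
```

In practice every pixel of both the texture map and the depth map is pushed through this mapping, and the returned depth would typically resolve visibility when two source pixels land on the same target coordinate.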
(2) Search for the edges of objects in the left and right reference-viewpoint depth maps, warp the edge pixels to the new viewpoint using the method of step (1), and then erase the corresponding depth pixels at the new viewpoint:

$$D_{New\_edgei} = 0, \quad i = l, r$$

where $D_{New\_edgei}$ denotes the object-edge pixels after the three-dimensional warp;
(3) Apply a 3 × 3 median filter to the resulting depth map to remove cracks and small holes in the depth image, compare the filtered image with the depth map obtained from the three-dimensional warp, and mark the changed pixels:

$$I_{Newi} = S(M(D_{New\_Imgi}), D_{New\_Imgi}), \quad i = l, r$$

where $I_{Newi}$ is the set of marked pixels, $M$ is the 3 × 3 median-filter function, $D_{New\_Imgi}$ is the depth map at the new viewpoint obtained after steps (1) and (2), and $S$ is the comparison-and-marking function, which compares the two images and marks the changed pixels;
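A sketch of step (3) with OpenCV, assuming the warped depth map is a single-channel 8-bit image; `mark_changed_pixels` is an illustrative name, not a function from the patent:

```python
import cv2
import numpy as np

def mark_changed_pixels(depth_new):
    """Median-filter the warped depth map and mark the pixels that changed."""
    filtered = cv2.medianBlur(depth_new, 3)       # M: 3x3 median filter
    changed = np.argwhere(filtered != depth_new)  # S: coordinates of changed pixels
    return filtered, changed
```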
(4) Repair the cracks produced by the three-dimensional warp. The repair process is as follows:

4.1) Back-project the marked pixels onto the original reference viewpoint:

$$PI_{Newi} = W(I_{Newi}), \quad i = l, r$$

where $PI_{Newi}$ denotes the crack pixels of the new-viewpoint image obtained by back-projection, and $W$ denotes the three-dimensional warp described in step (1);

4.2) Then assign the pixel values of the original reference texture map to the pixels in the new-viewpoint image whose coordinates coincide with the marked pixels;
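The crack repair could then look as follows, where `warp_back` stands in for the inverse of the warp $W$ of step (1) and is assumed to return reference-image coordinates; the sketch simply copies reference texture values onto the marked new-view pixels:

```python
import numpy as np

def repair_cracks(changed, warp_back, tex_ref, img_new):
    """Fill warp-induced cracks: back-project each marked pixel to the
    reference view (4.1) and copy the reference texture value there (4.2)."""
    h, w = tex_ref.shape[:2]
    for v, u in changed:                      # coordinates from mark_changed_pixels
        u_ref, v_ref = warp_back(u, v)        # step 4.1: position in the reference view
        ui, vi = int(round(u_ref)), int(round(v_ref))
        if 0 <= ui < w and 0 <= vi < h:       # guard against out-of-bounds projections
            img_new[v, u] = tex_ref[vi, ui]   # step 4.2: assign the reference value
    return img_new
```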
(5) Using the new-viewpoint images obtained after step (4), interpolate the holes in the occluded regions of the image:

$$P_{In\_Img} = IN(P_{New\_Imgl}, P_{New\_Imgr})$$

where $P_{In\_Img}$ is the interpolated new-viewpoint texture map, $P_{New\_Imgl}$ denotes the new-viewpoint texture map obtained from the left reference image after all of steps (1)–(4), $P_{New\_Imgr}$ denotes the new-viewpoint texture map at the same viewpoint obtained from the right reference image after all of steps (1)–(4), and $IN$ is the interpolation function;
(6) Call the inpaint method proposed by Telea in the OpenCV library to repair the remaining holes and obtain the final new-viewpoint image:

$$P_{New\_Img} = \mathrm{inpaint}(P_{In\_Img})$$

where inpaint is the OpenCV library function and $P_{New\_Img}$ is the final new-viewpoint texture image.
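Step (6) is a few lines with OpenCV, where Telea's algorithm is exposed as `cv2.INPAINT_TELEA`. The hole mask below assumes that pixels left unfilled by the interpolation of step (5) are still zero, which is an assumption of this sketch:

```python
import cv2
import numpy as np

def fill_remaining_holes(img_interp):
    """Repair leftover holes in the interpolated new-view image with Telea's method."""
    gray = cv2.cvtColor(img_interp, cv2.COLOR_BGR2GRAY)
    mask = (gray == 0).astype(np.uint8)              # 1 marks a hole pixel
    return cv2.inpaint(img_interp, mask, 3, cv2.INPAINT_TELEA)
```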
Further, in step (2), the object-edge pixels are found as follows:

$$\forall (u,v) \in P_{Img}: \quad \sum_{i=-1}^{1} \sum_{j=-1}^{1} D(u+i, v+j) - 9\,D(u,v) > T_d$$

where $P_{Img}$ denotes the reference image, $D$ denotes the depth map of the reference viewpoint, and $T_d$ is a user-defined threshold, experimentally taken as 1/4 of the maximum depth value. When a pixel satisfies the above inequality, it is a ghost-edge point.
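The criterion amounts to thresholding the difference between a 3 × 3 neighbourhood sum and nine times the centre depth, which an unnormalized box filter computes in one call; a sketch, with illustrative names:

```python
import cv2
import numpy as np

def ghost_edge_mask(D, T_d):
    """Mark ghost-edge pixels: the 3x3 neighbourhood depth sum minus nine times
    the centre depth must exceed the threshold T_d (about 1/4 of the max depth)."""
    D = D.astype(np.float32)
    neighbourhood_sum = cv2.boxFilter(D, -1, (3, 3), normalize=False)
    return (neighbourhood_sum - 9.0 * D) > T_d
```

The pixels flagged by this mask are warped with $W$, and their depth values at the new viewpoint are set to zero as required by step (2).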
Further, the processing of the comparison-and-marking function S in step (3) comprises the following steps:

3.1) compare the gray value of each pixel in the depth maps before and after the median filtering;

3.2) record the coordinates of the pixels whose gray values differ.
Further, in step 4.2), the cracks are mended as follows:

Using the marked-point coordinates obtained in 4.1), take the pixel value at each marked position in the original reference texture map and assign it to the corresponding marked pixel in the new-viewpoint image; the pixels at the remaining positions are kept unchanged:

$$P_{New\_Imgi} = \begin{cases} P_{Imgi}(PI_{Newi}), & mark = 1 \\ PO_{New\_Imgi}, & mark = 0 \end{cases} \quad (i = l, r)$$

where $P_{New\_Imgi}$ is the crack-repaired texture map at the new viewpoint, $P_{Imgi}$ is the reference texture map, $PO_{New\_Imgi}$ is the new-viewpoint texture map obtained after the processing of steps (1) and (2), and $mark$ indicates whether the current pixel is marked, $mark = 1$ meaning the current point is a marked point.
Further, in step (5), the interpolation function IN is expressed as follows:

$$P_{In\_Img}(u,v) = \begin{cases} (1-\alpha)\,P_{New\_Imgl}(u_1,v_1) + \alpha\,P_{New\_Imgr}(u_2,v_2), & O_l(u,v)=0,\; O_r(u,v)=0 \\ P_{New\_Imgl}(u_1,v_1), & O_l(u,v)=0,\; O_r(u,v)=1 \\ P_{New\_Imgr}(u_2,v_2), & O_l(u,v)=1,\; O_r(u,v)=0 \\ 0, & O_l(u,v)=1,\; O_r(u,v)=1 \end{cases}$$

where $\alpha$ is a proportion parameter determined by $T_{New\_t}$, $T_{l\_t}$ and $T_{r\_t}$, the translation vectors of the new viewpoint, the left reference viewpoint and the right reference viewpoint in the camera coordinate system, and $O_i = \{O_i(u,v) \mid i = l, r\}$ indicates whether the image has a hole at $(u,v)$, taking the value 1 if a hole is present:

$$O_l(u,v) = \begin{cases} 1, & Z_l(u,v) < T_h \\ 0, & Z_l(u,v) > T_h \end{cases} \qquad O_r(u,v) = \begin{cases} 1, & Z_r(u,v) < T_h \\ 0, & Z_r(u,v) > T_h \end{cases}$$

where $Z_i = \{Z_i(u,v) \mid i = l, r\}$ is the depth value of the new viewpoint at $(u,v)$ and $T_h$ is a threshold.
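A sketch of the interpolation function IN, assuming the left- and right-warped texture and depth maps have already been resampled onto the new-view pixel grid (so that $(u_1, v_1)$ and $(u_2, v_2)$ coincide with $(u, v)$); `alpha` is the proportion parameter, whose exact expression in terms of the translation vectors is not reproduced in the text:

```python
import numpy as np

def interpolate_views(img_l, img_r, Z_l, Z_r, alpha, T_h):
    """Blend the left- and right-warped views; holes are where depth < T_h."""
    O_l = Z_l < T_h                      # hole indicator of the left-warped view
    O_r = Z_r < T_h                      # hole indicator of the right-warped view
    out = np.zeros_like(img_l)
    both = ~O_l & ~O_r                   # both views valid: weighted blend
    out[both] = ((1.0 - alpha) * img_l[both] + alpha * img_r[both]).astype(out.dtype)
    out[~O_l & O_r] = img_l[~O_l & O_r]  # only the left view is valid
    out[O_l & ~O_r] = img_r[O_l & ~O_r]  # only the right view is valid
    return out                           # holes in both views remain 0 for step (6)
```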
The beneficial effects of the present invention are as follows: the three-dimensional warping method eliminates ghost contours; the back-projection method effectively eliminates cracks in the new-viewpoint depth map; the left–right viewpoint interpolation method fills the occluded regions; and the inpaint method of the OpenCV library effectively fills the remaining holes. The experimental results are good, and the generated new-viewpoint images suit human viewing.
Specific embodiment
The present invention is further described below. The specific embodiment carries out the view synthesis exactly as set out above: the three-dimensional warp of step (1), the edge search and depth-pixel erasure of step (2) using the ghost-edge criterion given above, the median filtering and comparison-and-marking function S of step (3), the crack mending of step (4), the left–right interpolation function IN of step (5), and the OpenCV inpaint repair of step (6), yielding the final new-viewpoint image.

Claims (5)

1. A novel view synthesis method based on depth images, characterized in that the view synthesis method comprises the following steps:
(1) Apply a three-dimensional warp to the texture maps and depth maps at the left and right reference viewpoints. The warping process is as follows:
1.1) Project each pixel of the image storage coordinate system, combined with its depth information, into the world coordinate system:
$$P_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T = (K_i R_i)^{-1}(\lambda_i P_i + K_i R_i C_i), \quad i = l, r$$
where $P_{Wi} = \{P_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T \mid i = l, r\}$ denotes the three-dimensional coordinate, in the world coordinate system, of the pixel at the current position of the reference-view image, with $l$ and $r$ denoting the left and right views; $X_{Wi}$ and $Y_{Wi}$ are the horizontal and vertical coordinates in the world coordinate system; $Z_{Wi}$ is the depth, obtained from $d_i$, the gray value of the depth map at the current position, together with $MinZ_i$ and $MaxZ_i$, the minimum and maximum gray values in the depth image corresponding to the current frame; $K_i$ denotes the camera intrinsic matrix, in which $f_{xi}, f_{yi} \in \mathbb{R}$ are the focal lengths of the left and right cameras in the $x$ and $y$ directions and $(u_{0i}, v_{0i})$ are the coordinates of the principal points of the left and right cameras in the image storage coordinate system; $R_i \in \mathbb{R}^{3 \times 3}$ denotes the camera rotation matrix, $\mathbb{R}^{3 \times 3}$ being the space of real 3 × 3 matrices; $\lambda_i = F_{i31} X_{Wi} + F_{i32} Y_{Wi} + F_{i33} Z_{Wi} + F_{i34}$ is the homogeneous scale factor of the camera, $F_i$ being the camera projection matrix; $T_i \in \mathbb{R}^{3 \times 1}$ is the translation vector of the camera; $P_i = (u_i, v_i, 1)^T$ is the homogeneous image storage coordinate of the reference view, with $u_i, v_i \in \mathbb{R}$ the horizontal and vertical coordinates of the current pixel in the image storage coordinate system; and $C_i$ is the center coordinate of the camera;
Apply the same treatment to the corresponding depth map:
$$D_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T = (K_i R_i)^{-1}(\lambda_i D_i + K_i R_i C_i), \quad i = l, r$$
where $D_{Wi} = \{D_{Wi} = (X_{Wi}, Y_{Wi}, Z_{Wi})^T \mid i = l, r\}$ denotes the three-dimensional coordinate, in the world coordinate system, of the pixel at the current position of the reference-view depth map, and $D_i = (u_{0i}, v_{0i}, 1)^T$ is the homogeneous image storage coordinate of the reference-view depth map;
1.2) Project each point obtained from the first projection, combined with the intrinsic and extrinsic parameters of the virtual camera at the target new viewpoint and the depth information, into the image-plane coordinate system of the new viewpoint:
$$P_{Newi} = (K_{Newi} R_{Newi} P_{Wi} - K_{Newi} R_{Newi} C_{Newi}) / \lambda_{Newi}, \quad i = l, r$$
where $P_{Newi} = \{P_{Newi} = (u_{Newi}, v_{Newi}, 1) \mid i = l, r\}$ is the coordinate of the point corresponding to $P_i$ at the new viewpoint, with $u_{Newi}, v_{Newi} \in \mathbb{R}$ the horizontal and vertical coordinates of the current pixel in the image storage coordinate system, and $K_{Newi}$, $R_{Newi}$, $C_{Newi}$, $\lambda_{Newi}$ denote the intrinsic matrix, rotation matrix, center coordinate and homogeneous scale factor of the virtual camera at the new viewpoint;
The corresponding new-viewpoint depth map is expressed as:
$$D_{Newi} = (K_{Newi} R_{Newi} D_{Wi} - K_{Newi} R_{Newi} C_{Newi}) / \lambda_{Newi}, \quad i = l, r$$
where $D_{Newi} = \{D_{Newi} = (u_{Newi}, v_{Newi}, 1) \mid i = l, r\}$ is the coordinate of the point corresponding to $D_i$ at the new viewpoint;
(2) Search for the edges of objects in the left and right reference-viewpoint depth maps, warp the edge pixels to the new viewpoint using the method of step (1), and then erase the corresponding depth pixels at the new viewpoint:
$$D_{New\_edgei} = 0, \quad i = l, r$$
where $D_{New\_edgei}$ denotes the object-edge pixels after the three-dimensional warp;
(3) Apply a 3 × 3 median filter to the resulting depth map to remove cracks and small holes in the depth image, compare the filtered image with the depth map obtained from the three-dimensional warp, and mark the changed pixels:
$$I_{Newi} = S(M(D_{New\_Imgi}), D_{New\_Imgi}), \quad i = l, r$$
where $I_{Newi}$ is the set of marked pixels, $M$ is the 3 × 3 median-filter function, $D_{New\_Imgi}$ is the depth map at the new viewpoint obtained after steps (1) and (2), and $S$ is the comparison-and-marking function, which compares the two images and marks the changed pixels;
(4) Repair the cracks produced by the three-dimensional warp. The repair process is as follows:
4.1) Back-project the marked pixels onto the original reference viewpoint:
$$PI_{Newi} = W(I_{Newi}), \quad i = l, r$$
where $PI_{Newi}$ denotes the crack pixels of the new-viewpoint image obtained by back-projection, and $W$ denotes the three-dimensional warp described in step (1);
4.2) Then assign the pixel values of the original reference texture map to the pixels in the new-viewpoint image whose coordinates coincide with the marked pixels;
(5) Using the new-viewpoint images obtained after step (4), interpolate the holes in the occluded regions of the image:
$$P_{In\_Img} = IN(P_{New\_Imgl}, P_{New\_Imgr})$$
where $P_{In\_Img}$ is the interpolated new-viewpoint texture map, $P_{New\_Imgl}$ denotes the new-viewpoint texture map obtained from the left reference image after all of steps (1)–(4), $P_{New\_Imgr}$ denotes the new-viewpoint texture map at the same viewpoint obtained from the right reference image after all of steps (1)–(4), and $IN$ is the interpolation function;
(6) Call the inpaint method proposed by Telea in the OpenCV library to repair the remaining holes and obtain the final new-viewpoint image:
$$P_{New\_Img} = \mathrm{inpaint}(P_{In\_Img})$$
where inpaint is the OpenCV library function and $P_{New\_Img}$ is the final new-viewpoint texture image.
2. The novel view synthesis method based on depth images as claimed in claim 1, characterized in that in step (2) the object-edge pixels are found as follows:
$$\forall (u,v) \in P_{Img}: \quad \sum_{i=-1}^{1} \sum_{j=-1}^{1} D(u+i, v+j) - 9\,D(u,v) > T_d$$
where $P_{Img}$ denotes the reference image, $D$ denotes the depth map of the reference viewpoint, and $T_d$ is a user-defined threshold; when a pixel satisfies the above inequality, it is a ghost-edge point.
3. The novel view synthesis method based on depth images as claimed in claim 1 or 2, characterized in that in step (3) the processing of the comparison-and-marking function S comprises the following steps:
3.1) comparing the gray value of each pixel in the depth maps before and after the median filtering;
3.2) recording the coordinates of the pixels whose gray values differ.
4. The novel view synthesis method based on depth images as claimed in claim 1 or 2, characterized in that in step 4.2) the cracks are mended as follows:
Using the marked-point coordinates obtained in 4.1), take the pixel value at each marked position in the original reference texture map and assign it to the corresponding marked pixel in the new-viewpoint image; the pixels at the remaining positions are kept unchanged:
$$P_{New\_Imgi} = \begin{cases} P_{Imgi}(PI_{Newi}), & mark = 1 \\ PO_{New\_Imgi}, & mark = 0 \end{cases} \quad (i = l, r)$$
where $P_{New\_Imgi}$ is the crack-repaired texture map at the new viewpoint, $P_{Imgi}$ is the reference texture map, $PO_{New\_Imgi}$ is the new-viewpoint texture map obtained after the processing of steps (1) and (2), and $mark$ indicates whether the current pixel is marked, $mark = 1$ meaning the current point is a marked point.
5. The novel view synthesis method based on depth images as claimed in claim 1 or 2, characterized in that in step (5) the interpolation function IN is expressed as follows:
$$P_{In\_Img}(u,v) = \begin{cases} (1-\alpha)\,P_{New\_Imgl}(u_1,v_1) + \alpha\,P_{New\_Imgr}(u_2,v_2), & O_l(u,v)=0,\; O_r(u,v)=0 \\ P_{New\_Imgl}(u_1,v_1), & O_l(u,v)=0,\; O_r(u,v)=1 \\ P_{New\_Imgr}(u_2,v_2), & O_l(u,v)=1,\; O_r(u,v)=0 \\ 0, & O_l(u,v)=1,\; O_r(u,v)=1 \end{cases}$$
where $\alpha$ is a proportion parameter determined by $T_{New\_t}$, $T_{l\_t}$ and $T_{r\_t}$, the translation vectors of the new viewpoint, the left reference viewpoint and the right reference viewpoint in the camera coordinate system, and $O_i = \{O_i(u,v) \mid i = l, r\}$ indicates whether the image has a hole at $(u,v)$, taking the value 1 if a hole is present:
$$O_l(u,v) = \begin{cases} 1, & Z_l(u,v) < T_h \\ 0, & Z_l(u,v) > T_h \end{cases} \qquad O_r(u,v) = \begin{cases} 1, & Z_r(u,v) < T_h \\ 0, & Z_r(u,v) > T_h \end{cases}$$
where $Z_i = \{Z_i(u,v) \mid i = l, r\}$ is the depth value of the new viewpoint at $(u,v)$ and $T_h$ is a threshold.
CN201611251733.8A 2016-12-30 2016-12-30 A novel view synthesis method based on depth images Active CN106791773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611251733.8A CN106791773B (en) 2016-12-30 2016-12-30 A novel view synthesis method based on depth images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611251733.8A CN106791773B (en) 2016-12-30 2016-12-30 A novel view synthesis method based on depth images

Publications (2)

Publication Number Publication Date
CN106791773A true CN106791773A (en) 2017-05-31
CN106791773B CN106791773B (en) 2018-06-01

Family

ID=58928104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611251733.8A Active CN106791773B (en) 2016-12-30 2016-12-30 A novel view synthesis method based on depth images

Country Status (1)

Country Link
CN (1) CN106791773B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107809630A (en) * 2017-10-24 2018-03-16 天津大学 Based on the multi-view point video super-resolution rebuilding algorithm for improving virtual view synthesis
CN112291549A (en) * 2020-09-23 2021-01-29 广西壮族自治区地图院 Method for acquiring stereoscopic sequence frame images of raster topographic map based on DEM
CN112308911A (en) * 2020-10-26 2021-02-02 中国科学院自动化研究所 End-to-end visual positioning method and system
CN112513929A (en) * 2019-11-29 2021-03-16 深圳市大疆创新科技有限公司 Image processing method and device
CN112686877A (en) * 2021-01-05 2021-04-20 同济大学 Binocular camera-based three-dimensional house damage model construction and measurement method and system
WO2021180204A1 (en) * 2020-03-12 2021-09-16 京东方科技集团股份有限公司 Image inpainting method and apparatus, and electronic device
CN116405650A (en) * 2023-03-10 2023-07-07 珠海莫界科技有限公司 Image correction method, image correction device, storage medium, and display apparatus
CN117061720A (en) * 2023-10-11 2023-11-14 广州市大湾区虚拟现实研究院 Stereo image pair generation method based on monocular image and depth image rendering

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415287B (en) * 2019-07-11 2021-08-13 Oppo广东移动通信有限公司 Depth map filtering method and device, electronic equipment and readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010037512A1 (en) * 2008-10-02 2010-04-08 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Intermediate view synthesis and multi-view data signal extraction
CN102307304A (en) * 2011-09-16 2012-01-04 北京航空航天大学 Image segmentation based error concealment method for entire right frame loss in stereoscopic video
CN102592275A (en) * 2011-12-16 2012-07-18 天津大学 Virtual viewpoint rendering method
CN102625127A (en) * 2012-03-24 2012-08-01 山东大学 Optimization method suitable for virtual viewpoint generation of 3D television
CN102724529A (en) * 2012-05-28 2012-10-10 清华大学 Method and device for generating video sequence of virtual viewpoints
CN103269438A (en) * 2013-05-27 2013-08-28 中山大学 Method for drawing depth image on the basis of 3D video and free-viewpoint television
CN103581648A (en) * 2013-10-18 2014-02-12 清华大学深圳研究生院 Hole filling method for new viewpoint drawing
CN104270624A (en) * 2014-10-08 2015-01-07 太原科技大学 Region-partitioning 3D video mapping method
CN104869386A (en) * 2015-04-09 2015-08-26 东南大学 Virtual viewpoint synthesizing method based on layered processing

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010037512A1 (en) * 2008-10-02 2010-04-08 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Intermediate view synthesis and multi-view data signal extraction
CN102307304A (en) * 2011-09-16 2012-01-04 北京航空航天大学 Image segmentation based error concealment method for entire right frame loss in stereoscopic video
CN102592275A (en) * 2011-12-16 2012-07-18 天津大学 Virtual viewpoint rendering method
CN102625127A (en) * 2012-03-24 2012-08-01 山东大学 Optimization method suitable for virtual viewpoint generation of 3D television
CN102724529A (en) * 2012-05-28 2012-10-10 清华大学 Method and device for generating video sequence of virtual viewpoints
CN103269438A (en) * 2013-05-27 2013-08-28 中山大学 Method for drawing depth image on the basis of 3D video and free-viewpoint television
CN103581648A (en) * 2013-10-18 2014-02-12 清华大学深圳研究生院 Hole filling method for new viewpoint drawing
CN104270624A (en) * 2014-10-08 2015-01-07 太原科技大学 Region-partitioning 3D video mapping method
CN104869386A (en) * 2015-04-09 2015-08-26 东南大学 Virtual viewpoint synthesizing method based on layered processing

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LONGJUN LIU: "Spatio-temporal adaptive 2D to 3D video conversion for 3DTV", IEEE *
YU CHENG FAN et al.: "Vivid-DIBR based 2D-3D image conversion system for 3D display", IEEE *
曾耀先: "Novel viewpoint generation based on the DIBR algorithm and its image inpainting", CNKI *
王超: "Viewpoint rendering technology in multi-view video", CNKI *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107809630A (en) * 2017-10-24 2018-03-16 天津大学 Based on the multi-view point video super-resolution rebuilding algorithm for improving virtual view synthesis
CN107809630B (en) * 2017-10-24 2019-08-13 天津大学 Based on the multi-view point video super-resolution rebuilding algorithm for improving virtual view synthesis
CN112513929A (en) * 2019-11-29 2021-03-16 深圳市大疆创新科技有限公司 Image processing method and device
WO2021180204A1 (en) * 2020-03-12 2021-09-16 京东方科技集团股份有限公司 Image inpainting method and apparatus, and electronic device
CN112291549A (en) * 2020-09-23 2021-01-29 广西壮族自治区地图院 Method for acquiring stereoscopic sequence frame images of raster topographic map based on DEM
CN112291549B (en) * 2020-09-23 2021-07-09 广西壮族自治区地图院 Method for acquiring stereoscopic sequence frame images of raster topographic map based on DEM
CN112308911A (en) * 2020-10-26 2021-02-02 中国科学院自动化研究所 End-to-end visual positioning method and system
CN112686877A (en) * 2021-01-05 2021-04-20 同济大学 Binocular camera-based three-dimensional house damage model construction and measurement method and system
CN112686877B (en) * 2021-01-05 2022-11-11 同济大学 Binocular camera-based three-dimensional house damage model construction and measurement method and system
CN116405650A (en) * 2023-03-10 2023-07-07 珠海莫界科技有限公司 Image correction method, image correction device, storage medium, and display apparatus
CN117061720A (en) * 2023-10-11 2023-11-14 广州市大湾区虚拟现实研究院 Stereo image pair generation method based on monocular image and depth image rendering
CN117061720B (en) * 2023-10-11 2024-03-01 广州市大湾区虚拟现实研究院 Stereo image pair generation method based on monocular image and depth image rendering

Also Published As

Publication number Publication date
CN106791773B (en) 2018-06-01

Similar Documents

Publication Publication Date Title
CN106791773A (en) A kind of novel view synthesis method based on depth image
CN109859098B (en) Face image fusion method and device, computer equipment and readable storage medium
CN104574501B (en) A kind of high-quality texture mapping method for complex three-dimensional scene
US8860712B2 (en) System and method for processing video images
CN107481279B (en) Monocular video depth map calculation method
CN106023303B (en) A method of Three-dimensional Gravity is improved based on profile validity and is laid foundations the dense degree of cloud
CN111325693B (en) Large-scale panoramic viewpoint synthesis method based on single viewpoint RGB-D image
CN106600686A (en) Three-dimensional point cloud reconstruction method based on multiple uncalibrated images
US20120032948A1 (en) System and method for processing video images for camera recreation
CN113112612B (en) Positioning method and system for dynamic superposition of real person and mixed reality
CN106791774A (en) Virtual visual point image generating method based on depth map
CN104182968B (en) The fuzzy moving-target dividing method of many array optical detection systems of wide baseline
CN111382618B (en) Illumination detection method, device, equipment and storage medium for face image
CN115731336B (en) Image rendering method, image rendering model generation method and related devices
CN112598807A (en) Training method and device for face key point detection model, computer equipment and storage medium
CN111091151A (en) Method for generating countermeasure network for target detection data enhancement
CN114519778B (en) Target three-dimensional reconstruction method, device, equipment and medium of multi-angle SAR data
CN108924434B (en) Three-dimensional high dynamic range image synthesis method based on exposure transformation
CN117501313A (en) Hair rendering system based on deep neural network
CN115063485B (en) Three-dimensional reconstruction method, device and computer-readable storage medium
CN116543109A (en) Hole filling method and system in three-dimensional reconstruction
CN109816765A (en) Texture towards dynamic scene determines method, apparatus, equipment and medium in real time
CN113077504B (en) Large scene depth map generation method based on multi-granularity feature matching
CN112002019B (en) Method for simulating character shadow based on MR mixed reality
JP2019149112A (en) Composition device, method, and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Feng Yuanjing

Inventor after: Huang Chenchen

Inventor after: Huang Liangpeng

Inventor after: Li Jiajing

Inventor after: Chen Feng

Inventor after: Pan Shanwei

Inventor after: Yang Yong

Inventor after: Hu Jianqiao

Inventor after: Kong Deping

Inventor after: Chen Hong

Inventor before: Feng Yuanjing

Inventor before: Huang Liangpeng

Inventor before: Li Jiajing

Inventor before: Chen Feng

Inventor before: Xu Zenan

Inventor before: Ye Jiasheng

Inventor before: Chen Wenzhou

Inventor before: Li Dingbang

Inventor before: Wang Zenan

CB03 Change of inventor or designer information
GR01 Patent grant
GR01 Patent grant