CN106060509A - Free viewpoint image synthetic method introducing color correction - Google Patents


Info

Publication number
CN106060509A
CN106060509A (application CN201610334492.7A; granted as CN106060509B)
Authority
CN
China
Prior art keywords
view
virtual
virtual view
layer
occlusion areas
Prior art date
Legal status
Granted
Application number
CN201610334492.7A
Other languages
Chinese (zh)
Other versions
CN106060509B (en)
Inventor
焦李成
乔伊果
侯彪
杨淑媛
刘红英
曹向海
马文萍
马晶晶
张丹
霍丽娜
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201610334492.7A priority Critical patent/CN106060509B/en
Publication of CN106060509A publication Critical patent/CN106060509A/en
Application granted granted Critical
Publication of CN106060509B publication Critical patent/CN106060509B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/122: Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/111: Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a free viewpoint image synthesis method introducing color correction, which mainly addresses the color discontinuity and the blurred hole edges of images produced by existing view synthesis techniques. The method is implemented as follows: input the left and right viewpoint views and their corresponding depth maps, and obtain left and right virtual views by 3D warping; fuse the left and right virtual views into the non-occluded region of an intermediate virtual view according to their positional relationship; substitute the color differences between the background regions of the virtual views for the color differences between their occluded regions, and obtain color-corrected occluded regions by a histogram matching algorithm; fuse the non-occluded region with the color-corrected occluded regions to obtain an intermediate viewpoint image that still contains holes; and fill the holes of this image layer by layer to obtain the final synthesized virtual image. The invention improves the quality of the synthesized virtual image and the viewing comfort of 3D video, and can be applied to stereoscopic multimedia.

Description

Free viewpoint image synthesis method introducing color correction
Technical field
The invention belongs to the technical field of image processing, and particularly relates to a free viewpoint image synthesis method that can be used in stereoscopic multimedia.
Background art
Because stereoscopic multimedia (3DTV) can provide a more natural and lifelike visual experience, it has in recent years been recognized by consumers and has gradually penetrated the multimedia market, and free viewpoint synthesis, as a core technology of the 3DTV field, has been widely studied by researchers. 3D video places requirements on the viewing position: different positions should receive different parallax, i.e. depth information, which means that every viewing position would need its own camera to capture the scene. Since stereoscopic cameras are expensive and shooting conditions are constrained, omnidirectional capture is difficult to realize. Free viewpoint image synthesis effectively solves this technical difficulty and is essential to the enrichment of stereoscopic video resources and the development of the 3DTV field.
Free viewpoint image synthesis is a technique that fuses the left and right viewpoint views and their depth maps into a virtual view at an intermediate viewpoint. When synthesizing the new viewpoint image, the change of viewpoint re-exposes partially occluded regions in the field of view, so the left and right viewpoint images must be combined to fill the newly exposed regions. In 2009, Y. Mori et al. proposed the first relatively systematic free viewpoint synthesis method (Y. Mori, N. Fukushima, T. Yendo, T. Fujii and M. Tanimoto, "View generation with 3D warping using depth information for FTV," Signal Processing: Image Communication, 24, 65–72, 2009), which is divided into the following four steps:
1) 3D warping: 3D warping is essentially a projection process. Mori uses forward projection: based on the spatial transformation relationship, the left and right views and their depth maps are projected to the target viewpoint position, yielding the left and right virtual views of the target viewpoint.
2) Rounding-error removal: during forward projection the coordinates are rounded, which causes isolated single-pixel gaps in the warped virtual views; the erroneous pixels must be detected and filled from their surrounding pixels to remove them.
3) Left-right virtual view synthesis: the left and right virtual views are fused according to their positional relationship to obtain the non-occluded region, and the occluded regions newly exposed on the left and right sides of the intermediate view are filled from the left and right virtual views respectively, yielding the target virtual view.
4) Image inpainting: abrupt depth changes cause another form of pixel loss during 3D warping, known as holes, which must be filled by an image inpainting algorithm.
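For a rectified horizontal stereo rig, the 3D warping of step 1) reduces to a per-pixel disparity shift d = f·b/Z. The following minimal sketch illustrates that idea only; the function name, the hole marker and the rectified-camera assumption are ours, not Mori's, and a real implementation would also resolve collisions with a z-buffer:

```python
import numpy as np

def forward_warp(view, depth, f, b, direction=1):
    """Forward-warp a rectified view toward a virtual viewpoint.

    A pixel at column x with depth Z maps to column
    x + direction * round(f * b / Z) in the virtual view.
    Target pixels that receive no source pixel stay marked as holes (-1).
    """
    h, w = depth.shape
    warped = np.full(view.shape, -1.0)           # -1 marks holes / disocclusions
    for y in range(h):
        for x in range(w):
            d = int(round(f * b / depth[y, x]))  # disparity from depth
            xt = x + direction * d
            if 0 <= xt < w:
                warped[y, xt] = view[y, x]
    return warped

# toy example: constant depth shifts the whole row by one pixel
view = np.array([[10., 20., 30., 40.]])
depth = np.full((1, 4), 2.0)
print(forward_warp(view, depth, f=1.0, b=2.0))   # → [[-1. 10. 20. 30.]]
```

The leftmost column of the result is a newly exposed (occluded) region: nothing in the source view maps to it, which is exactly the kind of gap steps 3) and 4) must fill.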
For the rounding-error problem in 3D warping, K.J. Oh et al. proposed a backward projection strategy that projects from the target viewpoint position to the left and right viewpoint positions (K.J. Oh, S. Yea, A. Vetro and Y.S. Ho, "Virtual View Synthesis Method and Self-Evaluation Metrics for Free Viewpoint Television and 3D Video," International Journal of Imaging Systems and Technology, 20, 378–390, 2010). The rounding error is then taken at the left and right viewpoint coordinates, which guarantees that every pixel of the non-occluded region of the target view has a corresponding mapped pixel and effectively prevents missing pixels.
When synthesizing the left and right virtual views, the exposed occluded regions must be filled from the left and right views simultaneously. Since the shooting conditions of the left and right viewpoints cannot be made identical, the two views differ in color, brightness and saturation, which produces color discontinuities in the synthesized image. For this problem, K.J. Oh et al. also proposed a histogram matching algorithm in the same paper: one of the two views is taken as the main view and the other as the auxiliary view. First, the intermediate virtual view is synthesized from the main and auxiliary views according to the positional relationship; then the histograms of the virtual view and the main view are computed, and histogram matching gives the virtual view the same color characteristics as the main view; finally, the occluded regions of the virtual view are filled from the main and auxiliary views. This method effectively removes the color discontinuity between the synthesized image and the main view, but the color discontinuity between the synthesized image and the auxiliary image remains quite obvious.
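The histogram matching used above (and again in step 9 of the present invention) can be sketched with the standard CDF-mapping construction; the function below is our illustration, not the cited paper's code:

```python
import numpy as np

def match_histogram(src, ref_hist):
    """Remap src pixel values so its histogram matches ref_hist (256 bins)."""
    src_hist = np.bincount(src.ravel(), minlength=256).astype(float)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    # for each source level, find the reference level with the matching CDF
    mapping = np.searchsorted(ref_cdf, src_cdf).clip(0, 255)
    return mapping[src].astype(np.uint8)

# toy example: a dark two-level image is pushed onto a bright target histogram
dark = np.array([[10, 10, 20, 20]], dtype=np.uint8)
bright_hist = np.zeros(256)
bright_hist[[200, 210]] = 2            # target colors: values 200 and 210
print(match_histogram(dark, bright_hist))   # → [[200 200 210 210]]
```

The remapped image keeps its spatial structure while adopting the color statistics of the reference, which is exactly what makes the corrected region join its surroundings without a visible seam.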
Hole filling is a major issue both in free viewpoint image synthesis and in 2D-to-3D conversion. K.J. Oh observed that holes originate from the background (K.J. Oh, S. Yea and Y.S. Ho, "Hole filling method using depth based in-painting for view synthesis in free viewpoint television and 3-D video," Proc. Picture Coding Symposium, 2009), separated foreground from background using depth information, and then filled the holes with background information; however, the concrete filling scheme is neither exhaustive nor ideal. M. Solh proposed a simple and efficient layer-by-layer hole filling scheme (M. Solh and G. AlRegib, "Hierarchical hole-filling for depth-based view synthesis in FTV and 3D video," IEEE Journal on Selected Topics in Signal Processing, 6, 495–504, 2012); its drawback is that it ignores the necessity of filling holes from the background, which blurs the edges of the filled regions.
Summary of the invention
In view of the above deficiencies of the prior art, the purpose of the present invention is to propose a free viewpoint image synthesis method introducing color correction, so as to completely eliminate the color discontinuity in the synthesized image and improve the sharpness of the filled region edges.
The technical scheme of the invention is: perform color-correction-based view synthesis on the left and right views after 3D warping, and perform depth-based layer-by-layer hole filling on the synthesized virtual view, obtaining a high-quality intermediate viewpoint view with continuous color. The steps include the following:
1) Input the left and right viewpoint views and their corresponding depth maps, and project them to the intermediate viewpoint plane based on the positional relationship and the projection equations, obtaining the left virtual view W_L, the right virtual view W_R, the left virtual depth map D_L and the right virtual depth map D_R;
2) Make the occluded region W_L' of the left virtual view and the occluded region W_R' of the right virtual view coincide, and make the non-occluded region M_L of the left virtual view and the non-occluded region M_R of the right virtual view coincide;
3) Perform weighted fusion of the non-occluded region M_L of the left virtual view and the non-occluded region M_R of the right virtual view, obtaining the non-occluded region M of the intermediate virtual view;
4) Perform weighted fusion of the non-occluded region N_L of the left virtual depth map and the non-occluded region N_R of the right virtual depth map, obtaining the non-occluded region N of the intermediate virtual depth map;
5) Fuse the non-occluded region N of the intermediate virtual depth map with the occluded region D_L' of the left virtual depth map and the occluded region D_R' of the right virtual depth map, obtaining the final intermediate virtual depth map A_0;
6) Perform image segmentation on the non-occluded region M of the intermediate virtual view, separating the foreground M_f from the background M_b;
7) Compute the histograms of the non-occluded-region background and of the occluded regions:
7a) From the segmented background region M_b, find the corresponding background region M_Lb in the left virtual view and the background region M_Rb in the right virtual view, and compute the histogram H_b of the intermediate virtual view background M_b, the histogram H_Lb of the left virtual view background M_Lb and the histogram H_Rb of the right virtual view background M_Rb;
7b) Compute the histogram H_L' of the left virtual view occluded region W_L' and the histogram H_R' of the right virtual view occluded region W_R';
8) Compute the differences between the background-region histograms and substitute them for the differences between the occluded-region histograms, obtaining the histogram C_L of the left occluded region and the histogram C_R of the right occluded region of the intermediate virtual view;
9) Using a histogram matching algorithm, match the histogram H_L' of the left virtual view occluded region to the histogram C_L of the left occluded region of the intermediate virtual view, obtaining the color-corrected left occluded region Cf_L of the intermediate virtual view; likewise, match the histogram H_R' of the right virtual view occluded region to the histogram C_R of the right occluded region of the intermediate virtual view, obtaining the color-corrected right occluded region Cf_R;
10) Fuse the non-occluded region M of the intermediate virtual view with the left occluded region Cf_L and the right occluded region Cf_R, obtaining the new intermediate virtual view B_0;
11) According to the depth information in the virtual depth map A_0, select background pixels and down-sample A_0 and the intermediate virtual view B_0 layer by layer, obtaining the down-sampled virtual depth map A_k and virtual view B_k of each layer, until the virtual depth map A_S and virtual view B_S of the final layer S contain no holes;
12) Starting from layer S, fill the holes in the down-sampled virtual views B_k' layer by layer upward, obtaining the repaired image F_k' of each layer, until the repaired image F_0 of the initial layer, i.e. the final free viewpoint image, is obtained.
Compared with the prior art, the present invention has the following features:
1. The present invention uses statistical theory and a substitution idea to perform color correction: the color differences between the non-occluded regions of the left/right views and the intermediate view are used to reflect the color differences between their occluded regions, thereby solving the color discontinuity between the occluded and non-occluded regions of the synthesized virtual view.
2. The present invention uses an image segmentation algorithm to separate the non-occluded region into foreground and background, and uses the color differences between the background regions of the synthesized image and the left/right views to reflect the color differences between the occluded regions. Since the occluded regions originate from the background, the background color differences reflect the occluded-region color differences more accurately and reasonably.
3. The present invention uses a histogram matching algorithm to match the histograms of the occluded regions of the original left and right views to the histograms of the color-corrected occluded regions; the reconstructed occluded-region image is not only natural but also joins the non-occluded region without color difference.
4. The present invention uses a depth-based layer-by-layer hole filling algorithm that, based on the depth information, purposefully selects background neighborhood pixels to fill missing pixels; this accurate filling method effectively improves the image quality of the synthesized virtual view.
Simulation results show that, by combining the histogram-matching-based color correction algorithm with the depth-based layer-by-layer hole filling algorithm for virtual view synthesis, the present invention obtains natural and realistic synthesized images, and is a systematic free viewpoint view synthesis algorithm that can significantly improve viewing comfort.
Brief description of the drawings
Fig. 1 is the overall flowchart of the implementation of the present invention;
Fig. 2 is the sub-flowchart of the depth-based layer-by-layer hole filling in the present invention;
Fig. 3 shows the test images used in the simulation experiments;
Fig. 4 shows, on the Ballet test set, the comparison between the ground truth and the free viewpoint images synthesized by the present invention and by two existing typical methods;
Fig. 5 shows, on the Breakdancing test set, the comparison between the ground truth and the free viewpoint images synthesized by the present invention and by two existing typical methods.
Detailed description of the invention
The detailed implementation and effects of the present invention are described in further detail below in conjunction with the drawings.
With reference to Fig. 1, the detailed implementation of the present invention is as follows:
Step 1. Input the left and right viewpoint views and their corresponding depth maps and perform 3D warping.
1a) Input the left viewpoint view L to be synthesized and its corresponding left viewpoint depth map L_D, and the right viewpoint view R and its corresponding right viewpoint depth map R_D;
1b) Based on the positional relationship and the projection equations, apply backward 3D warping to them: project the left viewpoint view L to the intermediate viewpoint plane to obtain the left virtual view W_L, project the right viewpoint view R to the intermediate viewpoint plane to obtain the right virtual view W_R, project the left viewpoint depth map L_D to the intermediate viewpoint plane to obtain the left virtual depth map D_L, and project the right viewpoint depth map R_D to the intermediate viewpoint plane to obtain the right virtual depth map D_R.
The left and right viewpoint views, the left and right viewpoint depth maps, the positional information and the projection matrices used here all come from the database provided by Microsoft Research: Microsoft Research, Image-Based Realities – 3D Video Download, http://research.microsoft.com/ivm/3DVideoDownload/.
Step 2. Make the occluded regions of the left and right virtual views coincide, and make their non-occluded regions coincide.
Because of the change of viewing angle, new exposed regions appear in the left virtual view W_L and the right virtual view W_R after 3D warping; the image information of these newly exposed regions is absent from the original input views, and such regions with missing image information are the occluded regions. If the left virtual view has an occluded region, the image information of the corresponding region is artificially erased in the right virtual view; if the right virtual view has an occluded region, the image information of the corresponding region is artificially erased in the left virtual view. In this way the left and right virtual views have coincident occluded regions W_L' and W_R', and also coincident non-occluded regions M_L and M_R.
Step 3. Synthesize the non-occluded region M of the intermediate virtual view.
3a) Based on the positional relationship provided in the database, compute the geometric distance t_L from the left viewpoint to the intermediate viewpoint and the geometric distance t_R from the right viewpoint to the intermediate viewpoint, and from these distances compute the weight coefficient α of the left virtual view non-occluded region and the weight coefficient 1-α of the right virtual view non-occluded region used when synthesizing the non-occluded region of the intermediate virtual view, the closer viewpoint receiving the larger weight: α = t_R / (t_L + t_R);
3b) Based on the weight coefficients, perform weighted fusion of the left virtual view non-occluded region M_L and the right virtual view non-occluded region M_R, obtaining the non-occluded region of the intermediate virtual view: M = α·M_L + (1-α)·M_R.
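The distance-weighted fusion of step 3 can be sketched as follows; note that the weight formula α = t_R/(t_L + t_R) is our reconstruction of the formula missing from this text (the patent's original expression is an image), chosen so that the closer viewpoint dominates:

```python
import numpy as np

def blend_non_occluded(M_L, M_R, t_L, t_R):
    """Distance-weighted fusion of the two non-occluded regions.

    alpha = t_R / (t_L + t_R) weights the LEFT virtual view, so the view
    whose camera is nearer to the intermediate position contributes more.
    (Reconstructed weight formula; the patent's own formula is not in this text.)
    """
    alpha = t_R / (t_L + t_R)
    return alpha * M_L + (1.0 - alpha) * M_R

# toy example: intermediate viewpoint three times closer to the left camera
M_L = np.array([[100.0, 120.0]])
M_R = np.array([[110.0, 130.0]])
print(blend_non_occluded(M_L, M_R, t_L=1.0, t_R=3.0))  # alpha = 0.75 → [[102.5 122.5]]
```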
Step 4. Synthesize the non-occluded region N of the intermediate virtual depth map.
Using the weight coefficient α from Step 3, perform weighted fusion of the non-occluded region N_L of the left virtual depth map and the non-occluded region N_R of the right virtual depth map, obtaining the non-occluded region of the intermediate virtual depth map: N = α·N_L + (1-α)·N_R.
Step 5. Synthesize the intermediate virtual depth map A_0.
Fuse the non-occluded region N of the intermediate virtual depth map with the occluded region D_L' of the left virtual depth map and the occluded region D_R' of the right virtual depth map, obtaining the final intermediate virtual depth map: A_0 = N + D_L' + D_R'.
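The additive fusion A_0 = N + D_L' + D_R' works because each region is kept on a full-size canvas that is zero outside its own support, so the three supports are disjoint and plain addition merges them. A toy illustration (array values are ours):

```python
import numpy as np

# each region lives on a full-size canvas, zero outside its own support
N    = np.array([[0., 7., 7., 0.]])   # non-occluded part
D_Lp = np.array([[3., 0., 0., 0.]])   # left occluded part (D_L')
D_Rp = np.array([[0., 0., 0., 9.]])   # right occluded part (D_R')
print(N + D_Lp + D_Rp)                # → [[3. 7. 7. 9.]]
```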
Step 6. Perform image segmentation on the non-occluded region M of the intermediate virtual view.
Since depth information alone cannot separate foreground and background exactly, an existing effective image segmentation algorithm can be used to segment the non-occluded region M of the intermediate virtual view and accurately separate the foreground M_f from the background M_b. Commonly used image segmentation methods can be found in the following documents:
[1] D. Comaniciu, P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603–619, 2002;
[2] P. Meer, B. Georgescu, "Edge detection with embedded confidence," IEEE Trans. Pattern Anal. Machine Intell., vol. 28, 2001;
[3] C. Christoudias, B. Georgescu, P. Meer, "Synergism in low level vision," International Conference on Pattern Recognition, 2001.
Step 7. Compute the histograms of the non-occluded-region background and of the occluded regions.
7a) From the intermediate virtual view background region M_b, find the corresponding background region M_Lb in the left virtual view and the background region M_Rb in the right virtual view; using a histogram statistics algorithm over the pixel-value interval [0, 255], compute the histogram H_b of the intermediate virtual view background region M_b, the histogram H_Lb of the left virtual view background M_Lb and the histogram H_Rb of the right virtual view background M_Rb.
7b) Using the histogram statistics algorithm over the pixel-value interval [0, 255], compute the histogram H_L' of the left virtual view occluded region W_L' and the histogram H_R' of the right virtual view occluded region W_R'.
Step 8. Construct the left occluded-region histogram C_L and the right occluded-region histogram C_R of the intermediate virtual view.
Since the occluded regions, i.e. the image parts that were blocked, originate from the background, the color difference between the occluded regions of the left view and the intermediate virtual view can be substituted by the color difference between their background regions; likewise, the color difference between the occluded regions of the right view and the intermediate virtual view can be substituted by the color difference between their background regions.
8a) Subtract the histogram H_b of the intermediate virtual view background from the histogram H_Lb of the left virtual view background, obtaining the statistical difference histogram Diff_L between the left virtual view background region and the intermediate virtual view background region; likewise, subtract H_b from the histogram H_Rb of the right virtual view background, obtaining the statistical difference histogram Diff_R between the right virtual view background region and the intermediate virtual view background region.
8b) Based on the substitution idea above, add the statistical difference histogram Diff_L onto the histogram H_L' of the left virtual view occluded region, obtaining the histogram C_L of the left occluded region of the intermediate virtual view; likewise, add Diff_R onto the histogram H_R' of the right virtual view occluded region, obtaining the histogram C_R of the right occluded region of the intermediate virtual view.
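Taken literally, the substitution of step 8 is a per-bin arithmetic on histograms; the following sketch follows that literal reading (the clipping of negative bin counts to zero is our addition, and the toy 3-bin histograms are ours):

```python
import numpy as np

def corrected_occlusion_hist(H_occ, H_side_bg, H_mid_bg):
    """Target histogram C for an occluded region (step 8 substitution).

    Since occluded regions come from the background, the side-to-middle
    color difference measured on the background histograms,
    Diff = H_side_bg - H_mid_bg, is added onto the occluded-region
    histogram: C = H_occ + Diff.  Clipping negatives to zero is our choice,
    so that C remains a valid histogram.
    """
    return np.clip(H_occ + (H_side_bg - H_mid_bg), 0, None)

H_occ     = np.array([4., 0., 0.])   # toy 3-bin histograms
H_side_bg = np.array([1., 3., 0.])
H_mid_bg  = np.array([3., 1., 0.])
print(corrected_occlusion_hist(H_occ, H_side_bg, H_mid_bg))  # → [2. 2. 0.]
```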
Step 9. Perform color correction on the left and right virtual view occluded regions by histogram matching.
9a) Project the pixel values of the pixels in the left virtual view occluded region onto neighboring pixel values so as to change the histogram H_L' of the left virtual view occluded region and make it equal to the histogram C_L of the left occluded region of the intermediate virtual view; that is, by histogram matching, match H_L' to C_L and obtain the color-corrected left virtual view occluded region Cf_L.
9b) Project the pixel values of the pixels in the right virtual view occluded region onto neighboring pixel values so as to change the histogram H_R' of the right virtual view occluded region and make it equal to the histogram C_R of the right occluded region of the intermediate virtual view; that is, by histogram matching, match H_R' to C_R and obtain the color-corrected right virtual view occluded region Cf_R.
Step 10. Synthesize the intermediate virtual view B_0.
Fuse the color-corrected left virtual view occluded region Cf_L and right virtual view occluded region Cf_R with the fused non-occluded region M of the intermediate virtual view, synthesizing the intermediate virtual view: B_0 = M + Cf_L + Cf_R.
Step 11. Down-sample the intermediate virtual depth map A_0 and the intermediate virtual view B_0 layer by layer.
The layer-by-layer hole filling algorithm proposed by M. Solh can be used to fill the holes of the synthesized virtual view: first down-sample layer by layer to obtain the down-sampled virtual view of each layer, then repair the down-sampled virtual views upward from the bottom layer to obtain the repaired image of the initial layer. That algorithm ignores the useful fact that holes originate from the background, so its repaired images are blurred at the edges of the hole regions; the present invention therefore incorporates background information during down-sampling, solving the problem of blurred hole-region edges. The steps are as follows:
With reference to the solid arrows of Fig. 2, this step is implemented as follows:
11a) According to the depth information in the virtual depth map A_0, select background pixels and down-sample it layer by layer until the final layer contains no holes, successively obtaining the down-sampled images A_1, A_2, ..., A_k, ..., A_S of A_0, where the k-th layer virtual depth map A_k is obtained from the virtual depth map A_{k-1} of the layer above it; the value of any point (m, n) of A_k is computed as follows:
Here X_{m,n} is a 5×5 matrix block in the (k-1)-th layer virtual depth map A_{k-1} centered at (2m+3, 2n+3); ω is a 5×5 Gaussian kernel; qh is the threshold separating foreground and background, depth values greater than qh being background and those less than qh being foreground; the function l(x) is used to select background pixel points; nz(u) denotes the number of non-zero points in the matrix u; num(v) denotes the number of elements satisfying condition v; k increases one by one from 1 to S, where S is the final layer whose virtual depth map A_S contains no hole points.
That is, the pixel value A_k(m, n) of any point (m, n) in the virtual depth map A_k is obtained by depth-based anisotropic smoothing of the matrix block X_{m,n} in A_{k-1}:
When the matrix block X_{m,n} contains no hole points and all of its pixels belong to the background, or all belong to the foreground, Gaussian smoothing is applied to all pixels of X_{m,n} to obtain the pixel value A_k(m, n);
When X_{m,n} contains hole points but all of its non-hole points belong to the background, or all belong to the foreground, the pixel values of all non-hole points of X_{m,n} are weighted-averaged to obtain A_k(m, n);
When the non-hole points of X_{m,n} partly belong to the foreground and partly to the background, the background pixels of X_{m,n} are selected and their pixel values are weighted-averaged to obtain A_k(m, n);
When the pixel distribution of X_{m,n} does not belong to any of the above three cases, the pixel value A_k(m, n) is zero.
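The four cases above amount to a per-block decision rule. The sketch below illustrates it with a 2×2 block, a plain average in place of the 5×5 Gaussian kernel ω, and a hole marker of our own choosing; it is an illustration of the case logic, not the patent's exact formula:

```python
import numpy as np

HOLE = -1.0  # our marker for missing (hole) pixels

def block_value(block, depth_block, qh):
    """Depth-guided value of one down-sampled pixel from a block of A_{k-1}."""
    valid = block != HOLE
    if not valid.any():
        return 0.0                       # degenerate block: assigned zero
    bg = (depth_block > qh) & valid      # valid background pixels (depth > qh)
    fg = valid & ~bg                     # valid foreground pixels
    if bg.any() and fg.any():
        return float(block[bg].mean())   # mixed layers: use background only
    return float(block[valid].mean())    # single layer: average all valid pixels

# toy block: one hole, one foreground pixel (9), two background pixels (5, 7)
block = np.array([[5.0, HOLE], [7.0, 9.0]])
depth = np.array([[10.0, 0.0], [10.0, 1.0]])
print(block_value(block, depth, qh=5.0))  # background pixels 5 and 7 → 6.0
```

Preferring background pixels in the mixed case is exactly what keeps foreground colors from bleeding into hole regions, which is the edge-blur problem this invention fixes in Solh's scheme.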
11b) According to the depth information in the virtual depth map A_0, select background pixels and down-sample the virtual view B_0 layer by layer until the final layer contains no holes, successively obtaining the down-sampled images B_1, B_2, ..., B_k, ..., B_S of B_0, where any k-th layer virtual view B_k is obtained from the virtual view B_{k-1} of the layer above it; the value of any point (m, n) of B_k is computed as follows:
Here Y_{m,n} is a 5×5 matrix block in the (k-1)-th layer down-sampled virtual view B_{k-1} centered at (2m+3, 2n+3), and X_{m,n} is the matrix block in the (k-1)-th layer virtual depth map A_{k-1} mentioned in step 11a), whose size and position correspond to Y_{m,n} and which provides the depth information. That is, the pixel value B_k(m, n) of any point (m, n) in B_k is obtained by depth-based anisotropic smoothing of the matrix block Y_{m,n} in B_{k-1}, with the depth information supplied by X_{m,n}.
Step 12. Repair the holes layer by layer through up-sampling, obtaining the final free viewpoint image F_0.
With reference to the hollow arrows of Fig. 2, this step is implemented as follows:
12a) Since the down-sampled virtual view B_S of layer S contains no holes, the virtual view B_S of layer S is taken directly as the repaired image F_S of layer S, i.e. F_S = B_S;
12b) Up-sample the repaired image F_S of layer S by linear interpolation, obtaining the expanded virtual view E_{S-1} with the same resolution as layer S-1, where the pixel value E_{S-1}(p, q) of the point at row p, column q of E_{S-1} is computed as follows:
Here i' and j' each take only the even values {-2, 0, 2}; i' and j' select a 3×3 matrix block of the repaired image F_S centered at (p, q), and E_{S-1}(p, q) is obtained by smoothing the selected 3×3 matrix block with a weight vector, which determines the weight of each element of the selected matrix block of F_S:
When i' = -2, j' = -2, the weight of element F_S(p-1, q-1) is 0.05²;
When i' = -2, j' = 0, the weight of element F_S(p-1, q) is 0.02;
When i' = -2, j' = 2, the weight of element F_S(p-1, q+1) is 0.05²;
When i' = 0, j' = -2, the weight of element F_S(p, q-1) is 0.02;
When i' = 0, j' = 0, the weight of element F_S(p, q) is 0.04²;
When i' = 0, j' = 2, the weight of element F_S(p, q+1) is 0.02;
When i' = 2, j' = -2, the weight of element F_S(p+1, q-1) is 0.05²;
When i' = 2, j' = 0, the weight of element F_S(p+1, q) is 0.02;
When i' = 2, j' = 2, the weight of element F_S(p+1, q+1) is 0.05².
12c) Fill the pixels at the holes of the same-layer virtual view B_{S-1} with the pixels of the expanded virtual view E_{S-1}, obtaining the repaired image F_{S-1} of layer S-1, where the pixel value F_{S-1}(p, q) of the point at row p, column q of F_{S-1} is computed as follows:
Step 12b) and 12c) give and be transitioned into S-1 layer by S layer, and obtain S-1 layer and repair image FS-1Process, under Face this process is applied to any kth ' layer, and obtain the reparation image F of k'-1 layerk'-1
12d) by step 12b), to any kth ' layer repairs image Fk'Up-sample, obtain differentiating with k'-1 layer etc. The expansion virtual view E of ratek'-1, then by step 12c) with expanding virtual view Ek'-1In pixel fill same layer virtual View Bk'-1Pixel at middle cavity, obtains the reparation image F of k'-1 layerk'-1, the value of k' is successively decreased to 0 one by one by S-1, i.e. By S-1 layer, the most upwards repetitive cycling step 12b) and 12c), obtain the reparation image F of each layer successivelyS-1, FS-2..., Fk'..., F0, initiation layer repairs image F0It is final free view-point image.
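The loop of steps 12a)-12d) can be sketched as follows. This is a minimal single-channel illustration, assuming each layer k stores a down-sampled view B[k] and a Boolean hole mask holes[k] (True at hole pixels); a simple bilinear expand stands in for the patent's weighted interpolation, and the names expand/repair are illustrative:

```python
import numpy as np

def expand(img):
    """Up-sample by a factor of 2 with simple bilinear interpolation.
    np.roll wraps at the border, which is adequate for a sketch."""
    h, w = img.shape
    out = np.zeros((2 * h, 2 * w))
    out[::2, ::2] = img
    out[1::2, ::2] = (out[::2, ::2] + np.roll(out[::2, ::2], -1, axis=0)) / 2
    out[:, 1::2] = (out[:, ::2] + np.roll(out[:, ::2], -1, axis=1)) / 2
    return out

def repair(B, holes):
    """B[k], holes[k] for k = 0..S; B[S] must contain no holes. Returns F_0."""
    S = len(B) - 1
    F = B[S]                              # 12a) F_S = B_S
    for k in range(S - 1, -1, -1):
        E = expand(F)                     # 12b) expanded virtual view E_k
        F = np.where(holes[k], E, B[k])   # 12c) fill only the hole pixels
    return F                              # F_0: the final free-viewpoint image
```

Replacing expand with the even-tap weighted smoothing of step 12b) recovers the patent's scheme more closely; the control flow (coarse-to-fine, fill only at holes) is the part this sketch demonstrates.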
The effect of the present invention can be further illustrated by the following experiments:
1. Simulation conditions:
The simulations were run on the Matlab R2012b platform, on a machine with a Core(TM) CPU at 3.20 GHz, 4.00 GB of memory, and the Windows XP operating system.
The present invention selects two groups of test images for simulation, shown in Fig. 3: Fig. 3(a) is the left-viewpoint view of the Ballet test set, Fig. 3(b) is the left-viewpoint depth map of the Ballet test set, Fig. 3(c) is the right-viewpoint view of the Ballet test set, and Fig. 3(d) is the right-viewpoint depth map of the Ballet test set; Fig. 3(e) is the left-viewpoint view of the Breakdancing test set, Fig. 3(f) is the left-viewpoint depth map of the Breakdancing test set, Fig. 3(g) is the right-viewpoint view of the Breakdancing test set, and Fig. 3(h) is the right-viewpoint depth map of the Breakdancing test set.
Simulation methods:
① the 3D-warping-based free-viewpoint image synthesis method proposed by Y. Mori;
② the background-hole-filling-based free-viewpoint image synthesis method proposed by K.J. Oh;
③ the free-viewpoint image synthesis method of the present invention, which introduces color correction.
2. Simulation content:
Simulation 1: free-viewpoint image synthesis is performed on the Ballet test set shown in Fig. 3(a), Fig. 3(b), Fig. 3(c), and Fig. 3(d) with each of the above three methods; the results are shown in Fig. 4, where Fig. 4(a) is the free-viewpoint image synthesized by the method proposed by Y. Mori, Fig. 4(b) is the free-viewpoint image synthesized by the method proposed by K.J. Oh, Fig. 4(c) is the free-viewpoint image synthesized by the method of the present invention, and Fig. 4(d) is the actual reference image.
Fig. 4(a) and Fig. 4(b) show that the methods proposed by Y. Mori and K.J. Oh both suffer from an obvious color discontinuity problem, while Fig. 4(c) shows that the present invention effectively resolves the color discontinuity between the background and both sides of the dancer, and fills the hole at the dancer's shoulder reasonably and accurately. Comparing Fig. 4(a), Fig. 4(b), Fig. 4(c), and Fig. 4(d), it can be seen that the hole-filling algorithm proposed by the present invention not only effectively solves the color discontinuity problem but also fills holes accurately and yields clear edges.
Simulation 2: free-viewpoint image synthesis is performed on the Breakdancing test set shown in Fig. 3(e), Fig. 3(f), Fig. 3(g), and Fig. 3(h) with each of the above three methods; the results are shown in Fig. 5, where Fig. 5(a) is the free-viewpoint image synthesized by the method proposed by Y. Mori, Fig. 5(b) is the free-viewpoint image synthesized by the method proposed by K.J. Oh, Fig. 5(c) is the free-viewpoint image synthesized by the method of the present invention, and Fig. 5(d) is the actual reference image.
Fig. 5(a) and Fig. 5(b) reflect that the methods proposed by Y. Mori and K.J. Oh both exhibit a certain degree of color discontinuity, while Fig. 5(c) shows that the present invention resolves the color discontinuity between the background and both sides of the dancer's legs. Comparing Fig. 5(a), Fig. 5(b), Fig. 5(c), and Fig. 5(d), it can be seen that the method proposed by the present invention solves the color discontinuity problem while preserving edges; the experimental results are stable, and viewing comfort is effectively improved.

Claims (9)

1. A free-viewpoint image synthesis method introducing color correction, comprising:
1) inputting left and right viewpoint views and their corresponding depth maps, and projecting the left and right views and depth maps onto the intermediate-viewpoint plane based on the positional relationship and the projection equation, obtaining a left virtual view W_L, a right virtual view W_R, a left virtual depth map D_L, and a right virtual depth map D_R;
2) superimposing the occluded region W_L' of the left virtual view and the occluded region W_R' of the right virtual view, and superimposing the non-occluded region M_L of the left virtual view and the non-occluded region M_R of the right virtual view;
3) weighting and merging the non-occluded region M_L of the left virtual view and the non-occluded region M_R of the right virtual view, obtaining the non-occluded region part M of the intermediate virtual view;
4) weighting and merging the non-occluded region N_L of the left virtual depth map and the non-occluded region N_R of the right virtual depth map, obtaining the non-occluded region N of the intermediate virtual depth map;
5) merging the non-occluded region N of the intermediate virtual depth map, the occluded region D_L' of the left virtual depth map, and the occluded region D_R' of the right virtual depth map, obtaining the final intermediate virtual depth map A_0;
6) performing image segmentation on the non-occluded region M of the intermediate virtual view, separating the foreground M_f and the background M_b;
7) computing histograms of the non-occluded background and of the occluded regions:
7a) from the segmented background region M_b, finding the corresponding background region M_Lb in the left virtual view and the corresponding background region M_Rb in the right virtual view, and computing the histogram H_b of the intermediate-virtual-view background M_b, the histogram H_Lb of the left-virtual-view background M_Lb, and the histogram H_Rb of the right-virtual-view background M_Rb;
7b) computing the histogram H_L' of the occluded region W_L' of the left virtual view and the histogram H_R' of the occluded region W_R' of the right virtual view;
8) computing the differences between the background-region histograms and substituting them for the differences between the occluded-region histograms, obtaining the histogram C_L of the left-side occluded region and the histogram C_R of the right-side occluded region of the intermediate virtual view;
9) using a histogram matching algorithm, matching the histogram H_L' of the left-virtual-view occluded region to the histogram C_L of the left-side occluded region of the intermediate virtual view, obtaining the color-corrected left-side occluded region Cf_L of the intermediate virtual view; and likewise matching the histogram H_R' of the right-virtual-view occluded region to the histogram C_R of the right-side occluded region of the intermediate virtual view, obtaining the color-corrected right-side occluded region Cf_R of the intermediate virtual view;
10) merging the non-occluded region M of the intermediate virtual view, the left-side occluded region Cf_L, and the right-side occluded region Cf_R, obtaining a new intermediate virtual view B_0;
11) according to the depth information in the virtual depth map A_0, selecting background pixels and down-sampling A_0 itself and the intermediate virtual view B_0 layer by layer, obtaining the down-sampled virtual depth map A_k and virtual view B_k of each layer, until the virtual depth map A_S and the virtual view B_S of the bottom layer S contain no holes;
12) starting from layer S and proceeding upward layer by layer, filling the holes in the down-sampled virtual views B_{k'}, obtaining the repaired image F_{k'} of each layer, until the initial-layer repaired image F_0, which is the final free-viewpoint image, is obtained.
2. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein the weighted merging in step 3) of the non-occluded region M_L of the left virtual view and the non-occluded region M_R of the right virtual view is carried out as follows:
3a) from the distance t_L from the left viewpoint to the intermediate viewpoint and the distance t_R from the right viewpoint to the intermediate viewpoint, computing the weight coefficient α of the left-virtual-view non-occluded region and the weight coefficient 1-α of the right-virtual-view non-occluded region used when synthesizing the non-occluded region of the intermediate virtual view, where:
3b) based on the weight coefficients, weighting and merging the left-virtual-view non-occluded region and the right-virtual-view non-occluded region, obtaining the non-occluded region part of the intermediate virtual view: M = α·M_L + (1-α)·M_R.
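A minimal sketch of steps 3a)-3b): the exact formula for α is given in the patent as an image not reproduced here, so the usual distance-based form α = t_R / (t_L + t_R), in which the nearer source view receives the larger weight, is assumed:

```python
import numpy as np

def fuse_non_occluded(M_L, M_R, t_L, t_R):
    """Weighted merge of the left/right non-occluded regions (step 3 sketch)."""
    alpha = t_R / (t_L + t_R)                # assumed form of the coefficient
    return alpha * M_L + (1 - alpha) * M_R   # M = alpha*M_L + (1-alpha)*M_R

# Equidistant viewpoints -> plain average of the two views.
M = fuse_non_occluded(np.full((2, 2), 10.0), np.full((2, 2), 20.0), 1.0, 1.0)
```

With t_L = t_R the blend degenerates to the arithmetic mean; as the intermediate viewpoint approaches the left camera (t_L → 0), α → 1 and the left view dominates.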
3. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein the weighted merging in step 4) of the non-occluded region N_L of the left virtual depth map and the non-occluded region N_R of the right virtual depth map is carried out by weighting N_L and N_R with the weight coefficient α obtained in step 3), obtaining the non-occluded region part of the intermediate virtual depth map: N = α·N_L + (1-α)·N_R.
4. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein in step 8) the differences between the background-region histograms substitute for the differences between the occluded-region histograms as follows:
8a) subtracting the histogram H_b of the intermediate-virtual-view background from the histogram H_Lb of the left-virtual-view background, obtaining the average statistical difference histogram Diff_L between the left-virtual-view background region and the intermediate-virtual-view background region; likewise, subtracting the histogram H_b of the intermediate-virtual-view background from the histogram H_Rb of the right-virtual-view background, obtaining the average statistical difference histogram Diff_R between the right-virtual-view background region and the intermediate-virtual-view background region;
8b) adding the average statistical difference histogram Diff_L to the histogram H_L' of the left-virtual-view occluded region, obtaining the histogram C_L of the left-side occluded region of the intermediate virtual view; likewise, adding the average statistical difference histogram Diff_R to the histogram H_R' of the right-virtual-view occluded region, obtaining the histogram C_R of the right-side occluded region of the intermediate virtual view.
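Steps 8a)-8b) can be sketched directly on histogram arrays; the clipping at zero is an added safeguard not stated in the claim:

```python
import numpy as np

def occlusion_target_hist(H_occ, H_side_bg, H_mid_bg):
    """Sketch of steps 8a)-8b): the background difference Diff = H_side_bg - H_mid_bg
    substitutes for the unobservable occluded-region difference, and the target
    histogram is C = H_occ + Diff (negative bins clipped to zero)."""
    diff = np.asarray(H_side_bg, float) - np.asarray(H_mid_bg, float)  # 8a)
    return np.maximum(np.asarray(H_occ, float) + diff, 0.0)           # 8b)

# Tiny 3-bin example: the occluded region's bin 0 is boosted, bin 2 clipped.
C_L = occlusion_target_hist([4, 0, 0], [2, 1, 0], [1, 1, 1])
```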
5. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein matching the histogram H_L' of the left-virtual-view occluded region to the histogram C_L of the left-side occluded region of the intermediate virtual view in step 9) consists of projecting the pixel values of the pixels in the left-virtual-view occluded region onto adjacent pixel values to the left and right, so as to change the histogram H_L' of the left-virtual-view occluded region until it equals the histogram C_L of the left-side occluded region of the intermediate virtual view.
6. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein matching the histogram H_R' of the right-virtual-view occluded region to the histogram C_R of the right-side occluded region of the intermediate virtual view in step 9) consists of projecting the pixel values of the pixels in the right-virtual-view occluded region onto adjacent pixel values to the left and right, so as to change the histogram H_R' of the right-virtual-view occluded region until it equals the histogram C_R of the right-side occluded region of the intermediate virtual view.
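The two claims above describe matching by shifting pixel values to neighbouring values until the histograms agree. An equivalent and more common formulation maps each source gray level through the cumulative distributions; the sketch below uses that CDF method (the function name is illustrative, not the patent's exact procedure):

```python
import numpy as np

def match_histogram(pixels, target_hist, levels=256):
    """Map pixel values so their histogram approximates target_hist."""
    pixels = np.asarray(pixels)
    src_hist = np.bincount(pixels, minlength=levels).astype(float)
    src_cdf = np.cumsum(src_hist) / max(src_hist.sum(), 1.0)
    tgt_cdf = np.cumsum(target_hist) / max(np.sum(target_hist), 1.0)
    # For each source level, pick the smallest target level whose CDF reaches it.
    mapping = np.searchsorted(tgt_cdf, src_cdf).clip(0, levels - 1)
    return mapping[pixels]

# Target puts equal mass at levels 10 and 20; source levels 0 and 1 map there.
tgt = np.zeros(256); tgt[10] = 2; tgt[20] = 2
out = match_histogram([0, 0, 1, 1], tgt)
```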
7. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein the layer-by-layer down-sampling in step 11), in which background pixels are selected according to the depth information in the virtual depth map A_0, is a process of repeatedly down-sampling the virtual depth map A_0, which contains hole points, until the bottom layer contains no hole points, successively obtaining the down-sampled images A_1, A_2, ..., A_k, ..., A_S of A_0, where the virtual depth map A_k of the k-th layer is obtained from the virtual depth map A_{k-1} of the layer above it, and the value of any point (m, n) of A_k is computed by the following formula:
where X_{m,n} is a 5 × 5 matrix block in the virtual depth map A_{k-1} of the (k-1)-th layer whose center is at point (2×m+3, 2×n+3); ω is a 5 × 5 Gaussian kernel; qh is the threshold dividing foreground from background, depth values greater than qh being background and those less than qh being foreground; the function L(x) is used to select background pixel points; nz(u) denotes the number of non-zero points in the matrix u; num(v) denotes the number of elements satisfying the condition v; the value of k increases one by one from 1 to S, S being the bottom layer at which the virtual depth map A_S contains no hole points.
8. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein the layer-by-layer down-sampling of the virtual view B_0 in step 11), in which background pixels are selected according to the depth information in the virtual depth map A_0, is a process of repeatedly down-sampling the intermediate virtual view B_0, which contains hole points, until the bottom layer contains no hole points, successively obtaining the down-sampled images B_1, B_2, ..., B_k, ..., B_S of B_0, where the virtual view B_k of any k-th layer is obtained from the virtual view B_{k-1} of the layer above it, and the value of any point (m, n) of B_k is computed by the following formula:
where X_{m,n} is a 5 × 5 matrix block in the virtual depth map A_{k-1} of the (k-1)-th layer whose center is at point (2×m+3, 2×n+3); Y_{m,n} is a 5 × 5 matrix block in the down-sampled virtual view B_{k-1} of the (k-1)-th layer whose center is at point (2×m+3, 2×n+3); ω is a 5 × 5 Gaussian kernel; qh is the threshold dividing foreground from background, depth values greater than qh being background and those less than qh being foreground; the function L(x) is used to select background pixel points; nz(u) denotes the number of non-zero points in the matrix u; num(v) denotes the number of elements satisfying the condition v; the value of k increases one by one from 1 to S, S being the bottom layer at which the virtual view B_S contains no hole points, which is also the bottom layer at which the virtual depth map A_S contains no hole points.
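The exact down-sampling formula referenced above is given in the patent as images. The following is a hedged single-channel sketch of the idea: Gaussian-weighted averaging over the 5 × 5 block of the previous layer, restricted to background pixels (depth > qh) whenever any are present. The kernel taps and the edge padding are assumptions:

```python
import numpy as np

def downsample_background(A_prev, qh):
    """One level of background-biased down-sampling (step 11 sketch)."""
    t = np.array([0.05, 0.25, 0.4, 0.25, 0.05])  # assumed 5-tap Gaussian
    w = np.outer(t, t)                           # 5x5 kernel, sums to 1
    h, wd = A_prev.shape
    out = np.zeros((h // 2, wd // 2))
    pad = np.pad(A_prev, 2, mode='edge')         # assumed border handling
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            block = pad[2 * m:2 * m + 5, 2 * n:2 * n + 5]
            bg = block > qh                      # L(x): keep background pixels
            if bg.any():
                out[m, n] = (block * w * bg).sum() / (w * bg).sum()
            else:
                out[m, n] = (block * w).sum()    # no background: plain Gaussian
    return out

# Uniform background depth survives down-sampling unchanged.
A1 = downsample_background(np.full((4, 4), 100.0), qh=50.0)
```

Excluding foreground pixels from the average is what lets the coarser layers fill holes with background rather than foreground colors when the pyramid is later expanded.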
9. The free-viewpoint image synthesis method introducing color correction according to claim 1, wherein in step 12), starting from layer S and proceeding upward layer by layer, the holes in the down-sampled virtual views B_{k'} are filled, obtaining the repaired image F_{k'} of each layer, until the initial-layer repaired image F_0 is obtained, carried out as follows:
12a) since the down-sampled virtual view B_S of layer S contains no holes, the repaired image F_S of layer S is set equal to B_S, i.e., F_S = B_S;
12b) the repaired image F_S of layer S is up-sampled by linear interpolation, obtaining the expanded virtual view E_{S-1} with the same resolution as layer S-1, where the pixel value E_{S-1}(p, q) of the point at row p, column q of E_{S-1} is computed as follows:
where i' and j' select the 3 × 3 matrix block of the repaired image F_S centered at the point (p, q), and smoothing this matrix block yields the pixel value E_{S-1}(p, q) of the expanded virtual view E_{S-1}; the values of i' and j' are required to be even here so that the addressed point of the repaired image F_S is a valid coordinate point;
12c) the pixels at the holes of the same-layer virtual view B_{S-1} are filled with the pixels of the expanded virtual view E_{S-1}, obtaining the repaired image F_{S-1} of layer S-1, where the pixel value F_{S-1}(p, q) of the point at row p, column q of F_{S-1} is computed by the following formula:
12d) following step 12b), the repaired image F_{k'} of any layer k' is up-sampled, obtaining the expanded virtual view E_{k'-1} with the same resolution as layer k'-1; then, following step 12c), the pixels at the holes of the same-layer virtual view B_{k'-1} are filled with the pixels of E_{k'-1}, obtaining the repaired image F_{k'-1} of layer k'-1; the value of k' decreases one by one from S-1 to 0, that is, starting from layer S-1 and proceeding upward layer by layer, steps 12b) and 12c) are repeated, successively obtaining the repaired images F_{S-1}, F_{S-2}, ..., F_{k'}, ..., F_0 of each layer, the initial-layer repaired image F_0 being the final free-viewpoint image.
CN201610334492.7A 2016-05-19 2016-05-19 Introduce the free view-point image combining method of color correction Active CN106060509B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610334492.7A CN106060509B (en) 2016-05-19 2016-05-19 Introduce the free view-point image combining method of color correction


Publications (2)

Publication Number Publication Date
CN106060509A true CN106060509A (en) 2016-10-26
CN106060509B CN106060509B (en) 2018-03-13

Family

ID=57177160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610334492.7A Active CN106060509B (en) 2016-05-19 2016-05-19 Introduce the free view-point image combining method of color correction

Country Status (1)

Country Link
CN (1) CN106060509B (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111729283B (en) * 2020-06-19 2021-07-06 杭州赛鲁班网络科技有限公司 Training system and method based on mixed reality technology

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102239506A (en) * 2008-10-02 2011-11-09 弗兰霍菲尔运输应用研究公司 Intermediate view synthesis and multi-view data signal extraction
US20140002595A1 (en) * 2012-06-29 2014-01-02 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Apparatus, system and method for foreground biased depth map refinement method for dibr view synthesis
CN103714573A (en) * 2013-12-16 2014-04-09 华为技术有限公司 Virtual view generating method and virtual view generating device
CN104661014A (en) * 2015-01-29 2015-05-27 四川虹微技术有限公司 Space-time combined cavity filling method
CN104809719A (en) * 2015-04-01 2015-07-29 华南理工大学 Virtual view synthesis method based on homographic matrix partition


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
余思文 (Yu Siwen): "Research on virtual viewpoint rendering technology for free-viewpoint ***", China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series *
钱健 (Qian Jian): "Research on rendering technology in free-viewpoint video ***", China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109194888A (en) * 2018-11-12 2019-01-11 北京大学深圳研究生院 A kind of DIBR free view-point synthetic method for low quality depth map
CN109194888B (en) * 2018-11-12 2020-11-27 北京大学深圳研究生院 DIBR free viewpoint synthesis method for low-quality depth map
CN109840912A (en) * 2019-01-02 2019-06-04 厦门美图之家科技有限公司 The modification method of abnormal pixel and equipment is calculated in a kind of image
CN109840912B (en) * 2019-01-02 2021-05-04 厦门美图之家科技有限公司 Method for correcting abnormal pixels in image and computing equipment
CN112116602A (en) * 2020-08-31 2020-12-22 北京的卢深视科技有限公司 Depth map repairing method and device and readable storage medium
CN112330545A (en) * 2020-09-08 2021-02-05 中兴通讯股份有限公司 Hole filling method, small region removing method, device and medium
CN112330545B (en) * 2020-09-08 2021-10-19 中兴通讯股份有限公司 Hole filling method, small region removing method, device and medium
CN113421315A (en) * 2021-06-24 2021-09-21 河海大学 Panoramic image hole filling method based on view scaling
CN113421315B (en) * 2021-06-24 2022-11-11 河海大学 Panoramic image hole filling method based on view zooming

Also Published As

Publication number Publication date
CN106060509B (en) 2018-03-13

Similar Documents

Publication Publication Date Title
CN106060509B (en) Free-viewpoint image synthesis method introducing color correction
CN103400409B (en) 3D coverage visualization method based on fast camera pose estimation
CN102592275B (en) Virtual viewpoint rendering method
CN101902657B (en) Method for generating virtual multi-viewpoint images based on depth image layering
CN101400001B (en) Generation method and system for video frame depth chart
US7260274B2 (en) Techniques and systems for developing high-resolution imagery
CN106447725B (en) Spatial target posture method of estimation based on the matching of profile point composite character
CN103581648B (en) Hole-filling method in new-viewpoint rendering
CN111325693B (en) Large-scale panoramic viewpoint synthesis method based on single viewpoint RGB-D image
US20120001902A1 (en) Apparatus and method for bidirectionally inpainting occlusion area based on predicted volume
CN104850847B (en) Image optimization system and method with automatic thin face function
CN103248911A (en) Virtual viewpoint drawing method based on space-time combination in multi-view video
CN104616286A (en) Fast semi-automatic multi-view depth restoring method
CN104079914A (en) Multi-view-point image super-resolution method based on deep information
CN102368826A (en) Real time adaptive generation method from double-viewpoint video to multi-viewpoint video
CN115298708A (en) Multi-view neural human body rendering
CN106791774A (en) Virtual visual point image generating method based on depth map
CN113223070A (en) Depth image enhancement processing method and device
CN106447718B (en) A 2D-to-3D depth estimation method
KR20100091864A (en) Apparatus and method for the automatic segmentation of multiple moving objects from a monocular video sequence
Zhu et al. An improved depth image based virtual view synthesis method for interactive 3D video
CN104661013B (en) A kind of virtual viewpoint rendering method based on spatial weighting
Jang et al. Egocentric scene reconstruction from an omnidirectional video
KR101125061B1 (en) A Method For Transforming 2D Video To 3D Video By Using LDI Method
CN105791795A (en) Three-dimensional image processing method and device and three-dimensional video display device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant