CN106303660A - Method for filling stained regions in a video - Google Patents

Method for filling stained regions in a video Download PDF

Info

Publication number
CN106303660A
CN106303660A CN201610751078.6A CN201610751078A CN 106303660 A CN106303660 A CN 106303660A
Authority
CN
China
Prior art keywords
video
unknown region
point
video block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610751078.6A
Other languages
Chinese (zh)
Inventor
张勇
朱立松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCTV INTERNATIONAL NETWORKS WUXI Co Ltd
Original Assignee
CCTV INTERNATIONAL NETWORKS WUXI Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CCTV INTERNATIONAL NETWORKS WUXI Co Ltd filed Critical CCTV INTERNATIONAL NETWORKS WUXI Co Ltd
Priority to CN201610751078.6A priority Critical patent/CN106303660A/en
Publication of CN106303660A publication Critical patent/CN106303660A/en
Pending legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4318Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of multimedia information technology and discloses a method for filling stained regions in a video, comprising: S1, computing the filling priority of all boundary points between the unknown region and the known region in one picture frame of the video; S2, taking the first video block centred on the point with the highest filling priority, and finding in the known region of the video the second video block most similar to the first video block; S3, filling the first video block based on the second video block; S4, updating the boundary points between the unknown region and the known region in the video; S5, repeating steps S1~S4 until the unknown regions in all picture frames of the video are filled. The method ensures that a viewer of the filled video cannot perceive the border of the filled region, and also eliminates the flicker that would otherwise appear in the filled region.

Description

Method for filling stained regions in a video
Technical field
The invention belongs to the field of multimedia information technology, and in particular relates to a method for filling stained regions in a video.
Background technology
Stains in a video are usually caused by inserted information, most commonly copyright information inserted by the rights holder. Fig. 1 shows a frame captured from such a video, in which three pieces of inserted information, i.e. three stains, can be seen: in the upper left corner, "CCTV6" (the logo of the sixth channel, the movie channel, of China Central Television); in the upper right corner, the "CNTV HD" logo of the high-definition channel of China Network Television; and in the lower right corner, the M-shaped logo of the movie channel.
At present there are many methods for dealing with stained video. For example, the logo region can be blurred or filled with a mosaic; although this prevents the user from seeing the original logo clearly, it greatly reduces the viewing value of the video. As another example, picture texture synthesis can be applied frame by frame: the picture near the region to be filled is first sampled to obtain the texture information of its neighbourhood, and a texture patch for the region to be filled is then synthesised. However, this can only produce pixels representing texture inside the region to be filled and cannot extend object edge structures into it; moreover, the method ignores the correlation between video frames, so the repaired video flickers noticeably in the filled region, which is unpleasant for viewers. Further, the correlation between video frames can be exploited to perform global or local pixel-block motion estimation and to fill the region to be filled in the current frame with pixel blocks from preceding or following frames. In general, however, a fixed scene in a video may last several seconds, and when one fixed scene ends and another fixed scene begins, this method has difficulty producing a satisfactory repair; such motion estimation and compensation is therefore inapplicable in many circumstances.
Summary of the invention
In view of the above problems, the object of the present invention is to provide a method for filling stained regions in a video, which effectively eliminates the flicker that appears in the stained region after filling with existing methods.
The technical scheme provided by the present invention is as follows:
A method for filling stained regions in a video, comprising:
S1: computing the filling priority of all boundary points between the unknown region and the known region in one picture frame of the video;
S2: taking the first video block centred on the point with the highest filling priority, and finding in the known region of the video the second video block most similar to the first video block;
S3: filling the first video block based on the second video block;
S4: updating the boundary points between all unknown regions and the known region in the picture frames of the video;
S5: repeating steps S1~S4 until the unknown regions in all picture frames of the video are filled.
In this technical scheme it should be understood that a video consists of picture frames, and that the unknown region is the stained region of a picture frame, i.e. the region to be filled. As can be seen from the above technical scheme, the filling process takes full account of the similarity and continuity between video frames, which effectively eliminates the flicker that appears in the stained region after filling with existing methods.
Further preferably, step S1 specifically comprises:
S11: initialising the confidence C(p) of every point p in the video;
S12: computing the confidence C(p) of all boundary points between the known region and the unknown region in the video;
S13: computing the boundary isophote intensity D(p) of all boundary points between the known region and the unknown region in the video;
S14: computing the filling priority P(p) of each boundary point from the confidence C(p) of step S12 and the boundary isophote intensity D(p) of step S13.
Further preferably, in the fill method:
step S11 specifically comprises: if point p belongs to the known region of the video, then C(p) = 1; if point p belongs to the unknown region of the video, then C(p) = 0;
step S12 specifically comprises: computing the confidence C(p) of all boundary points between the known region and the unknown region in the video with formula (1):
$$C(p) = \frac{\sum_{q \in \Gamma_p \cap (V - \Omega_v)} C(q)}{|\Gamma_p|} \qquad (1)$$
where |Γp| denotes the volume of the video block Γp, Γp denotes the video block centred on point p, (V − Ωv) denotes the known region of all picture frames in the video, V denotes the video, Ωv denotes the unknown region of all picture frames in the video, and q denotes a known pixel in the video block centred on point p.
Further preferably, step S13 specifically comprises:
S131: computing with formula (2) the isophote vector ∇Ip⊥ of every point p on the boundary between the known region and the unknown region in the video:
$$\nabla I_p^{\perp} = \left(-\frac{\partial S_f}{\partial y},\ \frac{\partial S_f}{\partial x}\right) / K \qquad (2)$$
where Sf denotes the f-th picture frame of the video, in which point p is located, and K denotes the grey-level value of the video coding;
S132: computing with formula (3) the boundary normal vector np of every point p on the boundary between the known region and the unknown region in the video, and normalising it to the unit vector n'p:
$$n_p = \left(\frac{\partial M_f}{\partial x},\ \frac{\partial M_f}{\partial y}\right) \qquad (3)$$
where Mf is a two-dimensional binary matrix of the same size as the picture frame Sf, in which the known region is represented by 0 and the unknown region by 1;
S133: computing the boundary isophote intensity D(p) with formula (4):
$$D(p) = \mathrm{Innerproduct}\left(\nabla I_p^{\perp},\ n'_p\right) \qquad (4)$$
where Innerproduct denotes the inner product of the isophote vector ∇Ip⊥ and the unit boundary normal n'p.
Further preferably, step S14 specifically comprises: P(p) = C(p)·D(p).
Further preferably, step S2 specifically comprises:
S21: taking the point p with the highest filling priority from step S1, and taking the first video block Γp centred on point p;
S22: marking with λp the known pixels of the first video block Γp that lie in the known region;
S23: finding in the known region of the video, with formula (5), the second video block Bb with the smallest Euclidean distance to the known pixels λp of the first video block Γp:
$$\mathrm{similar} = \sum_{\lambda_p} \left(\Gamma_p - B_b\right)^2 \qquad (5)$$
Further preferably, step S3 specifically comprises: filling the unknown region of the first video block Γp with the pixel values of the second video block Bb at the positions corresponding to that unknown region.
The method for filling stained regions in a video provided by the present invention has the following beneficial effects:
The fill method provided by the present invention makes comprehensive use of picture restoration techniques and takes full account of the similarity and continuity between frames (the picture frames that make up the video). Since adjacent frames differ only slightly, the content filled into adjacent frames also satisfies this similarity and continuity while preserving the slight differences between frames. This ensures that a viewer of the filled video cannot perceive the border of the filled region (the unknown region above), and also eliminates the flicker that would otherwise appear in the filled region. Moreover, the content filled into the stained region (unknown region) of each picture frame looks natural and is difficult for the human eye to detect, so the video remains pleasant to watch and the repair quality is greatly improved.
Brief description of the drawings
Fig. 1 is a schematic diagram of a picture frame of a video having stained regions;
Fig. 2 is a schematic diagram of the columnar hole formed by the regions to be filled in a video according to the present invention;
Fig. 3 is a schematic flow chart of the method for filling stained regions in a video according to the present invention.
Detailed description of the invention
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. It should be noted that the specific details described below merely illustrate the use of the present invention and do not constitute a limitation of it. Any modification or variation made according to the teaching of the present invention also falls within the scope of the invention.
As is well known, a video consists of picture frames. In the present invention it is assumed that the relative position of the stained region (unknown region) within each picture frame is fixed (e.g. a channel logo occupies a fixed position throughout the video) and that the stained region in each picture frame is circular, so that the stained regions of all picture frames together form a columnar hole to be filled, as shown in Fig. 2, where the white circular area is the stained region to be filled. Filling the video then means computing pixel values for this region to be filled, so that the filled video looks natural and smooth.
For a video, denote its f-th picture frame by Sf, the known region of this frame by Φf, the unknown region by Ωf, and the boundary of the unknown region by δΩf. If the video has N frames in total, the video is denoted V = {Sf, f = 1, ..., N}, all known regions of the video are denoted Φv = {Φf, f = 1, ..., N}, all unknown regions are denoted Ωv = {Ωf, f = 1, ..., N}, and all unknown-region boundaries are denoted δΩv = {δΩf, f = 1, ..., N}.
In addition, let point p be a pixel of the f-th frame with coordinates (x, y, f). Within this frame, the image block in a square neighbourhood centred on p is denoted Ψp, and the video block in a square neighbourhood centred on p is denoted Γp = {Ψ(x,y,f−Δf), ..., Ψ(x,y,f), ..., Ψ(x,y,f+Δf)}. n'p is the unit boundary normal at p, of length 1; in particular, if p is not on the unknown-region boundary δΩf, then n'p = 0. On this basis, the method for filling stained regions in a video provided by the present invention is described in detail below:
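The spatio-temporal block Γp defined above can be made concrete with a short NumPy sketch (an illustration, not the patent's implementation; the block radius r and frame span Δf are assumed parameters, and border handling is ignored):

```python
import numpy as np

def extract_block(video, p, r=4, df=1):
    """Return the video block Γ_p: a (2*df+1)-frame, (2*r+1) x (2*r+1)
    spatial window of `video` (shape: frames x height x width) centred
    on p = (x, y, f). Assumes the window lies fully inside the video."""
    x, y, f = p
    return video[f - df:f + df + 1, y - r:y + r + 1, x - r:x + r + 1]

video = np.arange(5 * 10 * 10, dtype=float).reshape(5, 10, 10)
block = extract_block(video, (5, 5, 2), r=2, df=1)
print(block.shape)  # (3, 5, 5): Δf = 1 frame either side, 5x5 spatial window
```

The image block Ψp of a single frame is simply the Δf = 0 case of the same slicing.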
Fig. 3 shows the schematic flow chart of the method for filling stained regions in a video provided by the present invention. As can be seen from the figure, the fill method comprises: S1, computing the filling priority of all boundary points between the unknown region and the known region in one picture frame of the video; S2, taking the first video block centred on the point with the highest filling priority, and finding in the known region of the video the second video block most similar to the first video block; S3, filling the first video block based on the second video block; S4, updating all boundary points between the unknown region and the known region in the video; S5, repeating steps S1~S4 until the unknown regions in all picture frames of the video are filled.
Specifically, step S1 comprises:
S11: initialising the confidence C(p) of every point p (p ∈ V) in the video. Specifically, if point p belongs to the known region of the video, i.e. p ∈ V − Ωv, then C(p) = 1; if point p belongs to the unknown region of the video, i.e. p ∈ Ωv, then C(p) = 0.
S12: computing the confidence C(p) of all boundary points (p ∈ δΩv) between the known region and the unknown region in the video.
This computation specifically comprises: computing the confidence C(p) of all boundary points between the known region and the unknown region in the picture frame with formula (1):
$$C(p) = \frac{\sum_{q \in \Gamma_p \cap (V - \Omega_v)} C(q)}{|\Gamma_p|} \qquad (1)$$
where |Γp| denotes the volume of the video block Γp (specifically, the number of pixels contained in the video block Γp), Γp denotes the video block centred on point p, (V − Ωv) denotes the known region of all picture frames in the video, V denotes the video, Ωv denotes the unknown region of all picture frames in the video, and q denotes a known pixel in the video block centred on point p. From formula (1) it can be seen that the confidence of point p equals the sum of the confidences of all known pixels in the small neighbouring video block Γp centred on p, divided by the volume |Γp| of that block.
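Formula (1) amounts to averaging the confidences of the known pixels over the block volume. A minimal NumPy sketch (illustrative only; the array layout, block radius r and frame span Δf are assumptions, not taken from the patent):

```python
import numpy as np

def confidence_at(conf, known, p, r=4, df=1):
    """C(p) per formula (1): sum of C(q) over the known pixels q inside
    the block Γ_p, divided by the block volume |Γ_p|. `conf` and `known`
    have shape (frames, height, width); `known` is boolean."""
    x, y, f = p
    c = conf[f - df:f + df + 1, y - r:y + r + 1, x - r:x + r + 1]
    k = known[f - df:f + df + 1, y - r:y + r + 1, x - r:x + r + 1]
    return c[k].sum() / c.size

# Toy frame: left half known with confidence 1 (step S11), right half unknown.
conf = np.zeros((1, 8, 8))
known = np.zeros((1, 8, 8), dtype=bool)
known[0, :, :4] = True
conf[known] = 1.0
c = confidence_at(conf, known, (4, 4, 0), r=1, df=0)
print(round(c, 4))  # 3 known pixels of the 9 in the 3x3 block: 0.3333
```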
S13: computing the boundary isophote intensity D(p) of all boundary points between the known region and the unknown region in the video.
Specifically, this computation comprises:
S131: computing with formula (2) the isophote vector ∇Ip⊥ of every point p (p ∈ δΩv) on the boundary between the known region and the unknown region in the video:
$$\nabla I_p^{\perp} = \left(-\frac{\partial S_f}{\partial y},\ \frac{\partial S_f}{\partial x}\right) / K \qquad (2)$$
where Sf denotes the f-th picture frame of the video, in which point p is located, and K denotes the grey-level value of the video coding. That is, in this step the picture frame Sf containing point p is taken and the partial derivatives of Sf in the x and y directions, ∂Sf/∂x and ∂Sf/∂y, are computed. In a specific embodiment, if the video uses 8-bit coding then K takes the value 255, and so on; the value of K is not otherwise limited here, provided K ≥ 1.
S132: computing with formula (3) the boundary normal vector np of every point p (p ∈ δΩv) on the boundary between the known region and the unknown region in the video, and normalising it to the unit vector n'p:
$$n_p = \left(\frac{\partial M_f}{\partial x},\ \frac{\partial M_f}{\partial y}\right) \qquad (3)$$
where Mf is a two-dimensional binary matrix of the same size as the picture frame Sf, in which the known region is represented by 0 and the unknown region by 1. That is, in this step the picture frame Sf containing point p is taken and a two-dimensional binary matrix Mf of the same size as Sf is initialised: if p ∈ Ωf then Mf(p) = 1, otherwise Mf(p) = 0. The partial derivatives of Mf in the x and y directions are then computed to obtain the boundary normal vector np, which is subsequently normalised to the unit vector n'p.
S133: computing the boundary isophote intensity D(p) with formula (4):
$$D(p) = \mathrm{Innerproduct}\left(\nabla I_p^{\perp},\ n'_p\right) \qquad (4)$$
where Innerproduct denotes the inner product of the isophote vector ∇Ip⊥ and the unit boundary normal n'p.
S14: computing the filling priority P(p) of each boundary point from the confidence C(p) of step S12 and the boundary isophote intensity D(p) of step S13; specifically, P(p) = C(p)·D(p).
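Steps S131 to S133 can be sketched with NumPy gradients (illustrative only: `np.gradient` stands in for the unspecified partial derivatives, and the absolute value of the inner product is taken here, as is common in exemplar-based inpainting, whereas formula (4) writes the bare inner product):

```python
import numpy as np

def data_term(frame, mask, p, K=255.0):
    """D(p): inner product of the isophote (formula (2)) with the unit
    boundary normal from the binary mask M_f (formula (3)), per formula
    (4). abs() is an assumption borrowed from the inpainting literature."""
    x, y = p
    gy, gx = np.gradient(frame.astype(float))        # ∂S_f/∂y, ∂S_f/∂x
    isophote = np.array([-gy[y, x], gx[y, x]]) / K   # formula (2), K = grey levels
    my, mx = np.gradient(mask.astype(float))
    n = np.array([mx[y, x], my[y, x]])               # formula (3)
    norm = np.linalg.norm(n)
    n_unit = n / norm if norm > 0 else n             # n'_p = 0 off the boundary
    return float(abs(isophote @ n_unit))             # formula (4)

frame = 10.0 * np.arange(8)[:, None] * np.ones((1, 8))  # vertical intensity ramp
mask = np.zeros((8, 8)); mask[:, 4:] = 1                # unknown right half
d = data_term(frame, mask, (3, 4))
print(round(d, 6))  # isophote runs straight into the boundary: 10/255 ≈ 0.039216
```

The priority of step S14 is then simply `P = confidence_value * d`.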
Step S2 specifically comprises:
S21: taking the point p with the highest filling priority from step S1, and taking the first video block Γp in a square neighbourhood centred on point p;
S22: marking with λp the known pixels of the first video block Γp that lie in the known region;
S23: finding in the known region (V − Ωv) of the video, with formula (5), the second video block Bb with the smallest Euclidean distance to the known pixels λp of the first video block Γp:
$$\mathrm{similar} = \sum_{\lambda_p} \left(\Gamma_p - B_b\right)^2 \qquad (5)$$
where b denotes the centre point of the second video block Bb.
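Formula (5) is a sum of squared differences restricted to the known pixels λp of Γp. A brute-force search sketch in NumPy (illustrative only; the patent does not prescribe a search strategy, and all names and parameters are assumptions):

```python
import numpy as np

def best_match(video, known, p, r, df):
    """Find the fully known block B_b minimising the SSD of formula (5),
    computed only over the known pixels λ_p of the target block Γ_p."""
    x, y, f = p
    target = video[f-df:f+df+1, y-r:y+r+1, x-r:x+r+1]
    lam = known[f-df:f+df+1, y-r:y+r+1, x-r:x+r+1]   # λ_p marker
    nf, nh, nw = video.shape
    best, best_ssd = None, np.inf
    for ff in range(df, nf - df):
        for yy in range(r, nh - r):
            for xx in range(r, nw - r):
                kblk = known[ff-df:ff+df+1, yy-r:yy+r+1, xx-r:xx+r+1]
                if not kblk.all():                    # candidate must be fully known
                    continue
                cand = video[ff-df:ff+df+1, yy-r:yy+r+1, xx-r:xx+r+1]
                ssd = ((target[lam] - cand[lam]) ** 2).sum()   # formula (5)
                if ssd < best_ssd:
                    best, best_ssd = (xx, yy, ff), ssd
    return best, best_ssd

video = np.zeros((3, 8, 8))
video[:] = np.arange(8)[None, :, None]     # pixel value = its row index y
known = np.ones((3, 8, 8), dtype=bool)
known[:, :, 4:6] = False                   # stained columns x = 4, 5
(bx, by, bf), ssd = best_match(video, known, (4, 4, 1), r=1, df=1)
print(by, ssd)  # a fully known block at the same rows matches exactly: 4 0.0
```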
Step S3 specifically comprises: filling the unknown region of the first video block Γp with the pixel values of the second video block Bb at the positions corresponding to that unknown region.
Afterwards, in step S4, the confidence of each pixel of the video filled in step S3 is updated by the method of step S11, thereby updating the set of all boundary points in the video; steps S1~S4 are then repeated until all regions of the video frames are filled.
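Steps S3 and S4 (copying matched pixels into the hole, then updating the boundary and the confidences) can be sketched as follows (illustrative only; propagating C(p) as a constant `cp` to the filled pixels is the usual exemplar-inpainting variant assumed here, whereas the patent re-derives confidence via step S11):

```python
import numpy as np

def fill_block(video, known, conf, p, b, cp, r=1, df=1):
    """S3: copy pixels of the matched block B_b (centred on b) into the
    unknown pixels of Γ_p (centred on p). S4: mark those pixels known
    and give them confidence cp, so the fill front moves inward."""
    x, y, f = p
    bx, by, bf = b
    tgt = (slice(f - df, f + df + 1), slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    src = (slice(bf - df, bf + df + 1), slice(by - r, by + r + 1), slice(bx - r, bx + r + 1))
    hole = ~known[tgt]                     # unknown pixels of Γ_p
    video[tgt][hole] = video[src][hole]    # S3: fill from the matching block
    conf[tgt][hole] = cp                   # filled pixels inherit confidence
    known[tgt][hole] = True                # S4: boundary update

video = np.zeros((3, 8, 8))
video[:] = np.arange(8)[None, :, None]     # ground truth: pixel value = y
known = np.ones((3, 8, 8), dtype=bool)
known[:, :, 4:6] = False                   # stained columns x = 4, 5
video[~known] = -1.0                       # stained pixels carry a dummy value
conf = known.astype(float)
fill_block(video, known, conf, (4, 4, 1), (1, 4, 1), cp=0.5)
print(video[1, 4, 4])  # filled from the matching block: 4.0
```

Iterating priority computation, matching and filling until `known.all()` holds realises the overall loop S1~S5.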
The present invention has been described in detail above through the implementation scenarios of each process, as will be understood by those skilled in the art.

Claims (7)

1. A method for filling stained regions in a video, characterised in that the fill method comprises:
S1: computing the filling priority of all boundary points between the unknown region and the known region in the picture frames of the video;
S2: taking the first video block centred on the point with the highest filling priority, and finding in the known region of the video the second video block most similar to said first video block;
S3: filling the first video block based on the second video block;
S4: updating the boundary points between all unknown regions and the known region in the picture frames of the video;
S5: repeating steps S1~S4 until the unknown regions in all picture frames of the video are filled.
2. The fill method as claimed in claim 1, characterised in that step S1 specifically comprises:
S11: initialising the confidence C(p) of every point p in the video;
S12: computing the confidence C(p) of all boundary points between the known region and the unknown region in the video;
S13: computing the boundary isophote intensity D(p) of all boundary points between the known region and the unknown region in the video;
S14: computing the filling priority P(p) of each boundary point from the confidence C(p) of step S12 and the boundary isophote intensity D(p) of step S13.
3. The fill method as claimed in claim 2, characterised in that:
step S11 specifically comprises: if point p belongs to the known region of the video, then C(p) = 1; if point p belongs to the unknown region of the video, then C(p) = 0;
step S12 specifically comprises: computing the confidence C(p) of all boundary points between the known region and the unknown region in the video with formula (1):
$$C(p) = \frac{\sum_{q \in \Gamma_p \cap (V - \Omega_v)} C(q)}{|\Gamma_p|} \qquad (1)$$
where |Γp| denotes the volume of the video block Γp, Γp denotes the video block centred on point p, and (V − Ωv) denotes all known regions of the video.
4. The fill method as claimed in claim 2, characterised in that step S13 specifically comprises:
S131: computing with formula (2) the isophote vector ∇Ip⊥ of every point p on the boundary between the known region and the unknown region in the video:
$$\nabla I_p^{\perp} = \left(-\frac{\partial S_f}{\partial y},\ \frac{\partial S_f}{\partial x}\right) / K \qquad (2)$$
where Sf denotes the f-th picture frame of the video, in which point p is located, and K denotes the grey-level value of the video coding;
S132: computing with formula (3) the boundary normal vector np of every point p on the boundary between the known region and the unknown region in the video, and normalising it to the unit vector n'p:
$$n_p = \left(\frac{\partial M_f}{\partial x},\ \frac{\partial M_f}{\partial y}\right) \qquad (3)$$
where Mf is a two-dimensional binary matrix of the same size as the picture frame Sf, in which the known region is represented by 0 and the unknown region by 1;
S133: computing the boundary isophote intensity D(p) with formula (4):
$$D(p) = \mathrm{Innerproduct}\left(\nabla I_p^{\perp},\ n'_p\right) \qquad (4)$$
where Innerproduct denotes the inner product of the isophote vector ∇Ip⊥ and the unit boundary normal n'p.
5. The fill method as claimed in claim 2, characterised in that step S14 specifically comprises: P(p) = C(p)·D(p).
6. The fill method as claimed in any one of claims 1-5, characterised in that step S2 specifically comprises:
S21: taking the point p with the highest filling priority from step S1, and taking the first video block Γp centred on point p;
S22: marking with λp the known pixels of the first video block Γp that lie in the known region;
S23: finding in the known region of the video, with formula (5), the second video block Bb with the smallest Euclidean distance to the known pixels λp of the first video block Γp:
$$\mathrm{similar} = \sum_{\lambda_p} \left(\Gamma_p - B_b\right)^2 \qquad (5)$$
7. The fill method as claimed in claim 6, characterised in that step S3 specifically comprises: filling the unknown region of the first video block Γp with the pixel values of the second video block Bb at the positions corresponding to that unknown region.
CN201610751078.6A 2016-08-26 2016-08-26 Method for filling stained regions in a video Pending CN106303660A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610751078.6A CN106303660A (en) 2016-08-26 2016-08-26 Method for filling stained regions in a video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610751078.6A CN106303660A (en) 2016-08-26 2016-08-26 Method for filling stained regions in a video

Publications (1)

Publication Number Publication Date
CN106303660A true CN106303660A (en) 2017-01-04

Family

ID=57677527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610751078.6A Pending CN106303660A (en) 2016-08-26 2016-08-26 Method for filling stained regions in a video

Country Status (1)

Country Link
CN (1) CN106303660A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657591A (en) * 2017-09-05 2018-02-02 维沃移动通信有限公司 An image processing method and mobile terminal
CN109783658A (en) * 2019-02-19 2019-05-21 苏州科达科技股份有限公司 Image processing method, device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324102A (en) * 2011-10-08 2012-01-18 北京航空航天大学 Method for automatically filling structure information and texture information of hole area of image scene
CN102800077A (en) * 2012-07-20 2012-11-28 西安电子科技大学 Bayes non-local mean image restoration method
CN102999887A (en) * 2012-11-12 2013-03-27 中国科学院研究生院 Sample based image repairing method
CN103150711A (en) * 2013-03-28 2013-06-12 山东大学 Open computing language (OpenCL)-based image repair method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324102A (en) * 2011-10-08 2012-01-18 北京航空航天大学 Method for automatically filling structure information and texture information of hole area of image scene
CN102800077A (en) * 2012-07-20 2012-11-28 西安电子科技大学 Bayes non-local mean image restoration method
CN102999887A (en) * 2012-11-12 2013-03-27 中国科学院研究生院 Sample based image repairing method
CN103150711A (en) * 2013-03-28 2013-06-12 山东大学 Open computing language (OpenCL)-based image repair method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANTONIO CRIMINISI ET AL.: "Region Filling and Object Removal by Exemplar-Based Image Inpainting", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657591A (en) * 2017-09-05 2018-02-02 维沃移动通信有限公司 An image processing method and mobile terminal
CN109783658A (en) * 2019-02-19 2019-05-21 苏州科达科技股份有限公司 Image processing method, device and storage medium
CN109783658B (en) * 2019-02-19 2020-12-29 苏州科达科技股份有限公司 Image processing method, device and storage medium

Similar Documents

Publication Publication Date Title
US9900505B2 (en) Panoramic video from unstructured camera arrays with globally consistent parallax removal
US8520085B2 (en) Method of full frame video stabilization
US8243805B2 (en) Video completion by motion field transfer
EP2862356B1 (en) Method and apparatus for fusion of images
WO2018103244A1 (en) Live streaming video processing method, device, and electronic apparatus
CN101551904B (en) Image synthesis method and apparatus based on mixed gradient field and mixed boundary condition
JP4754364B2 (en) Image overlay device
CN103702098B (en) Three viewpoint three-dimensional video-frequency depth extraction methods of constraint are combined in a kind of time-space domain
US20120180084A1 (en) Method and Apparatus for Video Insertion
US10834379B2 (en) 2D-to-3D video frame conversion
CN106780303A (en) A kind of image split-joint method based on local registration
US9578312B2 (en) Method of integrating binocular stereo video scenes with maintaining time consistency
CN106162146A (en) Automatically identify and the method and system of playing panoramic video
US20100246954A1 (en) Apparatus, method, and medium of generating visual attention map
CN102884799A (en) Comfort noise and film grain processing for 3 dimensional video
US10007970B2 (en) Image up-sampling with relative edge growth rate priors
CN106060509A (en) Free viewpoint image synthetic method introducing color correction
CN106303660A (en) The fill method of insult area in a kind of video
CN104796623B (en) Splicing video based on pyramid Block-matching and functional optimization goes structural deviation method
CN112954443A (en) Panoramic video playing method and device, computer equipment and storage medium
CN101945299B (en) Camera-equipment-array based dynamic scene depth restoring method
CN105791795A (en) Three-dimensional image processing method and device and three-dimensional video display device
US11216662B2 (en) Efficient transmission of video over low bandwidth channels
JP5906165B2 (en) Virtual viewpoint image composition device, virtual viewpoint image composition method, and virtual viewpoint image composition program
Guo et al. Feature-based motion compensated interpolation for frame rate up-conversion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170104

RJ01 Rejection of invention patent application after publication