CN107194867A - A CUDA-based image matting and compositing method - Google Patents

A CUDA-based image matting and compositing method

Info

Publication number
CN107194867A
CN107194867A · CN201710336870.XA · CN 107194867 A
Authority
CN
China
Prior art keywords
matte
point
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710336870.XA
Other languages
Chinese (zh)
Inventor
姬庆庆
陈楠
肖创柏
高畅
杨祎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201710336870.XA priority Critical patent/CN107194867A/en
Publication of CN107194867A publication Critical patent/CN107194867A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/14Transformations for image registration, e.g. adjusting or mapping for alignment of images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a CUDA-based image matting and compositing method. First, a color-balance operator applies spill suppression to the non-green regions of the foreground image; then a base matte operator generates a matte and keys out the solid-color background; next, a detail matte operator refines the base matte. After these steps the final matte is formed, the foreground is composited with the background image, and the composite image is output. The system uses the GPU to accelerate the matting and compositing stages, further increasing processing speed. Experimental results show that the method achieves real-time video matting at a frame rate of 10 frames per second and better satisfies the real-time requirements of video keying than a traditional CPU implementation.

Description

A CUDA-based image matting and compositing method
Technical field
The invention discloses a CUDA-based image matting and compositing method, belonging to the field of digital imaging technology.
Background technology
With the continuing development and maturation of digital imaging technology, the definition of film and television productions keeps improving, and live broadcasting is also moving toward high-definition and even ultra-high-definition quality. Accordingly, the requirements on matting efficiency keep rising. How to increase matting speed while preserving the original image quality of the video and a good matting result is a current research focus. Improving the computational efficiency of the algorithm and raising its speed on existing hardware therefore has important practical value for industries such as film and television production and live broadcasting.
As the technology has matured, GPUs are now widely used in general-purpose parallel computing. Designing parallel computations for the GPU and fully exploiting its parallel computing power not only meets ever-increasing pixel-processing demands, but also saves development cost by taking advantage of mature PC hardware platforms. PC-based software is also easy to modify and upgrade, is easy for users to learn, and can generate greater production value.
The domestic film and television production industry has developed rapidly in recent years, so the demand for related technology keeps growing; domestic scholars have successively joined research in this area and achieved certain results.
One study took the Motion Control system and the post-production compositing software After Effects as a case, mainly investigating the application of this high-tech interactive camera-control system in film shooting and production, and analyzing how it solves shots that real shooting conditions do not permit, or that are too dangerous or too costly. [] The study describes the state of film special effects at home and abroad, describes the application of the Motion Control system in film effects, briefly introduces matting as related theory, and analyzes the bottleneck problems of domestic matting technology. It is very helpful for understanding the current state of film special-effects production at home and abroad and the problems of domestic technology, but it remains theoretical, does not put the related theory into practice, and therefore has certain limitations.
Another study applied CUDA technology to computational physics. It first describes the parallel architecture of the graphics processor and the programming model associated with CUDA, and then uses the GPU to accelerate the computation of several environment-related electromagnetic scattering problems. [] That study examines the development of CUDA and the parallel architecture of graphics processors in depth and introduces them in great detail, so it offers useful reference and inspiration for the present work. However, it applies CUDA to electromagnetic scattering problems in computational physics rather than to matting, and matting differs in certain respects from electromagnetic scattering computation. The present work draws on its theoretical study of CUDA technology and applies it to matting.
Deng Shuan, Li Leimin and colleagues addressed the difficulty that traditional video matting algorithms in complex scenes have in segmenting a target object whose texture is similar to the background or whose boundary is blurred, and proposed a real-time video matting algorithm that fuses information from a vision sensor and a lidar. The algorithm obtains depth information for the region of interest from raw lidar point-cloud data and fuses it, as a prior, into an improved spectral matting algorithm; it builds a depth-aware Laplacian matrix for the region of interest, obtains the matting result through optimized iterations of a clustering algorithm, and post-processes the result with a guided filter. Experiments show that the algorithm improves efficiency while achieving good results. It has certain advantages for matting video in complex environments, but it places specific requirements on the hardware and therefore has certain limitations.
Chen Shifu, Zhang Shu, Zhang Le and colleagues designed a GPU-based image processing method. Its main steps are: first, obtain the image to be processed and copy one of its layers from system memory to the video memory of the graphics processing unit; then start an image-processing filter and, while the filter parameters are being adjusted, compute a thumbnail of the currently visible region of the layer in real time to obtain the adjustment result; finally, blend the thumbnails of all layers of the image from bottom to top, render them into a canvas, and display the canvas in a window. The advantage of this method is that it reduces the system's video-memory usage to a certain extent and can display the rendering result in real time; its limitation is that the improvement in processing speed is not very significant.
Summary of the invention
The technical solution adopted by the present invention is a CUDA-based image matting and compositing method. First, a color-balance operator applies spill suppression to the non-green regions of the foreground image; then a base matte operator generates a matte and keys out the solid-color background; next, a detail matte operator refines the base matte. After these steps the final matte is formed, the foreground is composited with the background image, and the composite image is output.
The GPU is used to accelerate the matting and compositing stages, further increasing processing speed. Experimental results show that the method achieves real-time video matting at a frame rate of 10 frames per second and better satisfies the real-time requirements of video keying than a traditional CPU implementation.
Spill-suppression (color-balance) algorithm:
The spill-suppression algorithm acts on points of the foreground image with G > R, i.e. greenish points, and sets nG = R, thereby reducing the G value at those points. The function is expressed as:
nG = F(G, R) = R if G > R; G if G ≤ R    (1)
The spill-suppression algorithm is then improved: for G < (B+R)/2, i.e. region 3, the pixel is not green and is kept unchanged; for G > (B+R)/2, i.e. regions 1 and 2, the pixel's G value is too high and is balanced by taking the new G = (R+B)/2, which amounts to replacing R in formula (1) with (R+B)/2. The function expression is:
nG = F(G, B, R) = (R+B)/2 if G > (R+B)/2; G if G ≤ (R+B)/2    (2)
A further improvement introduces a parameter a with value range 0-1.
Taking the line B = R as the axis of symmetry, the parameter a determines the size of angle ∠ACD: when a = 1, ∠ACD = 0°; when a = 0, ∠ACD = 90°; i.e. ∠ACD = (1-a)*90°. From this the slope of line AC is known and the equation of line AC is obtained, so region 1 can be distinguished. The parameter a effectively changes the value of F(G, B, R) and thus the size of the retained region.
Region 1: F(G, B, R) = a*R + (1-a)*B;
Region 2: F(G, B, R) = (1-a)*R + a*B;
Region 3: F(G, B, R) = G;
When a = 0.5 this is identical to formula (2). The function expression therefore becomes:
nG = F(G, B, R) = a*R + (1-a)*B in region 1; (1-a)*R + a*B in region 2; G in region 3    (3)
For convenience of calculation, the BR plane above is converted to a BG plane. The abscissa represents the G value (0-255) and the ordinate represents the B value (0-255), representing one cross-section of the RGB color space.
The input (R, G, B) is the current pixel of the source image, and a is the spill-suppression parameter. From the input R value, point C (R, R) can be located in the GB rectangular coordinate system; from a, point A (R*(1-a), 0) and point B ((255-R)*a + R, 255) are obtained, where a is a ratio that keeps point A within the range 0-R and point B within the range R-255. Through these three points two lines can be drawn, Line1: B = (G - R*(1-a))/a and Line2: B = (G - R)/a + R. According to the seven lines Line1, Line2, Line3 (B = R), G = 255, B = 255, G = 0 and B = 0, the plane is divided into three regions:
According to the input G and B values:
1. If point (G, B) lies in region 1, the new G value is nG = R*(1-a) + B*a, a proportional mix of R and B;
2. If point (G, B) lies in region 2, the new G value is nG = R*a + B*(1-a), a proportional mix of R and B;
3. If point (G, B) lies in region 3, the new G value is nG = G, i.e. unchanged (see the illustrative sketch below).
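As an illustration only, the following CUDA device-function sketch implements the per-pixel classification and mixing described above. The function name, the float interface, and the exact assignment of regions 1 and 2 on either side of the line B = R are assumptions made for this sketch and are not taken verbatim from the patent.

    // Illustrative sketch of the parameterized spill-suppression step.
    // Region 3 lies on the "not green" side of the line through
    // A(R*(1-a), 0), C(R, R) and B((255-R)*a + R, 255), i.e. G <= a*B + (1-a)*R;
    // the split of regions 1 and 2 by the line B = R is an assumption.
    __device__ float new_green(float R, float G, float B, float a)
    {
        float boundary = a * B + (1.0f - a) * R;
        if (G <= boundary)
            return G;                          // region 3: pixel kept unchanged
        if (B < R)
            return R * (1.0f - a) + B * a;     // assumed region 1: proportional mix of R and B
        else
            return R * a + B * (1.0f - a);     // assumed region 2: proportional mix of R and B
    }

Note that for a = 0.5 the boundary test reduces to G ≤ (B+R)/2 and both mixes reduce to (R+B)/2, which agrees with formula (2).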
Base matte algorithm
The role of the base matte is to generate a matte used to key out the solid-color background. For an image with a green background, the value of g-b is relatively large in the background and relatively small in the foreground, which makes it possible to distinguish foreground from background. A background sample point (R, G, B) is introduced; the sample point is assumed here to be green, i.e. G > R and G > B. The following cases are then distinguished:
g-b < 0: this pixel is not green and must be kept, i.e. the matte M takes 255 (white);
0 < g-b < G-B: this pixel is green but less green than the sample point; the matte M lies between 0 and 255;
G-B < g-b: this pixel is greener than the sample point and must be removed, i.e. the matte M takes 0 (black).
The function is therefore expressed piecewise over these three cases.
The case of g-r is analyzed in the same way and yields an analogous piecewise expression (a reconstruction of both expressions is given below).
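The original formulas are not reproduced in the text; the following reconstruction is an assumption, in particular the linear ramp used in the intermediate case, and the names M_B and M_R for the per-axis matte values are introduced here only for clarity:

    M_B(g, b) = 255 if g-b < 0; 255*(1 - (g-b)/(G-B)) if 0 ≤ g-b ≤ G-B; 0 if G-B < g-b
    M_R(g, r) = 255 if g-r < 0; 255*(1 - (g-r)/(G-R)) if 0 ≤ g-r ≤ G-R; 0 if G-R < g-r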
For the B axis, take H2 = g; the part above H2, i.e. H2 < b, corresponds to g-b < 0, i.e. M = white. H1 is related to the sample points: taking two sample points highcolor and lowcolor, the sample difference G-B is compare_a = ghigh - bhigh + glow - blow, denoted G-B; H1 = g - k*(G-B), where the parameter k is introduced to correct the deviation of the sample points selected by the user. The part below H1, i.e. b < H1 (k*(G-B) < g-b), corresponds to M = black. The R axis is treated in the same way, taking C2 = r and C1 = g - k*(G-R).
Combining the three cases on each of the two axes gives nine different zones, and the matte value has nine corresponding cases (an illustrative sketch follows).
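As an illustration, the CUDA sketch below evaluates the two per-axis mattes and combines them. The thresholds use k*(G-B) and k*(G-R) as described above, while the linear ramp and the min() combination over the nine zones are assumptions, not the patent's stated rule; all names are illustrative.

    // Illustrative base-matte sketch; GB = ghigh - bhigh + glow - blow and GR the
    // analogous red-channel value from the highcolor/lowcolor sample points,
    // k corrects the deviation of the user-selected samples.
    __device__ float base_matte(float r, float g, float b,
                                float GB, float GR, float k)
    {
        float tB = k * GB, tR = k * GR;
        float mB = (g - b <= 0.0f) ? 255.0f
                 : (g - b >= tB)   ? 0.0f
                 : 255.0f * (1.0f - (g - b) / tB);   // assumed linear ramp
        float mR = (g - r <= 0.0f) ? 255.0f
                 : (g - r >= tR)   ? 0.0f
                 : 255.0f * (1.0f - (g - r) / tR);
        return fminf(mB, mR);                        // assumed combination of the nine zones
    }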
Detail matte algorithm
The role of the detail matte is to generate, from the image luminance and the sample-point luminance, a detail matte that enhances the base matte. The implementation is as follows: take the input values highcolor (rgb), lowcolor (rgb) and the current pixel (rgb); using the luminance formula brightness = 0.29*r + 0.59*g + 0.12*b, obtain the luminance luminhigh corresponding to highcolor, the luminance luminlow corresponding to lowcolor, and the luminance lumin of the current pixel.
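A small sketch of this luminance step for illustration; the function names and the normalization of lumin between luminlow and luminhigh are assumptions, since the text only specifies the luminance formula itself:

    __device__ float luminance(float r, float g, float b)
    {
        return 0.29f * r + 0.59f * g + 0.12f * b;    // brightness formula from the text
    }

    __device__ float detail_weight(float lumin, float luminlow, float luminhigh)
    {
        // Assumed normalization of the current pixel's luminance between the two samples.
        float t = (lumin - luminlow) / fmaxf(luminhigh - luminlow, 1e-6f);
        return fminf(fmaxf(t, 0.0f), 1.0f);
    }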
Brief description of the drawings
Fig. 1 Spill-suppression schematic.
Fig. 2 Improved spill-suppression algorithm.
Fig. 3 Further improvement of the spill-suppression algorithm.
Fig. 4 Base matte algorithm.
Fig. 5 Detail matte algorithm.
Fig. 6 Flow chart of the invention.
Fig. 7 Background image.
Fig. 8 Matting result.
Fig. 9 Compositing result.
Fig. 10 Original foreground image.
Embodiment
The workflow of the matting part is as follows: the video to be processed is input, and the chroma-key color to be matted out is selected in the preview region. The system then performs matting according to the user's selection, i.e. removes the background color. The matting result is then output and composited (superimposed) with the background frame; finally the finished image is output.
Execution steps (an illustrative CUDA sketch follows this list):
S1. The CPU obtains the relevant data and transfers the RGB foreground image data and background image data from RAM to GPU global memory;
S2. The CPU allocates space in GPU global memory for storing the composited image data;
S3. Each block in the GPU starts executing the kernel function;
S4. The data of one foreground-image pixel and one background-image pixel are read from global memory into shared memory in a coalesced manner;
S5. The threads are synchronized to ensure that all data to be processed have been read into shared memory;
S6. The foreground and background image data are read from shared memory;
S7. Matting is performed and the result is written to the global memory that stores the composite image;
S8. The threads are synchronized to ensure that all results have been written to shared memory;
S9. The composite image data stored in shared memory are written to global memory in a coalesced manner;
S10. The CPU transfers the result back from the GPU;
S11. The CPU displays the final image-processing result.
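As an illustration only, the following CUDA sketch mirrors steps S1-S11 for a single frame. The kernel name, the 256-thread block size, the uchar4 pixel layout, and the simple green-difference alpha used in place of the base/detail matte operators are assumptions introduced for this sketch, not the patent's code.

    #include <cuda_runtime.h>

    // One pixel per thread: stage the foreground/background pixels in shared memory
    // (S4-S6), matte and composite (S7), stage the result (S8) and write it back (S9).
    __global__ void matte_composite(const uchar4 *fg, const uchar4 *bg, uchar4 *out, int n)
    {
        __shared__ uchar4 sFg[256];
        __shared__ uchar4 sBg[256];
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        bool active = (i < n);

        if (active) {
            sFg[threadIdx.x] = fg[i];               // S4: coalesced read into shared memory
            sBg[threadIdx.x] = bg[i];
        }
        __syncthreads();                            // S5: all input data staged

        uchar4 o = make_uchar4(0, 0, 0, 255);
        if (active) {
            uchar4 f = sFg[threadIdx.x];            // S6: read from shared memory
            uchar4 b = sBg[threadIdx.x];
            // S7: matting - a simple green-difference alpha stands in for the matte operators.
            float spill = (float)f.y - fmaxf((float)f.x, (float)f.z);
            float alpha = 1.0f - fminf(fmaxf(spill / 255.0f, 0.0f), 1.0f);
            o.x = (unsigned char)(alpha * f.x + (1.0f - alpha) * b.x);
            o.y = (unsigned char)(alpha * f.y + (1.0f - alpha) * b.y);
            o.z = (unsigned char)(alpha * f.z + (1.0f - alpha) * b.z);
            sFg[threadIdx.x] = o;                   // reuse shared memory for the result
        }
        __syncthreads();                            // S8: all results staged
        if (active)
            out[i] = sFg[threadIdx.x];              // S9: coalesced write to global memory
    }

    // Host side (S1-S3, S10): copy the frame to the GPU, launch the kernel, copy back.
    void composite_frame(const uchar4 *hFg, const uchar4 *hBg, uchar4 *hOut, int n)
    {
        uchar4 *dFg, *dBg, *dOut;
        size_t bytes = (size_t)n * sizeof(uchar4);
        cudaMalloc(&dFg, bytes); cudaMalloc(&dBg, bytes); cudaMalloc(&dOut, bytes);  // S2
        cudaMemcpy(dFg, hFg, bytes, cudaMemcpyHostToDevice);                         // S1
        cudaMemcpy(dBg, hBg, bytes, cudaMemcpyHostToDevice);
        matte_composite<<<(n + 255) / 256, 256>>>(dFg, dBg, dOut, n);                // S3
        cudaMemcpy(hOut, dOut, bytes, cudaMemcpyDeviceToHost);                       // S10
        cudaFree(dFg); cudaFree(dBg); cudaFree(dOut);
    }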

Claims (2)

1. A CUDA-based image matting and compositing method, characterized in that: first, a color-balance operator applies spill suppression to the non-green regions of the foreground image; then a base matte operator generates a matte and keys out the solid-color background; next, a detail matte operator refines the base matte; after these steps the final matte is formed, the foreground is composited with the background image, and the composite image is output;
The GPU is used to accelerate the matting and compositing stages, further increasing processing speed; experimental results show that the method achieves real-time video matting at a frame rate of 10 frames per second and better satisfies the real-time requirements of video keying than a traditional CPU implementation;
Spill-suppression (color-balance) algorithm:
The spill-suppression algorithm acts on points of the foreground image with G > R, i.e. greenish points, and sets nG = R, thereby reducing the G value at those points; the function is expressed as:
nG = F(G, R) = R if G > R; G if G ≤ R    (1)
The spill-suppression algorithm is then improved: for G < (B+R)/2, i.e. region 3, the pixel is not green and is kept unchanged; for G > (B+R)/2, i.e. regions 1 and 2, the pixel's G value is too high and is balanced by taking the new G = (R+B)/2, which amounts to replacing R in formula (1) with (R+B)/2; the function expression is:
nG = F(G, B, R) = (R+B)/2 if G > (R+B)/2; G if G ≤ (R+B)/2    (2)
A further improvement introduces a parameter a with value range 0-1;
Taking the line B = R as the axis of symmetry, the parameter a determines the size of angle ∠ACD: when a = 1, ∠ACD = 0°; when a = 0, ∠ACD = 90°; i.e. ∠ACD = (1-a)*90°; from this the slope of line AC is known and the equation of line AC is obtained, so region 1 can be distinguished; the parameter a effectively changes the value of F(G, B, R) and thus the size of the retained region;
Region 1: F(G, B, R) = a*R + (1-a)*B;
Region 2: F(G, B, R) = (1-a)*R + a*B;
Region 3: F(G, B, R) = G;
When a = 0.5 this is identical to formula (2); the function expression therefore becomes:
nG = F(G, B, R) = a*R + (1-a)*B in region 1; (1-a)*R + a*B in region 2; G in region 3    (3)
For convenience of calculation, the BR plane above is converted to a BG plane; the abscissa represents the G value (0-255) and the ordinate represents the B value (0-255), representing one cross-section of the RGB color space;
The input (R, G, B) is the current pixel of the source image, and a is the spill-suppression parameter; from the input R value, point C (R, R) can be located in the GB rectangular coordinate system; from a, point A (R*(1-a), 0) and point B ((255-R)*a + R, 255) are obtained, where a is a ratio that keeps point A within the range 0-R and point B within the range R-255; through these three points two lines can be drawn, Line1: B = (G - R*(1-a))/a and Line2: B = (G - R)/a + R; according to the seven lines Line1, Line2, Line3 (B = R), G = 255, B = 255, G = 0 and B = 0, the plane is divided into three regions:
According to the input G and B values:
1. If point (G, B) lies in region 1, the new G value is nG = R*(1-a) + B*a, a proportional mix of R and B;
2. If point (G, B) lies in region 2, the new G value is nG = R*a + B*(1-a), a proportional mix of R and B;
3. If point (G, B) lies in region 3, the new G value is nG = G, i.e. unchanged;
Base matte algorithm
The role of the base matte is to generate a matte used to key out the solid-color background: for an image with a green background, the value of g-b is relatively large in the background and relatively small in the foreground, which makes it possible to distinguish foreground from background; a background sample point (R, G, B) is introduced, and the sample point is assumed here to be green, i.e. G > R and G > B; the following cases are then distinguished:
g-b < 0: this pixel is not green and must be kept, i.e. the matte M takes 255 (white);
0 < g-b < G-B: this pixel is green but less green than the sample point; the matte M lies between 0 and 255;
G-B < g-b: this pixel is greener than the sample point and must be removed, i.e. the matte M takes 0 (black);
The function is therefore expressed piecewise over these three cases;
The case of g-r is analyzed in the same way and yields an analogous piecewise expression;
For the B axis, take H2 = g; the part above H2, i.e. H2 < b, corresponds to g-b < 0, i.e. M = white; H1 is related to the sample points: taking two sample points highcolor and lowcolor, the sample difference G-B is compare_a = ghigh - bhigh + glow - blow, denoted G-B; H1 = g - k*(G-B), where the parameter k is introduced to correct the deviation of the sample points selected by the user; the part below H1, i.e. b < H1 (k*(G-B) < g-b), corresponds to M = black; the R axis is treated in the same way, taking C2 = r and C1 = g - k*(G-R);
Detail matte algorithm
The role of the detail matte is to generate, from the image luminance and the sample-point luminance, a detail matte that enhances the base matte; the implementation is: take the input values highcolor (rgb), lowcolor (rgb) and the current pixel (rgb); using the luminance formula brightness = 0.29*r + 0.59*g + 0.12*b, obtain the luminance luminhigh corresponding to highcolor, the luminance luminlow corresponding to lowcolor, and the luminance lumin of the current pixel.
2. The CUDA-based image matting and compositing method according to claim 1, characterized in that the workflow of the matting part is: the video to be processed is input, and the chroma-key color to be matted out is selected in the preview region; the system then performs matting according to the user's selection, i.e. removes the background color; the matting result is then output and composited/superimposed with the background frame; finally the finished image is output;
Execution steps:
S1. The CPU obtains the relevant data and transfers the RGB foreground image data and background image data from RAM to GPU global memory;
S2. The CPU allocates space in GPU global memory for storing the composited image data;
S3. Each block in the GPU starts executing the kernel function;
S4. The data of one foreground-image pixel and one background-image pixel are read from global memory into shared memory in a coalesced manner;
S5. The threads are synchronized to ensure that all data to be processed have been read into shared memory;
S6. The foreground and background image data are read from shared memory;
S7. Matting is performed and the result is written to the global memory that stores the composite image;
S8. The threads are synchronized to ensure that all results have been written to shared memory;
S9. The composite image data stored in shared memory are written to global memory in a coalesced manner;
S10. The CPU transfers the result back from the GPU;
S11. The CPU displays the final image-processing result.
CN201710336870.XA 2017-05-14 2017-05-14 A CUDA-based image matting and compositing method Pending CN107194867A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710336870.XA CN107194867A (en) 2017-05-14 2017-05-14 A CUDA-based image matting and compositing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710336870.XA CN107194867A (en) 2017-05-14 2017-05-14 A CUDA-based image matting and compositing method

Publications (1)

Publication Number Publication Date
CN107194867A true CN107194867A (en) 2017-09-22

Family

ID=59873255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710336870.XA Pending CN107194867A (en) 2017-05-14 2017-05-14 A CUDA-based image matting and compositing method

Country Status (1)

Country Link
CN (1) CN107194867A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110335288A (en) * 2018-09-26 2019-10-15 惠州学院 A video foreground object extraction method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101588459A (en) * 2009-06-26 2009-11-25 北京交通大学 A video keying processing method
CN101882311A (en) * 2010-06-08 2010-11-10 中国科学院自动化研究所 Background modeling acceleration method based on CUDA (Compute Unified Device Architecture) technology
US20130064465A1 (en) * 2011-09-12 2013-03-14 Canon Kabushiki Kaisha Image compression and decompression for image matting
CN103366364A (en) * 2013-06-07 2013-10-23 太仓中科信息技术研究院 Color difference-based image matting method
CN103581571A (en) * 2013-11-22 2014-02-12 北京中科大洋科技发展股份有限公司 Video image matting method based on three elements of color
US20140301639A1 (en) * 2013-04-09 2014-10-09 Thomson Licensing Method and apparatus for determining an alpha value

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101588459A (en) * 2009-06-26 2009-11-25 北京交通大学 A video keying processing method
CN101882311A (en) * 2010-06-08 2010-11-10 中国科学院自动化研究所 Background modeling acceleration method based on CUDA (Compute Unified Device Architecture) technology
US20130064465A1 (en) * 2011-09-12 2013-03-14 Canon Kabushiki Kaisha Image compression and decompression for image matting
US20140301639A1 (en) * 2013-04-09 2014-10-09 Thomson Licensing Method and apparatus for determining an alpha value
CN103366364A (en) * 2013-06-07 2013-10-23 太仓中科信息技术研究院 Color difference-based image matting method
CN103581571A (en) * 2013-11-22 2014-02-12 北京中科大洋科技发展股份有限公司 Video image matting method based on three elements of color

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
谭雄素 et al.: "Application of U-EDIT keying technology in micro-course production, taking the micro-course《重阳那些事》as an example", 《数字教育》(Digital Education) *
陈翔: "Design and optimization of a CUDA-based matting algorithm", 《中国优秀硕士学位论文全文数据库 信息科技辑》(China Masters' Theses Full-text Database, Information Science and Technology) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110335288A (en) * 2018-09-26 2019-10-15 惠州学院 A video foreground object extraction method and device
CN110516534A (en) * 2018-09-26 2019-11-29 惠州学院 A video processing method and device based on semantic analysis

Similar Documents

Publication Publication Date Title
Sinha et al. Image-based rendering for scenes with reflections
US8351689B2 (en) Apparatus and method for removing ink lines and segmentation of color regions of a 2-D image for converting 2-D images into stereoscopic 3-D images
US7532752B2 (en) Non-photorealistic sketching
CN106485720A (en) Image processing method and device
Gao et al. Detail preserved single image dehazing algorithm based on airlight refinement
CN102779351A (en) Interactive grayscale image colorizing method based on local linear model optimization
US20140204125A1 (en) Systems and methods for creating photo collages
CN102098528A (en) Method and device for converting planar image into stereoscopic image
CN116997933A (en) Method and system for constructing facial position map
CN102750685A (en) Image processing method and device
CN110992247A (en) Method and system for realizing special effect of straightening hair of portrait photo
Grogan et al. User interaction for image recolouring using ℓ2
CN116563459A (en) Text-driven immersive open scene neural rendering and mixing enhancement method
Xiao et al. Image hazing algorithm based on generative adversarial networks
Seo et al. Image recoloring using linear template mapping
Zhang et al. Refilming with depth-inferred videos
Liu Two decades of colorization and decolorization for images and videos
Chang et al. A self-adaptive single underwater image restoration algorithm for improving graphic quality
CN107194867A (en) A CUDA-based image matting and compositing method
CN112132923A (en) Two-stage digital image style transformation method and system based on style thumbnail high-definition
Ye et al. Hybrid scheme of image’s regional colorization using mask r-cnn and Poisson editing
Liao et al. Depth annotations: Designing depth of a single image for depth-based effects
CN110689609B (en) Image processing method, image processing device, electronic equipment and storage medium
CN117011324A (en) Image processing method, device, electronic equipment and storage medium
CN115170921A (en) Binocular stereo matching method based on bilateral grid learning and edge loss

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170922