CN106851130A - Video splicing method and device - Google Patents

Video splicing method and device

Info

Publication number
CN106851130A
CN106851130A
Authority
CN
China
Prior art keywords
image
video
group
viewpoint
splicing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611145816.9A
Other languages
Chinese (zh)
Inventor
马茜 (Ma Qian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sohu New Media Information Technology Co Ltd
Original Assignee
Beijing Sohu New Media Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sohu New Media Information Technology Co Ltd
Priority to CN201611145816.9A
Publication of CN106851130A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/04: Context-preserving transformations, e.g. by using an importance map
    • G06T 3/047: Fisheye or wide-angle transformations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Circuits (AREA)

Abstract

The present invention discloses a video splicing method and device. The method includes: synchronizing a multi-viewpoint video group and sorting it according to the spatial order of the video content; extracting any one group of images at the same moment from the multi-viewpoint video group as a first image group; performing feature point extraction and matching on each pair of adjacent images in the first image group, to obtain the projective transformation matrix of each image in the first image group; applying, according to the projective transformation matrix of each image in the first image group, a projective transformation to each image in the multi-viewpoint video group; and fusing and splicing the adjacent same-moment images in the multi-viewpoint video group, thereby completing the video splicing of the multi-viewpoint video group. The present invention guarantees both the speed and the accuracy of video splicing.

Description

Video splicing method and device
Technical field
The present invention relates to the field of data processing, and in particular to a video splicing method and device.
Background art
As virtual reality applications continue to gain popularity, demand for panoramic video images keeps growing. However, owing to the hardware limitations of capture devices, obtaining an image of a wide-field scene with an ordinary camera requires adjusting the camera's focal length, zooming the lens to cover the complete scene; and because a camera's resolution is fixed, the larger the captured scene, the lower the resolution density of the resulting image. For oversized scenes or objects, even adjusting the focal length cannot capture everything in a single photograph. Therefore, to obtain a high-resolution, wide-angle view of a scene, images captured from different angles must be stitched and fused into a smooth, seamless whole.
Video splicing technology has broad application prospects in civil use, remote sensing, surveillance, virtual reality, video retrieval, and other fields. Although much research on video splicing has been carried out at home and abroad, existing methods still cannot meet the demands of practical applications in terms of speed and accuracy.
Summary of the invention
To solve the above problems, the present invention provides a video splicing method and device that guarantee both the speed and the accuracy of video splicing.
The present invention provides a video splicing method, the method comprising:
synchronizing a multi-viewpoint video group and sorting it according to the spatial order of the video content;
extracting any one group of images at the same moment from the multi-viewpoint video group as a first image group;
performing feature point extraction and matching on each pair of adjacent images in the first image group, to obtain the projective transformation matrix of each image in the first image group;
applying, according to the projective transformation matrix of each image in the first image group, a projective transformation to each image in the multi-viewpoint video group;
fusing and splicing the adjacent same-moment images in the multi-viewpoint video group, thereby completing the video splicing of the multi-viewpoint video group.
Preferably, before the feature point extraction and matching is performed on each pair of adjacent images in the first image group to obtain the projective transformation matrix of each image in the first image group, the method further comprises:
determining, according to the camera parameter information of each video captured in the multi-viewpoint video group, whether each image in the first image group is a fisheye image;
applying fisheye correction to the images in the first image group that are fisheye images.
Preferably, the method further comprises:
performing image color correction on each image in the first image group, and generating a color correction parameter for each image.
Preferably, before the projective transformation is applied to each image in the multi-viewpoint video group according to the projective transformation matrix of each image in the first image group, the method further comprises:
determining, according to the camera parameter information of each video captured in the multi-viewpoint video group, whether each image in each video of the multi-viewpoint video group is a fisheye image;
applying fisheye correction to the images in each video of the multi-viewpoint video group that are fisheye images.
Preferably, the method further comprises:
performing image color correction on the images of the corresponding videos in the multi-viewpoint video group, using the color correction parameter of each image in the first image group.
Preferably, the method further comprises:
judging whether the color error of each color-corrected image in the multi-viewpoint video group exceeds a threshold;
performing image color correction again on any image above the threshold, and generating a new color correction parameter;
using that color correction parameter to update the color correction parameter of the corresponding image in the first image group.
The present invention also provides a video splicing device, the device comprising:
a synchronization and ordering module, configured to synchronize the multi-viewpoint video group and sort it according to the spatial order of the video content;
an extraction module, configured to extract any one group of images at the same moment from the multi-viewpoint video group as a first image group;
an extraction and matching module, configured to perform feature point extraction and matching on each pair of adjacent images in the first image group, to obtain the projective transformation matrix of each image in the first image group;
a projective transformation module, configured to apply, according to the projective transformation matrix of each image in the first image group, a projective transformation to each image in the multi-viewpoint video group;
a fusion and splicing module, configured to fuse and splice the adjacent same-moment images in the multi-viewpoint video group, completing the video splicing of the multi-viewpoint video group.
Preferably, the device further comprises:
a first determining module, configured to determine, according to the camera parameter information of each video captured in the multi-viewpoint video group, whether each image in the first image group is a fisheye image;
a first fisheye correction module, configured to apply fisheye correction to the images in the first image group that are fisheye images.
Preferably, the device further comprises:
a first color correction module, configured to perform image color correction on each image in the first image group and generate a color correction parameter for each image.
Preferably, the device further comprises:
a second determining module, configured to determine, according to the camera parameter information of each video captured in the multi-viewpoint video group, whether each image in each video of the multi-viewpoint video group is a fisheye image;
a second fisheye correction module, configured to apply fisheye correction to the images in each video of the multi-viewpoint video group that are fisheye images.
Preferably, the device further comprises:
a second color correction module, configured to perform image color correction on the images of the corresponding videos in the multi-viewpoint video group, using the color correction parameter of each image in the first image group.
Preferably, the device further comprises:
a judging module, configured to judge whether the color error of each color-corrected image in the multi-viewpoint video group exceeds a threshold;
a generating module, configured to perform image color correction again on any image above the threshold and generate a new color correction parameter;
an updating module, configured to use that color correction parameter to update the color correction parameter of the corresponding image in the first image group.
The present invention provides a video splicing method. First, the multi-viewpoint video group is synchronized and sorted according to the spatial order of the video content. Secondly, any one group of images at the same moment is extracted from the multi-viewpoint video group as a first image group, and feature point extraction and matching are performed on each pair of adjacent images in the first image group to obtain the projective transformation matrix of each image in the first image group. Then, according to the projective transformation matrix of each image in the first image group, a projective transformation is applied to each image in the multi-viewpoint video group. Finally, the adjacent same-moment images in the multi-viewpoint video group are fused and spliced, completing the video splicing of the multi-viewpoint video group. The video splicing method provided by the present invention can complete video splicing quickly and accurately.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative labor.
Fig. 1 is a flow chart of a video splicing method provided by an embodiment of the present invention;
Fig. 2 is a flow chart of another video splicing method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of an overlapping region provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the overlapping region Region_i of adjacent images Remap_i and Remap_{i+1}, provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of a template binary image determined by image segmentation, provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of a spliced image provided by an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a video splicing device provided by an embodiment of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative labor fall within the scope of protection of the present application.
The particulars of the embodiments are introduced below.
An embodiment of the present invention provides a video splicing method. Referring to Fig. 1, which is a flow chart of a video splicing method provided by an embodiment of the present invention, the method includes:
S101: synchronize the multi-viewpoint video group and sort it according to the spatial order of the video content.
In the embodiment of the present invention, the multi-viewpoint video group is a group of videos shot simultaneously by cameras positioned at different angles; to obtain a video of a wide-field scene, the embodiment of the present invention performs video splicing on this multi-viewpoint video group.
In practical application, the multi-viewpoint video group is first synchronized, that is, the videos in the multi-viewpoint video group are aligned in shooting time. The multi-viewpoint video group is then sorted according to the spatial order of the video content. Here, the video content spatial order is the spatial order inherent in the content being shot: for example, when the videos capture a scene in a room, the video content spatial order is the order of the fixed positions of the scene within the room.
S102: extract any one group of images at the same moment from the multi-viewpoint video group as a first image group.
S103: perform feature point extraction and matching on each pair of adjacent images in the first image group, to obtain the projective transformation matrix of each image in the first image group.
In the embodiment of the present invention, any one group of same-moment images in the multi-viewpoint video group is first extracted and denoted as the first image group. Feature point extraction and matching are then performed on each pair of adjacent images in the first image group, which has been sorted according to the spatial order of the video content, finally yielding the projective transformation matrix of each image in the first image group. The projective transformation matrix of each image in the first image group serves as the projective transformation matrix of every frame of the corresponding video in the multi-viewpoint video group.
S104: apply, according to the projective transformation matrix of each image in the first image group, a projective transformation to each image in the multi-viewpoint video group.
According to the projective transformation matrices obtained for the first image group, the embodiment of the present invention applies a projective transformation to every frame of the corresponding videos in the multi-viewpoint video group, transforming images that belong to different coordinate systems into the same reference coordinate system.
S105: fuse and splice the adjacent same-moment images in the multi-viewpoint video group, completing the video splicing of the multi-viewpoint video group.
Because adjacent images may differ in color, brightness, and so on, and in order to obtain a better result after splicing, the embodiment of the present invention first fuses the adjacent images in the multi-viewpoint video group before splicing, and completes the splicing with the fused images.
An embodiment of the present invention provides a video splicing method. First, the multi-viewpoint video group is synchronized and sorted according to the spatial order of the video content. Secondly, any one group of same-moment images of the multi-viewpoint video group is extracted as a first image group, and feature point extraction and matching are performed on each pair of adjacent images in the first image group to obtain the projective transformation matrix of each image in the first image group. Then, according to these projective transformation matrices, a projective transformation is applied to each image in the multi-viewpoint video group. Finally, the adjacent same-moment images in the multi-viewpoint video group are fused and spliced, completing the video splicing of the multi-viewpoint video group. The video splicing method provided by the embodiment of the present invention can complete video splicing quickly and accurately.
The embodiment of the present invention additionally provides a video splicing method in which the multi-viewpoint video group is assumed to consist of N channels (N > 1).
Referring to Fig. 2, which is a flow chart of another video splicing method provided by an embodiment of the present invention, the video splicing method includes:
S201: synchronize the N-channel multi-viewpoint video group and sort it according to the spatial order of the video content; denote the result as Video_i (i = 0, 1, ..., N-1).
S202: extract the camera parameters of the multi-viewpoint video group.
According to the pre-stored configuration information of the cameras that shot the multi-viewpoint video group, the camera parameters of each Video_i (i = 0, 1, ..., N-1) are extracted. The camera parameters may include the camera focal length f_i, the image resolution Res_i, the lens type CamType_i (i = 0, 1, ..., N-1), and so on.
S203: extract any one group of same-moment images from the multi-viewpoint video group.
Any one group of same-moment images of the multi-viewpoint video group Video_i (i = 0, 1, ..., N-1) is extracted and denoted Frm_i (i = 0, 1, ..., N-1).
S204: if the lens used to shoot an image is determined to be a fisheye lens, apply fisheye correction to the image.
According to CamType_i (i = 0, 1, ..., N-1), judge whether each Frm_i (i = 0, 1, ..., N-1) is a fisheye image: if CamType_i = FishEye for some i in {0, 1, ..., N-1}, the lens used to shoot that image is determined to be a fisheye lens and fisheye correction is applied to the image; otherwise the original image is retained.
The images Frm_i (i = 0, 1, ..., N-1) after the processing of S204 are denoted Frm_i' (i = 0, 1, ..., N-1).
S205: perform image color correction on the images and obtain image color correction parameters.
Image color correction is performed on Frm_i' (i = 0, 1, ..., N-1), and the color-corrected images are denoted Frm_i'' (i = 0, 1, ..., N-1). In addition, performing image color correction on Frm_i' (i = 0, 1, ..., N-1) generates a corresponding color correction parameter Exp_i (i = 0, 1, ..., N-1) for each image.
The meaning of the color correction parameter Exp_i (i = 0, 1, ..., N-1) is that, for a point P(x, y) in Frm_i' (i = 0, 1, ..., N-1) and its corresponding point P'(x, y) in Frm_i'' (i = 0, 1, ..., N-1), the pixel values satisfy Pixel_P'(x, y) = Exp_i * Pixel_P(x, y).
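The relation Pixel_P'(x, y) = Exp_i * Pixel_P(x, y) amounts to a per-image gain. A minimal numpy sketch, assuming 8-bit images and clipping at the valid range (the function name is illustrative):

```python
import numpy as np

def apply_color_correction(image: np.ndarray, exp: float) -> np.ndarray:
    """Scale every pixel by the gain Exp_i and clip to the valid 8-bit range."""
    corrected = image.astype(np.float64) * exp
    return np.clip(corrected, 0, 255).astype(np.uint8)

# a uniform gray patch brightened by Exp_i = 1.5
frm = np.full((2, 2, 3), 100, dtype=np.uint8)
out = apply_color_correction(frm, 1.5)
```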
S206: perform feature point extraction and matching on each pair of adjacent images in the image group.
Feature point extraction and matching are performed on each pair of adjacent images in the image group Frm_i'' (i = 0, 1, ..., N-1), for example on Frm_0'' and Frm_1''.
Specifically, ROI_{0,1} denotes the region of Frm_0'' that overlaps Frm_1'', and ROI_{1,0} denotes the region of Frm_1'' that overlaps Frm_0''. Using the SIFT, SURF, or Harris-SIFT algorithm, feature points are extracted from the pixels within ROI_{0,1} and ROI_{1,0}, denoted ObjectKeypoints_index (index = 0, 1), and feature point descriptors ObjectDescriptors_index (index = 0, 1) are constructed.
Taking the SURF algorithm as an example, the process of extracting feature points and constructing feature point descriptors is described in detail below:
First, using the Hessian matrix, the response values α_{0i} and α_{1i} (i = 0, 1, ..., W*H-1) of each pixel in ROI_{0,1} and ROI_{1,0} are computed respectively, where W*H is the resolution of the videos in the multi-viewpoint video group.
Secondly, candidate feature points are preliminarily determined using a non-maximum suppression algorithm. Specifically, each pixel processed by the Hessian matrix is compared in magnitude with the 26 points in its 3-dimensional neighborhood; if it is the maximum or minimum among those 26 points, it is retained as a preliminarily determined feature point, denoted PriObjectKeypoints_index (index = 0, 1).
Then, sub-pixel feature point locations are obtained by 3-dimensional linear interpolation, while points whose values fall below a certain threshold are removed; raising this threshold reduces the number of detected feature points, so that finally only the strongest feature points remain. These point sets are defined as ObjectKeypoints_index (index = 0, 1).
Finally, the principal direction of each feature point is selected, and the SURF feature point descriptors ObjectDescriptors_index (index = 0, 1) are constructed.
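The Hessian response and 26-neighbor non-maximum suppression steps above can be sketched as follows. This is a simplified illustration using finite differences on a small scale stack, not the box-filter approximation of an actual SURF implementation; all names are hypothetical:

```python
import numpy as np

def hessian_response(img):
    """Determinant of the 2x2 Hessian (Dxx*Dyy - Dxy^2) via finite differences."""
    Dxx = np.gradient(np.gradient(img, axis=1), axis=1)
    Dyy = np.gradient(np.gradient(img, axis=0), axis=0)
    Dxy = np.gradient(np.gradient(img, axis=0), axis=1)
    return Dxx * Dyy - Dxy ** 2

def nonmax_suppress_3d(stack, threshold):
    """Keep (scale, y, x) positions that exceed threshold and are strict maxima
    over the 26 neighbours in their 3x3x3 scale-space neighbourhood."""
    S, H, W = stack.shape
    keypoints = []
    for s in range(1, S - 1):
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                v = stack[s, y, x]
                if v <= threshold:
                    continue
                cube = stack[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                if v >= cube.max() and np.count_nonzero(cube == v) == 1:
                    keypoints.append((s, y, x))
    return keypoints
```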
In addition, the embodiment of the present invention may perform feature point matching using the nearest-neighbor Euclidean distance ratio rule. The specific algorithm includes:
a) Suppose (x_1, x_2, ..., x_n) is the feature vector of a feature point F_0 to be matched in ObjectDescriptors_0, and (x_{k1}', x_{k2}', ..., x_{kn}') is the feature vector of the point F_k' in ObjectDescriptors_1 currently being matched against F_0. Compute the Euclidean distance of the feature vectors, D_k = sqrt(Σ_{i=1..n} (x_i - x_{ki}')²). Traverse all feature points F_k' in ObjectDescriptors_1 and find the feature points F_{k1}' and F_{k2}' with the smallest and second-smallest distances D_{k1} and D_{k2}. Given a threshold η_k, if D_{k1}/D_{k2} < η_k, then F_0 and F_{k1}' are considered a matched pair of feature points. Suppose feature point matching yields m_0 pairs, defined as {f_{i0}, f_{i0}'} (i = 1, 2, ..., m_0).
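The nearest-neighbor Euclidean distance ratio test described above can be sketched as a brute-force matcher in numpy (η_k = 0.7 is an assumed value; real matchers would use a k-d tree or similar index):

```python
import numpy as np

def ratio_test_match(desc0, desc1, eta=0.7):
    """For each descriptor in desc0, accept its nearest neighbour in desc1 only
    when (nearest distance) / (second-nearest distance) < eta."""
    matches = []
    for i, f in enumerate(np.asarray(desc0, float)):
        d = np.linalg.norm(np.asarray(desc1, float) - f, axis=1)  # D_k for all F_k'
        order = np.argsort(d)
        if d[order[0]] / d[order[1]] < eta:
            matches.append((i, int(order[0])))
    return matches

# toy 2-D "descriptors": each point in desc0 has one clear partner in desc1
desc0 = [[0.0, 0.0], [5.0, 5.0]]
desc1 = [[0.1, 0.0], [10.0, 10.0], [5.0, 5.1]]
pairs = ratio_test_match(desc0, desc1)
```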
b) Randomly select k matched pairs (k < m_0) from {f_{i0}, f_{i0}'} (i = 1, 2, ..., m_0) and compute the feature point transformation matrix H. For the remaining points in {f_{i0}, f_{i0}'} (i = 1, 2, ..., m_0), compute the Euclidean distance D_{mk} obtained after applying the transformation matrix H. Given a threshold d_0, if D_{mk} < d_0, the point is regarded as an inlier of the current transformation. Select the inlier set containing the most points and recompute the transformation matrix, denoted H'. Minimize the error by the least squares method and compute the mean error e over the inlier set.
c) Repeat step b) until e < e_0, where e_0 is a preset mean error threshold, obtaining the final m matched pairs {f_i, f_i'} (i = 1, 2, ..., m; m ≤ m_0); the corresponding transformation matrix is defined as H.
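The random-sampling consensus procedure described above can be sketched as follows, under the assumption that the transformation matrix is a homography fitted by the direct linear transform on 4-point samples, with a fixed iteration count standing in for the e < e_0 stopping rule (all names illustrative):

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: solve for the 3x3 H mapping src -> dst (>=4 pairs)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, iters=200, d0=1.0, seed=0):
    """Fit H on random 4-point subsets; keep the largest inlier set, then refit
    H on those inliers (least-squares via the same DLT)."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_inliers = []
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        proj = (H @ np.column_stack([src, np.ones(len(src))]).T).T
        proj = proj[:, :2] / proj[:, 2:3]
        dist = np.linalg.norm(proj - dst, axis=1)       # D_mk per pair
        inliers = np.nonzero(dist < d0)[0]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers

# demo: 9 pairs related by a pure translation (2, 1) plus one gross outlier
SRC = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3],
                [4, 1], [3, 2], [0, 3], [4, 4], [100, 100]], float)
DST = SRC + np.array([2.0, 1.0])
DST[9] = [0.0, 0.0]                                     # the outlier pair
H, inl = ransac_homography(SRC, DST)
```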
S207: compute the projective transformation matrices of the image group Frm_i'' (i = 0, 1, ..., N-1).
The specific algorithm is as follows. First, the image group Frm_i'' (i = 0, 1, ..., N-1) that has undergone feature point extraction and matching is sorted according to the stitching direction (for example, the spatial order of the video content), defining a new image group Img_i (i = 0, 1, ..., N-1), where Img_0 and Img_{N-1} are also treated as adjacent images and feature point extraction and matching are completed for them; the corresponding transformation matrices are H_i' (i = 0, 1, ..., N). That is, suppose a point P(X, Y, Z) in 3-dimensional space has corresponding points P_0 and P_1 in Img_0 and Img_1, with homogeneous coordinates (x_0, y_0, ω)^T and (x_1, y_1, ω)^T respectively; then (x_1, y_1, ω)^T = H_1' (x_0, y_0, ω)^T. Using H_i' (i = 0, 1, ..., N), with Img_0 as the reference image, the other images Img_i (i = 0, 1, ..., N-1) are transformed into the same reference coordinate system as Img_0, and the projective transformation matrices T_i' (i = 0, 1, ..., N) are computed.
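Bringing every Img_i into the reference frame of Img_0 from the pairwise matrices amounts to accumulating matrix products. A sketch under the assumption that each pairwise H maps Img_i into the coordinates of Img_{i-1} (the composition direction depends on the convention chosen; the function name is illustrative):

```python
import numpy as np

def chain_to_reference(pairwise_H):
    """Accumulate pairwise homographies into transforms T_i taking every Img_i
    into the reference frame of Img_0 (T_0 is the identity)."""
    T = [np.eye(3)]
    for H in pairwise_H:
        T.append(T[-1] @ H)                 # compose with everything upstream
    return [Ti / Ti[2, 2] for Ti in T]      # normalise so T[2,2] == 1

# two pure translations: Img_1 is shifted by (10, 0) in Img_0, Img_2 by (5, 0) in Img_1
H1 = np.array([[1, 0, 10], [0, 1, 0], [0, 0, 1]], float)
H2 = np.array([[1, 0, 5], [0, 1, 0], [0, 0, 1]], float)
T = chain_to_reference([H1, H2])            # Img_2 ends up shifted by (15, 0)
```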
In addition, the embodiment of the present invention may also select cylindrical projection, cubic projection, spherical projection, or a similar mode to transform each image into one reference coordinate system so that the overlapping regions of adjacent images are aligned. The overlapping region of adjacent images is denoted Region_i (i = 0, 1, ..., N). As shown in Fig. 3, a schematic diagram of an overlapping region provided by an embodiment of the present invention, Remap_i and Remap_{i+1} are adjacent images, and Region_i is defined as the rectangular region whose upper-left corner is the upper-left corner of Remap_{i+1} and whose lower-right corner is the lower-right corner of Remap_i.
After the projective transformation matrices are computed, the video splicing method provided by the embodiment of the present invention uses them to perform the video splicing of the multi-viewpoint video group image by image, frame by frame.
S208: read each frame of the multi-viewpoint video group frame by frame; if a frame is a fisheye image, apply fisheye correction.
Each frame Frame_i (i = 0, 1, ..., N-1) of Video_i (i = 0, 1, ..., N-1) is read frame by frame; if Frame_i (i = 0, 1, ..., N-1) is a fisheye image, fisheye correction is applied to it, and the result is denoted Frame_i' (i = 0, 1, ..., N-1).
S209: perform image color correction on each frame according to the color correction parameters, and judge whether the color error of the image exceeds the threshold.
According to the color correction parameters Exp_i (i = 0, 1, ..., N-1), image color correction is performed on Frame_i' (i = 0, 1, ..., N-1), and it is judged whether the color error of the corrected frames Frame_i'' (i = 0, 1, ..., N-1) exceeds the threshold Err_exp; if so, S210 is performed, otherwise S211.
S210: perform image color correction on the image again, and obtain a new color correction parameter.
If the color error exceeds the threshold Err_exp, image color correction is applied to Frame_i' (i = 0, 1, ..., N-1) again to obtain Frame_i'' (i = 0, 1, ..., N-1), and a new color correction parameter Exp_i (i = 0, 1, ..., N-1) is obtained.
S211: apply the projective transformation to the image using the previously obtained projective transformation matrix.
Using the previously obtained projective transformation matrices, a projective transformation is applied to Frame_i'' (i = 0, 1, ..., N-1) to obtain Remap_i with resolution w_i*h_i, and the overlapping region Region_i (i = 0, 1, ..., N-1) of adjacent images is obtained. As shown in Fig. 4, a schematic diagram of the overlapping region Region_i of the adjacent images Remap_i and Remap_{i+1}.
If a pixel in Remap_i has no corresponding point in Frame_i'' (i = 0, 1, ..., N-1), its pixel value is set to 0.
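The projective transformation of S211, including setting pixels with no corresponding point to 0, can be sketched as an inverse mapping: each output pixel of Remap_i is traced back through H to its source in Frame_i''. This uses nearest-neighbor sampling for brevity; a real implementation would interpolate:

```python
import numpy as np

def warp_image(img, H, out_shape):
    """Inverse mapping: for each output pixel, apply H^-1 to locate the source
    pixel; output positions with no source correspondence stay 0."""
    h, w = out_shape
    out = np.zeros(out_shape, dtype=img.dtype)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ pts
    sx = np.round(src[0] / src[2]).astype(int).reshape(h, w)
    sy = np.round(src[1] / src[2]).astype(int).reshape(h, w)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out[valid] = img[sy[valid], sx[valid]]
    return out

# a single bright pixel shifted right by 1 under a translation homography
img = np.zeros((3, 3)); img[0, 0] = 7.0
H = np.array([[1, 0, 1], [0, 1, 0], [0, 0, 1]], float)
warped = warp_image(img, H, (3, 3))
```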
S212: perform image fusion and splicing on Remap_i (i = 0, 1, ..., N-1).
Image fusion starts with Remap_0 as the initial foreground image. A panorama canvas Canvas is defined, where the resolution of Canvas is w_c*h_c, with h_c = max(h_0, h_1, ..., h_{N-1}) and w_c = w_{N-1} - Region_{N-1}.width, i.e. w_c is w_{N-1} minus the width of Region_{N-1}.
In practical application, image fusion and splicing is a relatively mature technology; one implementation of image fusion and splicing is given below. Specifically:
a) For a point Point(x_p, y_p) in Canvas, suppose the coordinates of the upper-left and lower-right corners of Region_0 are (x_ul, y_ul) and (x_lr, y_lr) respectively. If x_p < x_ul and the pixel value of (x_p, y_p) in the foreground image Remap_0 is non-zero, the point is assigned the pixel value of the foreground image Remap_0. If x_p > x_lr and the pixel value of the current superimposed image Remap_1 is non-zero, it is assigned the pixel value of the superimposed image Remap_1. If x_ul ≤ x_p ≤ x_lr, further processing is applied.
b) Within Region_0, a template binary image Mask_0 is determined using an image segmentation algorithm; as shown in Fig. 5, a schematic diagram of a template binary image determined by image segmentation, provided by an embodiment of the present invention. A point whose value in Mask_0 is 0 takes the corresponding pixel value of Remap_0, while a point whose value is 1 takes the corresponding pixel value of Remap_1. The image segmentation algorithm may be the watershed algorithm, GraphCut, or the like. The spliced image Pano_0 is obtained after superposition using Mask_0; referring to Fig. 6, a schematic diagram of a spliced image provided by an embodiment of the present invention.
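The template-mask superposition described above, where a mask value of 0 selects the Remap_0 pixel and 1 selects the Remap_1 pixel, is a per-pixel select. A minimal sketch for single-channel images (for color images the mask would be broadcast across the channel axis):

```python
import numpy as np

def composite_with_mask(fg, overlay, mask):
    """Template-mask superposition: where mask is 0 keep the foreground pixel
    (Remap_0), where mask is 1 take the superimposed image's pixel (Remap_1)."""
    return np.where(np.asarray(mask).astype(bool), overlay, fg)

fg = np.ones((2, 2))            # stand-in for Remap_0 inside Region_0
ov = np.full((2, 2), 9.0)       # stand-in for Remap_1
mask = np.array([[0, 1], [0, 1]])
pano = composite_with_mask(fg, ov, mask)
```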
c) The Laplacian pyramid blending algorithm is applied to Pano_0 for fusion, yielding the optimized spliced image Pano_0'.
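A minimal numpy sketch of the Laplacian pyramid blending used in the fusion step above, with 2x2 block-average downsampling and nearest-neighbor upsampling standing in for Gaussian filtering; image sides must be divisible by 2^levels, and a mask value of 1 selects the first image (all names illustrative):

```python
import numpy as np

def down(img):
    """2x2 block-average downsample (stand-in for Gaussian filter + decimate)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):
    """Nearest-neighbour upsample to twice the size."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def pyramid_blend(a, b, mask, levels=3):
    """Blend the Laplacian pyramids of a and b, weighted level by level by a
    Gaussian pyramid of mask (mask == 1 selects a, mask == 0 selects b)."""
    ga, gb, gm = [a.astype(float)], [b.astype(float)], [mask.astype(float)]
    for _ in range(levels):
        ga.append(down(ga[-1]))
        gb.append(down(gb[-1]))
        gm.append(down(gm[-1]))
    # Laplacian levels hold the detail lost between successive Gaussian levels
    la = [ga[i] - up(ga[i + 1]) for i in range(levels)] + [ga[levels]]
    lb = [gb[i] - up(gb[i + 1]) for i in range(levels)] + [gb[levels]]
    blended = [gm[i] * la[i] + (1 - gm[i]) * lb[i] for i in range(levels + 1)]
    out = blended[levels]
    for i in range(levels - 1, -1, -1):     # collapse the pyramid
        out = up(out) + blended[i]
    return out
```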
d) With Pano_0' as the new foreground image and Remap_2 as the superimposed image, the process jumps back to step a) to begin a new superposition, and so on, until all Remap_i (i = 0, 1, ..., N-1) have been superimposed, yielding the optimized Pano_{N-1}', i.e. the panoramic spliced image of the current frame.
S213: judge whether the currently processed frame is the last frame; if so, the video splicing of the multi-viewpoint video group is completed; if not, jump back to step S208 to begin the video splicing of the next frame.
Referring to Fig. 7, the present invention also provides a video splicing device; Fig. 7 is a schematic structural diagram of a video splicing device provided by an embodiment of the present invention. The device includes:
a synchronization and ordering module 701, configured to synchronize a multi-view video group and sort it according to the spatial order of the video content;
an extraction module 702, configured to extract any group of simultaneous images of the multi-view video group as a first image group;
an extraction and matching module 703, configured to perform feature point extraction and matching on each pair of adjacent images in the first image group, to obtain a projective transformation matrix of each image in the first image group;
a projective transformation module 704, configured to apply a projective transformation to each image in the multi-view video group according to the projective transformation matrix of each image in the first image group;
a fusion and splicing module 705, configured to fuse and splice the simultaneous adjacent images in the multi-view video group, to complete the video splicing of the multi-view video group.
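The projective transformation applied by module 704 maps each pixel through its image's 3x3 matrix in homogeneous coordinates. A minimal sketch of that mapping (function name illustrative):

```python
import numpy as np

def apply_homography(H, pts):
    """Map N x 2 pixel coordinates through a 3x3 projective transformation
    matrix H: lift to homogeneous coordinates, multiply, then divide by w."""
    pts_h = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    mapped = pts_h @ H.T                      # rows are H @ [x, y, 1]
    return mapped[:, :2] / mapped[:, 2:3]     # perspective divide
```

Warping a whole image applies this mapping (in practice, its inverse, so each output pixel samples the source) to every pixel coordinate, e.g. via OpenCV's `warpPerspective`.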
Preferably, the device further includes:
a first determining module, configured to determine, according to the camera parameter information of each video capturing the multi-view video group, whether each image in the first image group is a fisheye image;
a first fisheye correction module, configured to apply fisheye correction to the images in the first image group that are fisheye images.
Preferably, the device further includes:
a first color correction module, configured to perform image color correction on each image in the first image group and generate a color correction parameter corresponding to each image.
Preferably, the device further includes:
a second determining module, configured to determine, according to the camera parameter information of each video capturing the multi-view video group, whether the images in each video of the multi-view video group are fisheye images;
a second fisheye correction module, configured to apply fisheye correction to the images in each video of the multi-view video group that are fisheye images.
Preferably, the device further includes:
a second color correction module, configured to perform image color correction on the images in the corresponding videos of the multi-view video group according to the color correction parameter corresponding to each image in the first image group.
Preferably, the device further includes:
a judging module, configured to judge whether the color error of each color-corrected image in the multi-view video group exceeds a threshold;
a generating module, configured to perform image color correction again on the images whose error exceeds the threshold and generate new color correction parameters;
an updating module, configured to update the color correction parameters of the corresponding images in the first image group with the new color correction parameters.
The video splicing device provided by the embodiment of the present invention can realize the following functions: the multi-view video group is synchronized and sorted according to the spatial order of the video content; any group of simultaneous images of the multi-view video group is extracted as a first image group; feature points are extracted and matched for each pair of adjacent images in the first image group, obtaining a projective transformation matrix of each image in the first image group; then, according to these projective transformation matrices, a projective transformation is applied to each image in the multi-view video group; finally, the simultaneous adjacent images in the multi-view video group are fused and spliced, completing the video splicing of the multi-view video group. The video splicing device provided by the embodiment of the present invention can complete video splicing quickly and accurately.
As for the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant parts. The device embodiments described above are merely schematic; the units described as separate components may or may not be physically separate, and the parts shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. Those of ordinary skill in the art can understand and implement them without creative effort.
It should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes the element.
The video splicing method and device provided by the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principle and implementation of the invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation of the invention.

Claims (12)

1. A video splicing method, characterized in that the method comprises:
synchronizing a multi-view video group, and sorting it according to the spatial order of the video content;
extracting any group of simultaneous images of the multi-view video group as a first image group;
performing feature point extraction and matching on each pair of adjacent images in the first image group, to obtain a projective transformation matrix of each image in the first image group;
applying a projective transformation to each image in the multi-view video group according to the projective transformation matrix of each image in the first image group;
fusing and splicing the simultaneous adjacent images in the multi-view video group, to complete the video splicing of the multi-view video group.
2. The video splicing method according to claim 1, characterized in that before the performing feature point extraction and matching on each pair of adjacent images in the first image group to obtain a projective transformation matrix of each image in the first image group, the method further comprises:
determining, according to camera parameter information of each video capturing the multi-view video group, whether each image in the first image group is a fisheye image;
applying fisheye correction to the images in the first image group that are fisheye images.
3. The video splicing method according to claim 1 or 2, characterized in that the method further comprises:
performing image color correction on each image in the first image group, and generating a color correction parameter corresponding to each image.
4. The video splicing method according to claim 1 or 2, characterized in that before the applying a projective transformation to each image in the multi-view video group according to the projective transformation matrix of each image in the first image group, the method further comprises:
determining, according to camera parameter information of each video capturing the multi-view video group, whether the images in each video of the multi-view video group are fisheye images;
applying fisheye correction to the images in each video of the multi-view video group that are fisheye images.
5. The video splicing method according to claim 3, characterized in that the method further comprises:
performing image color correction on the images in the corresponding videos of the multi-view video group according to the color correction parameter corresponding to each image in the first image group.
6. The video splicing method according to claim 5, characterized in that the method further comprises:
judging whether the color error of each color-corrected image in the multi-view video group exceeds a threshold;
performing image color correction again on the images whose error exceeds the threshold, and generating color correction parameters;
updating the color correction parameters of the corresponding images in the first image group with the generated color correction parameters.
7. A video splicing device, characterized in that the device comprises:
a synchronization and ordering module, configured to synchronize a multi-view video group and sort it according to the spatial order of the video content;
an extraction module, configured to extract any group of simultaneous images of the multi-view video group as a first image group;
an extraction and matching module, configured to perform feature point extraction and matching on each pair of adjacent images in the first image group, to obtain a projective transformation matrix of each image in the first image group;
a projective transformation module, configured to apply a projective transformation to each image in the multi-view video group according to the projective transformation matrix of each image in the first image group;
a fusion and splicing module, configured to fuse and splice the simultaneous adjacent images in the multi-view video group, to complete the video splicing of the multi-view video group.
8. The video splicing device according to claim 7, characterized in that the device further comprises:
a first determining module, configured to determine, according to camera parameter information of each video capturing the multi-view video group, whether each image in the first image group is a fisheye image;
a first fisheye correction module, configured to apply fisheye correction to the images in the first image group that are fisheye images.
9. The video splicing device according to claim 7 or 8, characterized in that the device further comprises:
a first color correction module, configured to perform image color correction on each image in the first image group, and generate a color correction parameter corresponding to each image.
10. The video splicing device according to claim 7 or 8, characterized in that the device further comprises:
a second determining module, configured to determine, according to camera parameter information of each video capturing the multi-view video group, whether the images in each video of the multi-view video group are fisheye images;
a second fisheye correction module, configured to apply fisheye correction to the images in each video of the multi-view video group that are fisheye images.
11. The video splicing device according to claim 9, characterized in that the device further comprises:
a second color correction module, configured to perform image color correction on the images in the corresponding videos of the multi-view video group according to the color correction parameter corresponding to each image in the first image group.
12. The video splicing device according to claim 11, characterized in that the device further comprises:
a judging module, configured to judge whether the color error of each color-corrected image in the multi-view video group exceeds a threshold;
a generating module, configured to perform image color correction again on the images whose error exceeds the threshold, and generate color correction parameters;
an updating module, configured to update the color correction parameters of the corresponding images in the first image group with the generated color correction parameters.
CN201611145816.9A 2016-12-13 2016-12-13 A kind of video-splicing method and device Pending CN106851130A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611145816.9A CN106851130A (en) 2016-12-13 2016-12-13 A kind of video-splicing method and device


Publications (1)

Publication Number Publication Date
CN106851130A true CN106851130A (en) 2017-06-13

Family

ID=59139909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611145816.9A Pending CN106851130A (en) 2016-12-13 2016-12-13 A kind of video-splicing method and device

Country Status (1)

Country Link
CN (1) CN106851130A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102402855A (en) * 2011-08-29 2012-04-04 深圳市蓝盾科技有限公司 Method and system of fusing real-time panoramic videos of double cameras for intelligent traffic
CN103338343A (en) * 2013-05-29 2013-10-02 山西绿色光电产业科学技术研究院(有限公司) Multi-image seamless splicing method and apparatus taking panoramic image as reference
CN103856727A (en) * 2014-03-24 2014-06-11 北京工业大学 Multichannel real-time video splicing processing system
KR20150142846A (en) * 2014-06-12 2015-12-23 주식회사그린티 mosaic image of black box
CN106056539A (en) * 2016-06-24 2016-10-26 中国南方电网有限责任公司 Panoramic video splicing method


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112598571A (en) * 2019-11-27 2021-04-02 中兴通讯股份有限公司 Image scaling method, device, terminal and storage medium
CN111340710A (en) * 2019-12-31 2020-06-26 智慧互通科技有限公司 Method and system for acquiring vehicle information based on image stitching
CN111340710B (en) * 2019-12-31 2023-11-07 智慧互通科技股份有限公司 Method and system for acquiring vehicle information based on image stitching
CN113784059A (en) * 2021-08-03 2021-12-10 阿里巴巴(中国)有限公司 Video generation and splicing method, equipment and storage medium for clothing production
CN113784059B (en) * 2021-08-03 2023-08-18 阿里巴巴(中国)有限公司 Video generation and splicing method, equipment and storage medium for clothing production
CN114998105A (en) * 2022-06-02 2022-09-02 成都弓网科技有限责任公司 Monitoring method and system based on multi-camera pantograph video image splicing


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170613