CN108093188B - Large-field-of-view video panorama stitching method based on a hybrid projection transformation model - Google Patents

Large-field-of-view video panorama stitching method based on a hybrid projection transformation model

Info

Publication number
CN108093188B
CN108093188B (application CN201711419418.6A)
Authority
CN
China
Prior art keywords
image
region
pixel
characteristic point
registration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711419418.6A
Other languages
Chinese (zh)
Other versions
CN108093188A (en)
Inventor
袁丁
胡晓辉
张弘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201711419418.6A priority Critical patent/CN108093188B/en
Publication of CN108093188A publication Critical patent/CN108093188A/en
Application granted granted Critical
Publication of CN108093188B publication Critical patent/CN108093188B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a large-field-of-view video panorama stitching method based on a hybrid projection transformation model, comprising five main steps. Step 1: down-sample the video to reduce the computational load. Step 2: detect and extract SIFT feature points. Step 3: cluster the feature points, segment the image into regions with an SVM, and match the feature points. Step 4: compute the hybrid projection matrices. Step 5: iterate to register and fuse the subsequent images, finally producing the stitched panorama. The invention effectively avoids both the large error of traditional single-projection-matrix methods and the high time cost of multi-projection-matrix methods, and has broad application prospects.

Description

Large-field-of-view video panorama stitching method based on a hybrid projection transformation model
Technical field
The present invention relates to a large-field-of-view video panorama stitching method based on a hybrid projection transformation model. The method achieves accurate and fast stitching of video image sequences to generate a panorama and can be applied in technologies such as virtual reality. It belongs to the field of computer vision.
Background technique
Large-field-of-view video panorama stitching is a hot research topic in the current video processing field. It stitches the image sequence of a video into a single wide-viewing-angle image that covers the full scene, overcoming the limited shooting angle of the capture device, which otherwise prevents panoramic observation. The technology has been widely applied in fields such as remote sensing, monitoring, and virtual reality. In remote sensing, acquiring high-resolution imagery requires stitching to build large-area panoramic images; in monitoring, the narrow-lens video streams that are collected must likewise be stitched into a panorama before they can be observed and processed; and virtual reality, above all, relies on the construction of panoramas to give users a more intuitive, immersive experience.
In recent years, although panorama stitching has been widely studied and many mature methods exist, many challenging problems remain to be solved. The biggest one is that most current panorama stitching methods rest on an assumed premise: either the scene is far from the camera, so that all image content can be treated as lying in a single plane, or the images are captured by rotating the camera about its projection center. Only under these assumptions can a single homography matrix be used as the projective transformation model for image registration.
Such methods, however, are not robust: real video capture is highly random and unconstrained and generally fails to satisfy the above assumptions, so a panorama stitched with a single homography matrix contains errors. Currently, most algorithms compensate for the registration error with image fusion in post-processing, but this does not address the root cause.
Motivated by this, stitching video panoramas with multiple homography matrices has gradually begun to be studied. A representative example is the Dual-Homography Warping (DHW) method published at CVPR 2011, which registers and stitches the entire image by combining the homography matrices of two planes. In its weight computation, however, Euclidean distances must be evaluated, and the number of evaluations grows sharply with the number of feature points, so the method is very time-consuming.
Summary of the invention
The technical problem solved by the invention: to overcome the deficiencies of the prior art and provide a large-field-of-view video panorama stitching method based on a hybrid projection transformation model that takes less time, achieves higher accuracy, imposes no constraint on how the video is shot, and therefore has a wide range of applications.
The technical solution of the invention: the present invention aims to realize large-field-of-view video panorama stitching. Its input is a video obtained by a camera and its output is a panorama image. The invention assumes that the video images can be broadly divided into two planes (a background plane and a foreground plane). The technical solution for stitching the panorama is described below:
(1) Down-sample the video image sequence, ensuring an overlap of at least 1/3 between every two adjacent images;
(2) Detect SIFT feature points in the overlapping region of the first two sampled images to obtain a feature point set, and extract a feature vector for each feature point;
(3) Cluster the feature point set, divide the image into three regions, denoted S_b, S_u and S_f, using a support vector machine (SVM), and match the feature points;
(4) Reject mismatched points among the feature points in S_b and S_f using the Random Sample Consensus (RANSAC) method, and compute the homography matrices, denoted H_b and H_f respectively;
(5) Register the first two sampled images: during registration, the pixels of the three regions S_b, S_u and S_f are mapped with the hybrid projection transformation model to register the second image to the first;
(6) Register the third image to the second by iterating the homography matrices; repeat this step to register the subsequent images in the sequence, and finally fuse the images to complete the stitching of the entire panorama.
The advantages of the present invention over the prior art are:
(1) After clustering the feature points, the invention divides the image into regions with an SVM, preparing for the subsequent image registration;
(2) During image registration, the novel hybrid projection transformation model achieves a better stitching result than a single projection matrix;
(3) The hybrid projection transformation model is not only closer to the true scene geometry than traditional mixed-projection models such as DHW, but also greatly reduces the registration time, improving the efficiency of the whole stitching process.
Detailed description of the invention
Fig. 1 shows the SVM-based segmentation of an image into three regions;
Fig. 2 is a flow diagram of the method of the present invention;
Fig. 3 shows experimental results of the present invention.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and embodiments.
The video sequences were captured with a conventional SLR camera, and the panorama stitching was implemented in Matlab R2016b. The flow chart of the invention is shown in Fig. 2. To better understand the technical solution, the specific embodiment is further described below:
Step 1: read in the video sequence and down-sample it with Matlab to reduce the image registration workload. When sampling, it is only necessary to guarantee an overlap of about 1/3 between neighbouring sampled images.
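The down-sampling of Step 1 can be sketched as follows (a minimal Python illustration; the frame list and the stride value are assumptions for the example, not part of the invention — in practice the stride is chosen so that adjacent kept frames still overlap by at least 1/3):

```python
def downsample_frames(frames, stride):
    """Keep every `stride`-th frame; the stride must be small enough that
    adjacent kept frames still share at least 1/3 of their field of view."""
    return frames[::stride]

frames = list(range(100))        # stand-in for 100 decoded video frames
kept = downsample_frames(frames, 5)
```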
Step 2: the present invention detects feature points in two adjacent images I_1 and I_2 with the SIFT operator and computes their SIFT feature vectors.
Step 3: treat the two-dimensional image plane as a two-dimensional feature space, with the pixel coordinates u and v as the two feature values. The feature point set is clustered with the K-means method, with K = 2. Then the invention uses a support vector machine (SVM) to divide the image plane, as shown in Fig. 1, where each small circle represents a SIFT feature point. The two lines tangent to the support vectors can be written as:
w^T x + b_1 = 0 (1)
w^T x + b_2 = 0 (2)
where x = (u, v)^T.
The pixel region satisfying w^T x + b_1 < 0 is denoted S_b; the region satisfying w^T x + b_2 > 0 is denoted S_f; and the region satisfying w^T x + b_1 > 0 and w^T x + b_2 < 0 is denoted S_u.
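The three-region split by the two margin lines (1) and (2) can be sketched as follows. This is a minimal illustration of the classification step only: the coefficients w, b_1, b_2 are assumed to have already been produced by the SVM, and the toy values below (a vertical split) are hypothetical.

```python
import numpy as np

def partition_pixels(points, w, b1, b2):
    """Split pixel coordinates into S_b / S_u / S_f using the two
    margin hyperplanes w^T x + b1 = 0 and w^T x + b2 = 0."""
    s = points @ w                              # signed value w^T x per point
    Sb = points[s + b1 < 0]                     # background side of line (1)
    Sf = points[s + b2 > 0]                     # foreground side of line (2)
    Su = points[(s + b1 > 0) & (s + b2 < 0)]    # undefined strip between the lines
    return Sb, Su, Sf

# toy example: w, b1, b2 chosen so the strip is 10 < u < 20 (hypothetical values)
pts = np.array([[5.0, 1.0], [15.0, 1.0], [25.0, 1.0]])
w = np.array([1.0, 0.0])
Sb, Su, Sf = partition_pixels(pts, w, -10.0, -20.0)
```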
The SIFT feature points in regions S_b and S_f are then matched. To improve the matching accuracy, for any point p_1 to be matched in region S_b (resp. S_f) of I_1, let d_1 be its distance to the nearest feature point in the feature point set of region S_b (resp. S_f) of I_2, and d_2 its distance to the second-nearest feature point there. The distance ratio is defined as:
Ratio = d_2 / d_1 (3)
If Ratio > ε (ε is an empirical threshold, determined by extensive experiments in the present invention as ε ≥ 5), the point p_1 is considered to have found a match.
Traversing all SIFT feature points of I_1 yields the initial sets of matched point pairs, denoted P_b and P_f respectively.
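The distance-ratio test above can be sketched as follows (a minimal illustration with toy 2-D descriptors; real SIFT descriptors are 128-dimensional, and the ratio Ratio = d_2/d_1 with ε = 5 follows the text):

```python
import numpy as np

def match_features(desc1, desc2, eps=5.0):
    """For each descriptor in desc1, find its nearest and second-nearest
    neighbours in desc2; accept the match only if d2 / d1 > eps."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        d1, d2 = dists[order[0]], dists[order[1]]
        if d1 == 0 or d2 / d1 > eps:       # highly distinctive match only
            matches.append((i, int(order[0])))
    return matches

desc1 = np.array([[0.0, 0.0], [5.0, 5.0]])
desc2 = np.array([[0.1, 0.0], [9.0, 9.0], [5.0, 5.1]])
m = match_features(desc1, desc2)
```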
Step 4: compute the homography matrix H for regions S_b and S_f with the RANSAC method. For two two-dimensional images, corresponding points satisfy a homography (projective transformation), as in (4), where (x, y, 1)^T and (x', y', 1)^T are the homogeneous projection coordinates of a space point X in the two images:
(x', y', 1)^T ∝ H (x, y, 1)^T (4)
where
H = [h_11 h_12 h_13; h_21 h_22 h_23; h_31 h_32 h_33] (5)
For region S_b, 4 matched pairs are randomly selected from the matched point set P_b each time to compute H_b, and the fraction of points in P_b that satisfy (4) is counted, denoted Inlier_Ratio. The homography matrix that maximizes Inlier_Ratio is finally selected as H_b.
Similarly, the homography matrix of region S_f is computed and denoted H_f.
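The RANSAC homography estimation of Step 4 can be sketched as follows. This is a generic illustration (direct linear transform plus a random-sampling loop), not the patent's exact implementation; the iteration count, threshold, and toy point set are assumptions.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: estimate H from >= 4 point pairs."""
    A = []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, x * xp, y * xp, xp])
        A.append([0, 0, 0, -x, -y, -1, x * yp, y * yp, yp])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)        # null-space vector = homography entries
    return H / H[2, 2]

def ransac_homography(src, dst, iters=200, thresh=2.0, seed=0):
    """Repeatedly fit H to 4 random pairs; keep the H with most inliers."""
    rng = np.random.default_rng(seed)
    best_H, best_inliers = None, 0
    n = len(src)
    for _ in range(iters):
        idx = rng.choice(n, 4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        proj = np.hstack([src, np.ones((n, 1))]) @ H.T
        proj = proj[:, :2] / proj[:, 2:3]           # back to inhomogeneous coords
        inliers = int(np.sum(np.linalg.norm(proj - dst, axis=1) < thresh))
        if inliers > best_inliers:
            best_H, best_inliers = H, inliers
    return best_H, best_inliers

# toy data: a pure translation by (10, 5) is itself a homography
src = np.array([[0.0, 0], [1, 0], [0, 1], [1, 1], [2, 3], [4, 2]])
dst = src + np.array([10.0, 5.0])
H, inliers = ransac_homography(src, dst)
```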
Step 5: with the two homography matrices obtained, the images can be registered. For the three regions, the invention uses three registration models. The pixels of region S_b are regarded as belonging to the plane containing S_b, so they are registered with the homography matrix H_b, as in (6), where (x, y, 1)^T are the homogeneous coordinates of a pixel in I_2 and (x', y', 1)^T its coordinates after projective transformation into I_1:
(x', y', 1)^T ∝ H_b (x, y, 1)^T (6)
Similarly, the pixels of region S_f are projectively transformed as in (7):
(x', y', 1)^T ∝ H_f (x, y, 1)^T (7)
The pixels of S_u are called "undefined pixels": it cannot be accurately judged which plane these points belong to. To address this, the invention proposes the following method.
(1) For all points in S_u, define the homography matrix:
H_ij = w_b H_b + w_f H_f (8)
where H_ij denotes the homography matrix of the pixel at (i, j), H_b and H_f denote the homography matrices of the two image regions, and w_b and w_f are the weight coefficients of the two planes' homography matrices.
(2) Compute the distance from each pixel of S_u to the nearest feature point in the feature point sets of regions S_b and S_f, denoted d_b and d_f respectively. The invention replaces the traditional Euclidean distance with the Manhattan distance in this computation, which saves a significant amount of time. The weights are then:
w_b = d_f / (d_b + d_f) (9)
w_f = d_b / (d_b + d_f) (10)
In conclusion the hybrid projection transformation model of available each pixel:
Wherein, background indicates background area, and undefined area indicates indefinite region, foreground table Show foreground area.
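The per-pixel model of Eq. (11), with the Manhattan-distance weights of Eqs. (9)-(10), can be sketched as follows. The matrices H_b, H_f, feature sets, and the pixel label are toy assumptions for the example (note that blending scaled matrices here is purely a numeric illustration):

```python
import numpy as np

def hybrid_homography(p, label, Hb, Hf, pts_b, pts_f):
    """Return the homography used for pixel p under the hybrid model:
    Hb in the background region, Hf in the foreground region, and a
    Manhattan-distance-weighted blend of both in the undefined strip."""
    if label == "background":
        return Hb
    if label == "foreground":
        return Hf
    # undefined pixel: Manhattan distance to the nearest feature of each set
    db = np.min(np.abs(pts_b - p).sum(axis=1))
    df = np.min(np.abs(pts_f - p).sum(axis=1))
    wb, wf = df / (db + df), db / (db + df)     # Eqs. (9) and (10)
    return wb * Hb + wf * Hf                    # Eq. (8)

Hb = np.eye(3)
Hf = 3 * np.eye(3)
pts_b = np.array([[0.0, 0.0]])
pts_f = np.array([[10.0, 0.0]])
H = hybrid_homography(np.array([2.0, 0.0]), "undefined", Hb, Hf, pts_b, pts_f)
```

For the example pixel, d_b = 2 and d_f = 8, so w_b = 0.8 and w_f = 0.2, and the blended matrix is 1.4 times the identity.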
The above steps complete the registration of the first two images.
Step 6: the above five steps register the first two images; the third image I_3 and the subsequent images must then be registered and stitched. The invention achieves this by iterating the homography matrices. For clarity, the subscripts of H_ij are omitted in this step and the matrix is written simply as H.
(1) For the region where I_2 and I_3 overlap, formula (12) can be used to iterate the projective transformation directly:
H_{3→1}(I_3) = H_{2→1}(H_{3→2}(I_3)) (12)
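In matrix form, the chaining of Eq. (12) is a product of the two homography matrices. A minimal sketch (the two translation matrices are toy values):

```python
import numpy as np

def compose(H21, H32):
    """Chain projective maps: first I3 -> I2 (H32), then I2 -> I1 (H21)."""
    H31 = H21 @ H32
    return H31 / H31[2, 2]       # renormalize so H[2, 2] == 1

# toy example: two translations compose into their sum
H21 = np.array([[1.0, 0, 4], [0, 1, 2], [0, 0, 1]])   # I2 -> I1: shift (4, 2)
H32 = np.array([[1.0, 0, 1], [0, 1, 3], [0, 0, 1]])   # I3 -> I2: shift (1, 3)
H31 = compose(H21, H32)                               # I3 -> I1: shift (5, 5)
```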
(2) For the region of I_3 that does not overlap I_2, formula (13) can be used to apply a weighted iterative projective transformation,
where K is defined as the set of SIFT feature points in the overlapping region of I_2 and I_3, and the weight λ_q of a point p is defined as λ_q = 1/d_Manh, with d_Manh the Manhattan distance between H_{3→2}(p) and q. The λ_q are then normalized so that Σ λ_q = 1.
Summing up, we obtain:
H_{3→1}(p) = Σ_{q∈K} λ_q H_{2→1}(q) H_{3→2}(p) (14)
where H_{2→1}(q) denotes the hybrid homography matrix at feature point q. The projection matrix of the third image relative to the first can thus be computed; the projective transformation is then applied, completing the registration of the third image. The subsequent images are handled by iterating in the same way.
Finally, the registered images are fused. The invention uses the traditional fade-in/fade-out fusion method (i.e. the linear interpolation blending algorithm), applying a linear weighting function to achieve a smooth transition near the boundary of the overlapping region. The weighting function is of the form:
I(x, y) = α(x, y) I_1(x, y) + (1 − α(x, y)) I_2(x, y) (15)
where I_1 and I_2 denote the two adjacent images and α decreases linearly from 1 to 0 across the overlapping region.
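The fade-in/fade-out blend can be sketched in one dimension as follows (a minimal illustration on a single image row; the row values and overlap width are toy assumptions):

```python
import numpy as np

def feather_blend(row1, row2, overlap):
    """Blend two image rows whose last/first `overlap` pixels coincide,
    using a linear weight ramp across the overlap (fade-in/fade-out)."""
    alpha = np.linspace(1.0, 0.0, overlap)                  # weight of row1
    blended = alpha * row1[-overlap:] + (1 - alpha) * row2[:overlap]
    return np.concatenate([row1[:-overlap], blended, row2[overlap:]])

row1 = np.full(6, 100.0)    # left image row (constant intensity 100)
row2 = np.full(6, 200.0)    # right image row (constant intensity 200)
out = feather_blend(row1, row2, 3)
```

Inside the 3-pixel overlap the intensity ramps smoothly from 100 through 150 to 200, which is exactly the smooth transition the weighting function is meant to produce.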
The effectiveness and accuracy of the invention have been verified experimentally, achieving good stitching results. The experimental results are shown in Fig. 3: (a) shows four images from the video sequence, and (b) the panorama computed from these four images. The most prominent advantage of the invention is that the image is divided into regions with an SVM and the projective transformation (homography) is then applied region by region, which effectively avoids both the large error of traditional single-projection-matrix methods and the high time cost of multi-projection-matrix methods. The invention therefore has good prospects in technical fields such as virtual reality.

Claims (3)

1. A large-field-of-view video panorama stitching method based on a hybrid projection transformation model, characterized by comprising the following steps:
(1) down-sampling the video image sequence, ensuring an overlap of at least 1/3 between every two adjacent images;
(2) detecting SIFT feature points in the overlapping region of the first two sampled images to obtain a feature point set, and extracting a feature vector for each feature point;
(3) clustering the feature point set, dividing the image into three regions, denoted S_b, S_u and S_f, using a support vector machine (SVM), and matching the feature points of the corresponding regions S_b and S_f in I_1 and I_2;
(4) rejecting mismatched points among the feature points in S_b and S_f using the Random Sample Consensus (RANSAC) method, and computing the two-dimensional coordinate relations between the two images, i.e. the homography matrices, denoted H_b and H_f respectively;
(5) registering the first two sampled images, wherein the pixels of the three regions S_b, S_u and S_f are mapped with the hybrid projection transformation model to register the second image to the first;
(6) registering the third image to the first by iterating the homography matrices; repeating this step to register the subsequent images in the sequence; and finally fusing the images to complete the stitching of the entire panorama;
wherein in step (5), the hybrid projection transformation model is implemented as follows:
(51) the pixels of region S_b are registered with the homography matrix H_b, as in (3), wherein I_1 and I_2 respectively denote the first and second images, (x, y, 1)^T are the homogeneous coordinates of a pixel in I_2, and (x', y', 1)^T are its coordinates after projective transformation into I_1:
(x', y', 1)^T ∝ H_b (x, y, 1)^T (3)
(52) the pixels of region S_f are projectively transformed as in (4):
(x', y', 1)^T ∝ H_f (x, y, 1)^T (4)
(53) for the pixels of S_u, a homography matrix is defined:
H_ij = w_b H_b + w_f H_f (5)
where H_ij denotes the homography matrix of the pixel at (i, j), H_b and H_f denote the homography matrices of the two image regions S_b and S_f, and w_b and w_f are the weight coefficients of the homography matrices of S_b and S_f respectively;
the Manhattan distance from each pixel of S_u to the nearest feature point in the feature point sets of regions S_b and S_f is computed, denoted d_b and d_f respectively, and the weights are then:
w_b = d_f / (d_b + d_f) (6)
w_f = d_b / (d_b + d_f) (7)
the hybrid projection model is expressed as:
H(x) = { H_f, x ∈ foreground; H_ij, x ∈ undefined area; H_b, x ∈ background } (8)
where foreground is the foreground region S_f, undefined area is the indefinite region S_u, and background is the background region S_b.
2. The large-field-of-view video panorama stitching method based on a hybrid projection transformation model according to claim 1, characterized in that in step (3), the image regions are divided as follows:
(1) after clustering the feature points with the K-means method, the image is divided with the SVM method; the two lines tangent to the support vectors are respectively:
w^T x + b_1 = 0 (1)
w^T x + b_2 = 0 (2)
where x = (u, v)^T, w, b_1 and b_2 are the coefficients of the lines, u is the abscissa of a pixel, and v is its ordinate;
(2) the pixel region satisfying w^T x + b_1 < 0 is denoted S_b;
the region satisfying w^T x + b_2 > 0 is denoted S_f;
and the region satisfying w^T x + b_1 > 0 and w^T x + b_2 < 0 is denoted S_u, thereby dividing the image regions.
3. The large-field-of-view video panorama stitching method based on a hybrid projection transformation model according to claim 1, characterized in that in step (6), the third and subsequent images are registered as follows:
in the region of I_3 that does not overlap I_2, the registration is realized by the weighted homography-matrix iteration method,
where K is defined as the set of SIFT feature points in the overlapping region of I_2 and I_3, the weight λ_q is defined as λ_q = 1/d_Manh, with d_Manh the Manhattan distance between H_{3→2}(p) and the SIFT feature point q, and the λ_q are then normalized so that Σ λ_q = 1.
CN201711419418.6A 2017-12-25 2017-12-25 Large-field-of-view video panorama stitching method based on a hybrid projection transformation model Active CN108093188B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711419418.6A CN108093188B (en) 2017-12-25 2017-12-25 Large-field-of-view video panorama stitching method based on a hybrid projection transformation model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711419418.6A CN108093188B (en) 2017-12-25 2017-12-25 Large-field-of-view video panorama stitching method based on a hybrid projection transformation model

Publications (2)

Publication Number Publication Date
CN108093188A CN108093188A (en) 2018-05-29
CN108093188B true CN108093188B (en) 2019-01-25

Family

ID=62179005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711419418.6A Active CN108093188B (en) 2017-12-25 2017-12-25 Large-field-of-view video panorama stitching method based on a hybrid projection transformation model

Country Status (1)

Country Link
CN (1) CN108093188B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741289B (en) * 2019-01-25 2021-12-21 京东方科技集团股份有限公司 Image fusion method and VR equipment
CN110544202B (en) * 2019-05-13 2022-06-07 燕山大学 Parallax image splicing method and system based on template matching and feature clustering
CN110223250B (en) * 2019-06-02 2021-11-30 西安电子科技大学 SAR geometric correction method based on homography transformation
CN113593426B (en) * 2021-09-28 2021-12-03 卡莱特云科技股份有限公司 Module division method for LED spliced screen image and LED screen correction method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678687A (en) * 2015-12-29 2016-06-15 天津大学 Stereo image stitching method based on content of images
CN107423008A (en) * 2017-03-10 2017-12-01 北京市中视典数字科技有限公司 Real-time multi-camera picture fusion method and scene display device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886979B (en) * 2017-03-30 2020-10-20 深圳市未来媒体技术研究院 Image splicing device and image splicing method
CN107154022B (en) * 2017-05-10 2019-08-27 北京理工大学 Dynamic panorama stitching method suitable for a trailer

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678687A (en) * 2015-12-29 2016-06-15 天津大学 Stereo image stitching method based on content of images
CN107423008A (en) * 2017-03-10 2017-12-01 北京市中视典数字科技有限公司 Real-time multi-camera picture fusion method and scene display device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Constructing Image Panoramas using Dual-Homography Warping; Junhong Gao et al.; CVPR 2011; 2011-08-22; full text

Also Published As

Publication number Publication date
CN108093188A (en) 2018-05-29

Similar Documents

Publication Publication Date Title
Li et al. Parallax-tolerant image stitching based on robust elastic warping
CN108093188B (en) Large-field-of-view video panorama stitching method based on a hybrid projection transformation model
CN105957007B Image stitching method based on feature point plane similarity
Adel et al. Image stitching based on feature extraction techniques: a survey
CN105245841B Panoramic video monitoring system based on CUDA
Lin et al. Smoothly varying affine stitching
KR101643607B1 (en) Method and apparatus for generating of image data
CN104574311B (en) Image processing method and device
CN108734657B (en) Image splicing method with parallax processing capability
Yuan et al. Multiscale gigapixel video: A cross resolution image matching and warping approach
CN109313799B (en) Image processing method and apparatus
CN107316275A Optical-flow-assisted large-scale microscopic image mosaicing algorithm
Mistry et al. Image stitching using Harris feature detection
CN111553939B (en) Image registration algorithm of multi-view camera
CN110288511A Minimum-error stitching method and device based on dual-camera images, and electronic equipment
Vaghela et al. A review of image mosaicing techniques
Pulli et al. Mobile panoramic imaging system
CN109166075A Image stitching method for small overlapping regions
Xue et al. Fisheye distortion rectification from deep straight lines
CN112396558A (en) Image processing method, image processing apparatus, and computer-readable storage medium
Tseng et al. Depth image super-resolution via multi-frame registration and deep learning
Vaidya et al. The study of preprocessing and postprocessing techniques of image stitching
Ha et al. Embedded panoramic mosaic system using auto-shot interface
US20230097869A1 (en) Method and apparatus for enhancing texture details of images
Inzerillo Super-resolution images on mobile smartphone aimed at 3D modeling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant