Summary of the invention
In view of this, it is an object of the invention to provide a video sequence color image stitching method.
To this end, the present invention adopts the following scheme: a video sequence color image stitching method, characterized in that: based on the SURF algorithm, chromatic information is added to the vector describing the SURF feature points, so as to increase the accuracy of the feature-point description; bidirectional secondary feature-point matching is then adopted to achieve accurate matching; finally, the images to be stitched are joined using the bilinear interpolation principle, which improves the continuity and accuracy of the stitching.
In an embodiment of the present invention, the concrete steps are:
S01: extract the images to be stitched from a video;
S02: extract feature points from the extracted images;
S03: describe the extracted feature points to obtain feature vectors;
S04: perform bidirectional matching on the feature vectors;
S05: perform bilinear-interpolation reconstruction and stitching on the matched images.
In an embodiment of the present invention, the images to be stitched are two or more images of the same scene having an overlapping region.
In an embodiment of the present invention, step S02 is specifically:
S21: obtain the integral image of the entire image, to facilitate fast subsequent computations such as the scale space;
S22: detect feature points using the fast Hessian matrix.
In an embodiment of the present invention, after step S22 is completed, box filters of gradually increasing size are applied to the integral image to build the scale space of the image; the feature points are then thresholded: each point is compared in turn with a preset threshold, points below the threshold are eliminated, and points above the threshold are retained; each retained point is then compared with its 26 neighbours in the same scale layer and in the adjacent upper and lower scale layers, and the extreme points finally determined are the accurate feature points.
In an embodiment of the present invention, the feature vector comprises two parts: one is the 64-dimensional feature vector computed on the gray-scale image; the other is the chromatic information of the original color image at the feature-point position, superposed on it.
In an embodiment of the present invention, step S03 is specifically:
S31: extract the feature vector using the original SURF description method, obtaining a 64-dimensional feature vector;
S32: for each feature point obtained, search back in the color image to be stitched and find the color values of the point at the feature-point position in the original image, namely the red component value R, green component value G and blue component value B of the feature point;
S33: append these values to the end of the above 64-dimensional feature vector, finally obtaining a feature vector containing the chromatic information, i.e. a 67-dimensional feature vector.
In an embodiment of the present invention, after step S33 is completed, the above 67-dimensional feature vector v is normalized to obtain the normalized feature vector v' = v/||v||, where ||v|| is the modulus of the feature vector.
In an embodiment of the present invention, step S04 is specifically: for a feature point P1 on the image I1 to be stitched, find in the image I2 to be stitched the nearest neighbour P2nt and the second-nearest neighbour P2snt in Euclidean distance, record the corresponding Euclidean distances d1nt and d1snt, and compute the ratio T1 = d1nt/d1snt of the nearest-neighbour to second-nearest-neighbour distance; if T1 < T0, where T0 is a ratio constant, enter the second matching pass, otherwise eliminate the current feature point in image I1. The second matching pass is: for the feature point P2 in image I2, find conversely in image I1 the nearest neighbour P1nt and the second-nearest neighbour P1snt in Euclidean distance, record the corresponding distances d2nt and d2snt, and compute the ratio T2 = d2nt/d2snt; if T2 < T0, further judge whether the current match point is the point matched in the first pass: if P1nt is exactly the P1 matched in the first pass, then P1 and P2 are corresponding points and the matching pair P1-P2 is established; if P1nt is not the P1 matched in the first pass, then P1 and P2 are not corresponding points, and the current points in images I1 and I2 are both eliminated. The matching process then proceeds to the next pair of candidate points until all feature points on image I1 have been matched.
In an embodiment of the present invention, after step S05 is completed, final retouching and beautification are applied to the stitched image to obtain the final, satisfactory stitched image.
Compared with the prior art, the present invention has the following advantages:
1. Existing stitching-image sources are single discontinuous images or a single AVI-format video, and images derived from video are required to be continuous and adjacent; the video adopted by the present invention may be in various formats, giving broad applicability, and the extracted images need not be continuous — they only need to share a certain repeated scene, giving wide adaptability.
2. In existing image registration methods using the SURF algorithm, the feature vector describing a feature point carries only gray-scale information, so chromatic information is seriously lost; the present invention adds chromatic data on top of the feature vector extracted by the original SURF method, combining the original information of the image well and making image matching more accurate.
3. To increase matching accuracy, existing feature-point matching methods find the nearest match by Euclidean distance and then reject mismatches with the RANSAC method; the algorithm is complicated and difficult to implement. Because the feature vector of the present invention is superposed with chromatic information, Euclidean distance alone already matches accurately; to further increase matching accuracy, the ratio of nearest-neighbour to second-nearest-neighbour distance is used to reject ambiguous points, and a bidirectional matching process then improves accuracy. The whole process is easy to understand, and the algorithm is simple to implement.
Embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below through specific embodiments and the relevant drawings.
The present invention provides a video sequence color image stitching method, characterized in that: based on the SURF algorithm, chromatic information is added to the vector describing the SURF feature points, so as to increase the accuracy of the feature-point description; bidirectional secondary feature-point matching is then adopted to achieve accurate matching; finally, the images to be stitched are joined using the bilinear interpolation principle, which improves the continuity and accuracy of the stitching.
As shown in Figure 1, the flow of a video sequence color image stitching method of the present invention is: S01: extract the images to be stitched from a video (the images to be stitched are images I1 and I2); S02: extract feature points from the extracted images; S03: describe the extracted feature points to obtain feature vectors; S04: perform bidirectional matching on the feature vectors; S05: perform bilinear-interpolation reconstruction and stitching on the matched images.
First, image stitching requires two or more images of the same scene with an overlapping region. At present most such images are acquired by photographing the same scene repeatedly with a camera, obtaining multiple static images to be stitched. The image source of the present invention is a video file from which the images to be stitched are extracted, so the images share consistent attributes such as continuity and correlation, which greatly helps the subsequent registration operation. The present invention can extract images with repeated scenes from video files of various formats, including AVI, MPEG, WMV, MKV, etc.
After the images are obtained, the first operation is to compute the integral image, as shown in Figure 2. For each point (x, y) on the entire image to be stitched, the sum of all pixels from the origin (0, 0) to the point (x, y) is computed. This yields the integral image of the entire image, which facilitates fast subsequent computations such as the scale space.
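As a non-authoritative illustration of this step, the integral image and the constant-time box sum it enables can be sketched in NumPy as follows (the function names are illustrative, not taken from the patent):

```python
import numpy as np

def integral_image(img):
    """Integral image: entry (x, y) holds the sum of all pixels from the
    origin (0, 0) up to and including (x, y); computed as cumulative
    sums along both axes."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in the rectangle [r0..r1] x [c0..c1] using only
    four lookups into the integral image, regardless of box size."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

The constant-time box sum is what makes the box-filter responses of the later steps cheap at every filter size.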
Feature points are detected using the fast Hessian matrix. First, the box filters shown in Figure 3 are built, corresponding respectively to the horizontal x direction, the vertical y direction and the x-y direction; the box filters are applied to the integral image, and the response values obtained in the three directions are defined as Dxx, Dyy and Dxy, giving the Hessian matrix parameters.
The empirical formula det(H) ≈ Dxx·Dyy − (0.9·Dxy)² approximates the determinant of the Hessian; the value of the determinant at the image point (x, y) is then judged: if it is negative, the point is not an extreme point, i.e. not a feature point; if the determinant is positive, the point is an extreme point and is a feature point.
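A minimal sketch of this test, assuming the three box-filter responses are already available; the weight 0.9 is the value commonly used in SURF implementations to compensate for the box-filter approximation:

```python
def hessian_det_approx(dxx, dyy, dxy, w=0.9):
    # Approximate determinant of the Hessian from the three box-filter
    # responses Dxx, Dyy, Dxy; w compensates for the box-filter
    # approximation (0.9 is the weight commonly used in SURF).
    return dxx * dyy - (w * dxy) ** 2

# A point is kept as a candidate feature point only when the
# approximated determinant is positive; negative values are rejected.
```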
The feature points obtained from the determinant alone are coarse, and the next step is to locate them more accurately. As shown in Figure 4a, the box-filter sizes are 9×9, 15×15, 21×21, 27×27, and so on. As shown in Figure 4b, box filters of gradually increasing size are applied to the integral image to build the scale space of the image.
Then the feature points are thresholded: each point is compared in turn with a preset threshold, points below the threshold are eliminated and points above the threshold are retained; each retained point is then compared with its 26 neighbours in the same scale layer and in the adjacent upper and lower scale layers, as shown in Figure 5, and the extreme points finally determined are the accurate feature points.
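The threshold test plus 26-neighbour comparison amounts to a 3D non-maximum suppression over the stacked determinant responses. A minimal sketch, assuming the responses for adjacent scales have already been computed on a common grid (function name and layout are illustrative):

```python
import numpy as np

def local_maxima_3d(det_stack, threshold):
    """det_stack: Hessian-determinant responses stacked as
    (scale, height, width). A point survives if its response exceeds
    `threshold` and is strictly greater than all 26 neighbours in the
    same, previous and next scale layers."""
    s, h, w = det_stack.shape
    keypoints = []
    for k in range(1, s - 1):
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                v = det_stack[k, i, j]
                if v <= threshold:
                    continue  # eliminate points below the preset threshold
                cube = det_stack[k - 1:k + 2, i - 1:i + 2, j - 1:j + 2]
                # the 3x3x3 cube contains the point itself plus its
                # 26 neighbours; require a unique maximum
                if v >= cube.max() and (cube == v).sum() == 1:
                    keypoints.append((k, i, j))
    return keypoints
```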
After the accurately located feature points are obtained, the relevant information of each feature point must be determined, i.e. the feature point is described and its feature vector obtained. The feature vector in the present invention comprises two parts: one is the 64-dimensional feature vector computed on the gray-scale image; the other is the chromatic information of the original color image at the feature-point position, superposed on it.
First, the feature vector is extracted using the original SURF description method, in two steps:
First, determine the principal direction of the feature point. Centered on the feature point, the Haar wavelet responses in the x and y directions are computed within a radius of 6s around the feature point (s being the scale of the space where the feature point lies); the responses in the two directions are then Gaussian-weighted, and a coordinate system centered on the feature point is established. A 60-degree sector is taken as the accumulation region around the feature point, the Gaussian-weighted Haar responses within it are accumulated, and the direction of the maximum accumulated vector is the principal direction of the feature point; the 60-degree sector sweeps around the circular region centered on the feature point, as shown in Figures 6a-6c.
Second, compute the feature vector. As shown in Figure 7, a square window of size 20s × 20s is built around the feature point and rotated to the principal direction; the square window is then further divided into 4×4 = 16 subregions, each containing 5×5 = 25 sample points, and the Haar wavelet responses dx, dy in the x and y directions are computed in each subregion. The Haar responses are summed, and the feature vector of each subregion is computed as (Σdx, Σdy, Σ|dx|, Σ|dy|). Since there are 16 subregions in total and each subregion yields 4 values, 16 × 4 = 64 values are obtained in a square window; finally the subregion vectors are combined to obtain the 64-dimensional feature vector.
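A sketch of assembling the 64-dimensional vector, assuming the Haar responses for the 20×20 sample grid around the feature point are already computed (helper names are illustrative):

```python
import numpy as np

def subregion_vector(dx, dy):
    # dx, dy: Haar wavelet responses of the 5x5 = 25 sample points in
    # one subregion; each subregion contributes the 4 values
    # (sum dx, sum dy, sum |dx|, sum |dy|).
    return np.array([dx.sum(), dy.sum(), np.abs(dx).sum(), np.abs(dy).sum()])

def surf_descriptor(dx_grid, dy_grid):
    # dx_grid, dy_grid: 20x20 response grids around the feature point,
    # split into 4x4 = 16 subregions of 5x5 samples -> 16 * 4 = 64 values.
    vec = []
    for i in range(0, 20, 5):
        for j in range(0, 20, 5):
            vec.extend(subregion_vector(dx_grid[i:i + 5, j:j + 5],
                                        dy_grid[i:i + 5, j:j + 5]))
    return np.array(vec)
```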
Then, the above 64-dimensional vector is transformed so that it contains chromatic information.
The concrete treatment is: for each feature point obtained, search back in the color image to be stitched and find the color values of the point at the feature-point position in the original image to be stitched, namely the red component value R, green component value G and blue component value B of the feature point. After these values are found, they are appended to the end of the above 64-dimensional feature vector, finally giving a feature vector containing the chromatic information values; after the three RGB values are added, the final feature vector contains 67 values, i.e. a 67-dimensional feature vector is obtained.
To make the feature vector robust to factors such as rotation, scale and illumination, the above feature vector v is normalized to obtain the normalized feature vector v' = v/||v||, where ||v|| is the modulus of the feature vector.
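The two operations above — appending the RGB values and normalizing — can be sketched as follows (the function name and the raw 0..255 or 0..1 scaling of R, G, B are assumptions, not specified by the patent):

```python
import numpy as np

def color_descriptor(surf_vec64, r, g, b):
    # Append the R, G, B values of the feature-point position in the
    # original colour image to the 64-dimensional SURF vector, then
    # normalise the resulting 67-dimensional vector by its modulus.
    v = np.concatenate([surf_vec64, [r, g, b]]).astype(float)
    return v / np.linalg.norm(v)
```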
Once the feature points of the images to be stitched have been found and their information expressed in feature-vector form, the next operation can proceed, namely matching the feature points of the two images.
The present invention weighs matching accuracy against operational complexity, and designs a bidirectional matching scheme based on Euclidean-distance matching; the matching process is introduced below with reference to Figure 8.
First, as shown in Figure 8a, for a feature point P1 on the image I1 to be stitched, find in the image I2 to be stitched the points P2nt and P2snt with the smallest and second-smallest Euclidean distance (the nearest and second-nearest neighbours), record the corresponding Euclidean distances d1nt and d1snt, and compute the ratio T1 = d1nt/d1snt of the nearest-neighbour to second-nearest-neighbour distance; if T1 < T0 (T0 being a ratio constant), enter the second matching pass, otherwise eliminate the current feature point in image I1.
Next the second matching pass is carried out. As shown in Figure 8b, for the point P2 in the image I2 to be stitched, find conversely in the image I1 to be stitched the points P1nt and P1snt with the smallest and second-smallest Euclidean distance, record the corresponding Euclidean distances d2nt and d2snt, and compute the ratio T2 = d2nt/d2snt; if T2 < T0 (T0 being the ratio constant), judge whether the current match point is the point matched in the first pass: if P1nt is exactly the P1 matched in the first pass, then P1 and P2 are corresponding points and the matching pair P1-P2 is established; if P1nt is not the P1 matched in the first pass, then P1 and P2 are not corresponding points, and the current points in images I1 and I2 are both eliminated.
The matching process then moves to the next pair of candidate points, until all feature points on the image I1 to be stitched have been processed; at that point the feature points of the images to be stitched have all been matched, and the establishment of the matching feature-point pairs of the two images to be stitched is complete.
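The two passes above can be sketched as a ratio test applied in both directions, keeping only mutually consistent pairs (function names and the value T0 = 0.7 are illustrative assumptions):

```python
import numpy as np

def ratio_match(desc1, desc2, t0=0.7):
    """One-directional pass: for each vector in desc1, find its nearest
    and second-nearest neighbour in desc2 by Euclidean distance and keep
    the pair only if the distance ratio is below the constant t0."""
    matches = {}
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        nt, snt = np.argsort(dists)[:2]
        if dists[nt] / dists[snt] < t0:
            matches[i] = nt
    return matches

def bidirectional_match(desc1, desc2, t0=0.7):
    fwd = ratio_match(desc1, desc2, t0)  # first pass: I1 -> I2
    bwd = ratio_match(desc2, desc1, t0)  # second pass: I2 -> I1
    # Keep a pair P1-P2 only when the reverse match leads back to P1.
    return [(i, j) for i, j in fwd.items() if bwd.get(j) == i]
```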
Transformations between images include translation, rotation, scale transformation, projective transformation, etc. Because the image source of the present invention is continuous video, the transformation between images is essentially limited to translation, rotation and scaling, i.e. it belongs to the category of affine transformation. The image stitching method of the present invention therefore uses an affine transformation to estimate the transformation relation.
Assume a point P2(x2, y2) on the image I2 to be stitched maps under the affine transformation to the point P1(x1, y1) on the image I1 to be stitched; then the transformation relation P1 = H·P2 holds, where H is the affine transformation matrix, and this formula is generally written as:
[x1]   [h11  h12  h13] [x2]
[y1] = [h21  h22  h23] [y2]
[1 ]   [ 0    0    1 ] [1 ];
Therefore, only the 6 parameters of the affine transformation matrix H need to be obtained to determine the transformation relation between the two images. The present invention estimates the 6 parameters of the above matrix H by the least-squares method, using sampled feature points.
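A minimal sketch of this least-squares estimate: each matched point pair contributes two linear equations in the 6 unknowns, and the overdetermined system is solved in one call (function name is illustrative, not from the patent):

```python
import numpy as np

def estimate_affine(pts2, pts1):
    """Least-squares estimate of the 6 affine parameters mapping points
    in image I2 (pts2) to their matched points in image I1 (pts1).
    Each point pair (x2, y2) -> (x1, y1) gives two linear equations."""
    a_rows, b_rows = [], []
    for (x2, y2), (x1, y1) in zip(pts2, pts1):
        a_rows.append([x2, y2, 1, 0, 0, 0])   # x1 = h11*x2 + h12*y2 + h13
        a_rows.append([0, 0, 0, x2, y2, 1])   # y1 = h21*x2 + h22*y2 + h23
        b_rows.extend([x1, y1])
    params, *_ = np.linalg.lstsq(np.array(a_rows, float),
                                 np.array(b_rows, float), rcond=None)
    # Assemble the 3x3 matrix H with last row (0, 0, 1).
    return np.vstack([params.reshape(2, 3), [0.0, 0.0, 1.0]])
```

At least 3 non-collinear point pairs are needed; with more pairs the least-squares solution averages out matching noise.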
After the previous step is completed, the images can be stitched. To prevent discontinuity and mosaic artefacts in the stitched image and to minimize the stitching traces, the present invention stitches the images by bilinear interpolation. The mapped point computed from the affine transformation formula is not necessarily an integer point, so the bilinear interpolation method solves the non-integer-point problem well.
As shown in Figure 9, assume the point computed from the affine transformation is point P, and P is a non-integer point; the four nearest integer points at this position are A, B, C and D, with their pixel values represented by the line-segment heights in the figure. The pixel value of P can then be computed by two linear interpolations. First, by linear proportion, the value of R1 is computed from A and D, and the value of R2 from B and C. Then the value of P is computed proportionally from R1 and R2.
Because two linear interpolations are performed in this process, the method is called bilinear interpolation. A comparatively continuous stitched image can be obtained after bilinear interpolation.
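The two-stage interpolation can be sketched as follows (the function name is illustrative; boundary handling for points on the image edge is omitted):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Pixel value at a non-integer position (x, y), computed from the
    four nearest integer points by two linear interpolations: first
    along x on each of the two bracketing rows, then along y between
    the two intermediate results."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    r1 = (1 - fx) * img[y0, x0] + fx * img[y0, x1]  # along row y0
    r2 = (1 - fx) * img[y1, x0] + fx * img[y1, x1]  # along row y1
    return (1 - fy) * r1 + fy * r2                  # between r1 and r2
```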
After the interpolation and stitching are completed, the image may still exhibit problems such as exposure differences, slight misalignment and contrast differences, so final retouching and beautification (including exposure compensation, local and overall brightness adjustment, local and overall color grading, content replacement, etc.) are applied to the stitched image to obtain the final, satisfactory stitched image.
The preferred embodiments listed above further describe the objects, technical solutions and advantages of the present invention in detail. It should be understood that the foregoing is merely a preferred embodiment of the present invention and is not intended to limit the present invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.