CN103167247B - A video sequence color image stitching method - Google Patents

A video sequence color image stitching method

Info

Publication number
CN103167247B
CN103167247B CN201310118005.XA CN 103167247 B
Authority
CN
China
Prior art keywords
image
point
feature point
stitched
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310118005.XA
Other languages
Chinese (zh)
Other versions
CN103167247A (en)
Inventor
黄立勤
陈财淦
陈国栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuxin Futong Technology Co., Ltd
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN201310118005.XA priority Critical patent/CN103167247B/en
Publication of CN103167247A publication Critical patent/CN103167247A/en
Application granted Critical
Publication of CN103167247B publication Critical patent/CN103167247B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The present invention relates to a video sequence color image stitching method, characterized in that: based on the SURF algorithm, color information is added to the vector with which SURF describes a feature point, to improve the accuracy of the feature point description; bidirectional two-pass feature point matching is then adopted to achieve accurate matching; finally, the images to be stitched are joined using the principle of bilinear interpolation, improving the continuity and accuracy of the stitching. The present invention provides a registration method suited to color images, while adopting a simpler implementation in the matching process.

Description

A video sequence color image stitching method
Technical field
The present invention relates to computer vision and digital image processing, and in particular to a video sequence color image stitching method.
Background technology
Image stitching has long been a popular research topic in computer vision. So-called image stitching refers to the technique of joining two (or more) images of the same scene with overlapping regions (possibly taken at different times, from different viewpoints, or by different sensors) into a single high-resolution, wide-angle image. As is well known, the viewing angle of existing camera equipment is limited; to obtain a panoramic image with a wide viewing angle, the individual images captured by the camera must be stitched together.
Image registration and image fusion are the two key techniques of image stitching. Registration is the basis of fusion; the quality of registration determines the quality of fusion and ultimately the quality of the stitched result, so registration is the most important part of image stitching technology. The development of panoramic image stitching depends to a large extent on the development of image registration techniques. Since the 1960s, research on image registration has developed along three main lines, according to the character of the images: feature-based registration, intensity-correlation-based registration, and transform-domain registration. Among these, feature-based registration is the earliest studied and the most widely applicable.
Among current feature-based image registration methods, SURF is one of the better ones. However, most SURF-based registration methods work on grayscale images: the color image is first converted to grayscale, and then feature point detection, feature point matching, and the final stitching are all performed on the grayscale image.
Feature point matching commonly uses Euclidean distance together with RANSAC: one pass for matching and a second pass to remove mismatches, which involves many steps.
As noted above, current SURF-based registration methods all operate on grayscale images, and the conversion from color to grayscale inevitably loses information, so registration of color images suffers significant distortion, which in turn leads to inaccurate matching and stitching.
In addition, matching feature points with Euclidean distance plus RANSAC involves many steps, and the algorithm is complex and cumbersome to implement.
Summary of the invention
In view of this, the object of the present invention is to provide a video sequence color image stitching method.
The present invention is realized by the following scheme: a video sequence color image stitching method, characterized in that: based on the SURF algorithm, color information is added to the vector with which SURF describes a feature point, to improve the accuracy of the feature point description; bidirectional two-pass feature point matching is then adopted to achieve accurate matching; finally, the images to be stitched are joined using the principle of bilinear interpolation, improving the continuity and accuracy of the stitching.
In an embodiment of the present invention, the concrete steps are:
S01: extract the images to be stitched from the video;
S02: extract feature points from the extracted images;
S03: describe the extracted feature points to obtain feature vectors;
S04: perform bidirectional matching on the feature vectors;
S05: stitch the matched images by bilinear interpolation reconstruction.
In an embodiment of the present invention, the images to be stitched are two or more images of the same scene having overlapping regions.
In an embodiment of the present invention, step S02 is specifically:
S21: compute the integral image of the entire image, to facilitate fast computation in the subsequent scale space and elsewhere;
S22: detect feature points using the fast Hessian matrix.
In an embodiment of the present invention, after step S22 is completed, box filters of gradually increasing size are applied to the integral image to build the scale space of the image. The feature points are then thresholded: each is compared in turn with a preset threshold and the points below the threshold are discarded, finally leaving the points above the threshold; these points are then compared with their 26 neighbours in the same layer of the scale space and the layers above and below, and the extreme points finally determined are the accurate feature points.
In an embodiment of the present invention, the feature vector comprises two parts: first, the 64-dimensional feature vector from the grayscale image; second, the color information of the original color image at the feature point location, superimposed on the vector.
In an embodiment of the present invention, step S03 is specifically:
S31: extract the feature vector with the original SURF descriptor, obtaining the 64-dimensional feature vector v = [i1, i2, i3, …, i64];
S32: for each feature point obtained, search back in the color image to be stitched and find the color values of the point at the feature point location in the original image, i.e. the feature point's red component value R, green component value G and blue component value B;
S33: after finding the above values, append them to the end of the above 64-dimensional feature vector, finally obtaining the feature vector containing the color information, i.e. a 67-dimensional feature vector.
In an embodiment of the present invention, after step S33 is completed, the above 67-dimensional feature vector v_final is normalized, obtaining the normalized feature vector v̄_final = [i1/|v_final|, i2/|v_final|, …, i64/|v_final|, R/|v_final|, G/|v_final|, B/|v_final|], where |v_final| is the norm of the feature vector.
In an embodiment of the present invention, step S04 is specifically: for a feature point P1 on image I1 to be stitched, find in image I2 to be stitched the nearest and second-nearest neighbours P2nt and P2snt by Euclidean distance, record the corresponding Euclidean distances d1nt and d1snt, and use the formula T1 = d1nt / d1snt to compute the ratio of the nearest to the second-nearest distance; if T1 < T0, where T0 is a ratio constant, proceed to the second matching pass, otherwise discard the current feature point in image I1 to be stitched. The second matching pass is: for the feature point P2 in image I2 to be stitched, find conversely in image I1 to be stitched the nearest and second-nearest neighbours P1nt and P1snt by Euclidean distance, record the corresponding Euclidean distances d2nt and d2snt, and use the formula T2 = d2nt / d2snt to compute the ratio of the nearest to the second-nearest distance; if T2 < T0, further check whether the current match is the point matched in the first pass: if P1nt is exactly the P1 matched in the first pass, then P1 and P2 are corresponding points and the matched pair P1-P2 is established; if P1nt is not the P1 matched in the first pass, then P1 and P2 are not corresponding points and the current points are discarded from images I1 and I2 at the same time. Then proceed to the matching of the next corresponding pair, until all feature points on image I1 have been matched.
In an embodiment of the present invention, after step S05 is completed, the stitched image is given a final retouching and beautification to obtain a satisfactory final stitched image.
Compared with the prior art, the present invention has the following advantages:
1. Existing stitching sources are either discrete single images or a single AVI-format video whose extracted frames must be continuous and adjacent; the video the present invention adopts can come in a variety of formats, so its applicability is broad, and the extracted images need not be continuous, needing only to share some repeated scene content, giving wide adaptability;
2. In existing registration methods using SURF, the feature vector describing each feature point carries only grayscale information, so color information is severely lost; the present invention adds color data on top of the feature vector extracted by the original SURF method, making good use of the image's original information and making image matching more accurate;
3. To improve matching accuracy, existing feature point matching methods use Euclidean distance to find the nearest match and then reject mismatches with RANSAC; the algorithm is complex and difficult to implement. Because the feature vectors of the present invention incorporate color information, Euclidean distance alone already matches accurately; to further improve accuracy, ambiguous points are rejected by the nearest-to-second-nearest ratio test, and a bidirectional matching procedure improves accuracy further. The whole process is easy to understand, and the algorithm is simple to implement.
Accompanying drawing explanation
Fig. 1 is a flowchart of the video sequence color image stitching method of the present invention;
Fig. 2 is a schematic diagram of the integral image;
Fig. 3 is a schematic diagram of the box filters;
Fig. 4a-4b show the construction of the scale space;
Fig. 5 is a schematic diagram of the 26-neighbourhood of a feature point;
Fig. 6a-6c are schematic diagrams of finding the principal orientation of a feature point;
Fig. 7 is a schematic diagram of the oriented window describing a feature point;
Fig. 8a-8b are schematic diagrams of the feature point matching process;
Fig. 9 is a schematic diagram of bilinear interpolation.
Embodiment
To make the object, technical scheme and advantages of the present invention clearer, the present invention is described in further detail below through specific embodiments and the accompanying drawings.
The present invention provides a video sequence color image stitching method, characterized in that: based on the SURF algorithm, color information is added to the vector with which SURF describes a feature point, to improve the accuracy of the feature point description; bidirectional two-pass feature point matching is then adopted to achieve accurate matching; finally, the images to be stitched are joined using the principle of bilinear interpolation, improving the continuity and accuracy of the stitching.
As shown in Fig. 1, the flow of the video sequence color image stitching method of the present invention is: S01: extract the images to be stitched (images I1 and I2) from the video; S02: extract feature points from the extracted images; S03: describe the extracted feature points to obtain feature vectors; S04: perform bidirectional matching on the feature vectors; S05: stitch the matched images by bilinear interpolation reconstruction.
First, image stitching requires two or more images of the same scene with overlapping regions. At present most images are acquired by photographing the same scene repeatedly with a camera, yielding multiple static images to be stitched. The image source of the present invention is a video file from which the images to be stitched are extracted, so the images share consistent attributes such as continuity and correlation, which greatly helps the subsequent registration. The present invention can extract images with repeated scene content from video files of various formats, including AVI, MPEG, WMV, MKV, etc.
After the images are obtained, the first step is to compute the integral image, as shown in Fig. 2. For each point (x, y) on the image to be stitched, compute the sum of the pixels in the rectangle from the origin (0, 0) to (x, y). This yields the integral image of the entire image, which facilitates fast computation in the subsequent scale space and elsewhere.
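As a concrete illustration, the integral image and the constant-time box sums it enables can be sketched in NumPy (a minimal sketch; the function names are ours, not the patent's):

```python
import numpy as np

def integral_image(img):
    """ii[y, x] holds the sum of all pixels in the rectangle
    from (0, 0) to (x, y) inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1+1, x0:x1+1] in O(1) via four integral-image lookups."""
    s = ii[y1, x1]
    if y0 > 0:
        s -= ii[y0 - 1, x1]
    if x0 > 0:
        s -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        s += ii[y0 - 1, x0 - 1]
    return s
```

Every box-filter response used below reduces to a handful of such lookups, which is why the filter size can grow without increasing the cost per pixel.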
Feature points are detected with the fast Hessian matrix. First, box filters as shown in Fig. 3 are constructed, corresponding respectively to the horizontal x direction, the vertical y direction, and the x-y direction. The box filters are applied to the integral image, and the responses in the three directions are denoted Dxx, Dyy and Dxy, giving the entries of the Hessian matrix.
Using the empirical formula det(H_approx) = Dxx·Dyy − (0.9·Dxy)², the determinant of the Hessian at image point (x, y) is approximated. If the determinant is negative, the point is not an extremum and hence not a feature point; if the determinant is positive, the point is an extremum and is a feature point.
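A minimal sketch of this response map, assuming the filter responses Dxx, Dyy and Dxy have already been computed; the 0.9 weight is the value used in the original SURF formulation:

```python
import numpy as np

def hessian_response(Dxx, Dyy, Dxy, w=0.9):
    """Approximate det(H) at every pixel. The weight w compensates for
    the box-filter approximation of the Gaussian second derivatives.
    A positive response marks a candidate feature point; a negative
    response rules the point out."""
    return Dxx * Dyy - (w * Dxy) ** 2
```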
The feature points obtained from the determinant alone are coarse; next they must be localized more accurately. As shown in Fig. 4a, the box filter sizes are 9×9, 15×15, 21×21, 27×27, and so on. As shown in Fig. 4b, box filters of gradually increasing size are applied to the integral image to build the scale space of the image.
Then the feature points are thresholded: each is compared in turn with a preset threshold and the points below the threshold are discarded, finally leaving the points above the threshold. These points are then compared with their 26 neighbours in the same layer of the scale space and the layers above and below, as shown in Fig. 5, and the extreme points finally determined are the accurate feature points.
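The threshold plus 26-neighbourhood extremum test can be sketched as follows, assuming the responses of adjacent scale layers are stacked into one 3-D array (function name and layout are our own):

```python
import numpy as np

def is_local_extremum(responses, s, y, x, threshold):
    """responses: 3-D array of Hessian responses stacked as (scale, row, col).
    A point is kept only if its response exceeds the threshold and is strictly
    greater than all 26 neighbours in the 3x3x3 cube spanning its own scale
    layer and the layers above and below."""
    v = responses[s, y, x]
    if v <= threshold:
        return False
    cube = responses[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2].copy()
    cube[1, 1, 1] = -np.inf  # exclude the point itself from the comparison
    return bool(v > cube.max())
```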
After the accurately localized feature points are obtained, the relevant information of each feature point must be determined, i.e. the feature point must be described, yielding its feature vector. The feature vector in the present invention comprises two parts: first, the 64-dimensional feature vector from the grayscale image; second, the color information of the original color image at the feature point location, superimposed on the vector.
First, the feature vector is extracted with the original SURF descriptor, in two steps:
First, determine the principal orientation of the feature point. Centered on the feature point, compute the Haar wavelet responses in the x and y directions within a circular neighbourhood of radius 6s around it (where s is the scale of the layer in which the feature point lies), weight the responses in the two directions with a Gaussian, and set up a coordinate system centered on the feature point. Then, sweeping a 60-degree sector over the circular region centered on the feature point, accumulate the Gaussian-weighted Haar responses within the sector; the orientation of the largest accumulated vector is the principal orientation of the feature point, as shown in Fig. 6a-6c.
Second, compute the feature vector. As shown in Fig. 7, a square window of size 20s is set up around the feature point and rotated to the principal orientation. The window is then further divided into 4×4 = 16 subregions, each containing 5×5 = 25 sample points, and in each subregion the Haar wavelet responses dx and dy in the x and y directions are computed. Summing the responses with the formula v_sub = [Σdx, Σdy, Σ|dx|, Σ|dy|] gives the feature vector of each subregion. With 16 subregions and 4 values per subregion, a window yields 16×4 = 64 values in total; concatenating the subregion vectors gives the 64-dimensional feature vector v = [i1, i2, i3, …, i64].
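A simplified sketch of assembling the 64-dimensional descriptor, assuming the Haar responses dx and dy have already been sampled on a 20×20 grid inside the oriented window (rotation and Gaussian weighting are omitted here for brevity):

```python
import numpy as np

def surf_descriptor_64(dx, dy):
    """dx, dy: 20x20 grids of Haar responses sampled in the oriented window.
    Split into 4x4 subregions of 5x5 samples; each subregion contributes
    [sum dx, sum dy, sum |dx|, sum |dy|], giving 16 * 4 = 64 values."""
    feats = []
    for by in range(4):
        for bx in range(4):
            sx = dx[5 * by:5 * by + 5, 5 * bx:5 * bx + 5]
            sy = dy[5 * by:5 * by + 5, 5 * bx:5 * bx + 5]
            feats += [sx.sum(), sy.sum(), np.abs(sx).sum(), np.abs(sy).sum()]
    return np.array(feats)
```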
Then the above 64-dimensional vector is transformed so that it contains color information.
The concrete processing is: for each feature point obtained, search back in the color image to be stitched and find the color values of the point at the feature point location in the original image to be stitched, i.e. the feature point's red component value R, green component value G and blue component value B. These values are appended to the end of the above 64-dimensional feature vector, finally yielding a feature vector containing the color information; with the three R, G, B values added, the final feature vector contains 67 values, i.e. a 67-dimensional feature vector.
To make the feature vector robust to factors such as rotation, scale and illumination, it is normalized, obtaining the normalized feature vector v̄_final = [i1/|v_final|, i2/|v_final|, …, i64/|v_final|, R/|v_final|, G/|v_final|, B/|v_final|], where |v_final| is the norm of the feature vector.
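Steps S32-S33 and the normalization can be sketched together (a hypothetical helper, not the patent's code; v64 is the 64-dimensional SURF vector and r, g, b the color values read back from the original color image):

```python
import numpy as np

def color_descriptor(v64, r, g, b):
    """Append the R, G, B values at the feature point location to the 64-d
    SURF vector, then normalize the resulting 67-d vector to unit length."""
    v = np.concatenate([v64, [r, g, b]]).astype(float)
    return v / np.linalg.norm(v)
```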
Once the feature points of the images to be stitched have been found and their information expressed as feature vectors, the next step can begin, namely matching the feature points of the two images.
The present invention balances matching accuracy against operational complexity, and designs a bidirectional matching scheme based on Euclidean distance; the matching process is introduced below with reference to Fig. 8.
First, as shown in Fig. 8a, for a feature point P1 on image I1 to be stitched, find in image I2 to be stitched the nearest and second-nearest neighbours P2nt and P2snt by Euclidean distance, and record the corresponding Euclidean distances d1nt and d1snt. Using the formula T1 = d1nt / d1snt, compute the ratio of the nearest to the second-nearest distance; if T1 < T0 (T0 is a ratio constant), proceed to the second matching pass, otherwise discard the current feature point in image I1.
Next the second matching pass is carried out. As shown in Fig. 8b, for the point P2 in image I2 to be stitched, find conversely in image I1 to be stitched the nearest and second-nearest neighbours P1nt and P1snt by Euclidean distance, and record the corresponding Euclidean distances d2nt and d2snt. Using the formula T2 = d2nt / d2snt, compute the ratio of the nearest to the second-nearest distance; if T2 < T0 (T0 is a ratio constant), check whether the current match is the point matched in the first pass: if P1nt is exactly the P1 matched in the first pass, then P1 and P2 are corresponding points and the matched pair P1-P2 is established; if P1nt is not the P1 matched in the first pass, then P1 and P2 are not corresponding points and the current points are discarded from images I1 and I2 at the same time.
Then the matching of the next corresponding pair proceeds, until all feature points on image I1 to be stitched have been matched; at that point all feature points of the images to be stitched have been matched, completing the establishment of the matched feature point pairs of the two images to be stitched.
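The bidirectional two-pass ratio-test matching described above can be sketched as follows (brute-force nearest-neighbour search over the 67-d descriptors; T0 = 0.8 is an assumed ratio constant, since the patent does not fix its value):

```python
import numpy as np

def two_nearest(q, feats):
    """Index of the nearest feature vector in feats to query q, plus the
    nearest and second-nearest Euclidean distances."""
    d = np.linalg.norm(feats - q, axis=1)
    i1, i2 = np.argsort(d)[:2]
    return i1, d[i1], d[i2]

def bidirectional_match(f1, f2, t0=0.8):
    """First pass: ratio test from I1 to I2. Second pass: ratio test from I2
    back to I1, keeping a pair only if the reverse nearest neighbour is the
    original point (cross-check)."""
    pairs = []
    for p1, q in enumerate(f1):
        p2, d1nt, d1snt = two_nearest(q, f2)
        if d1nt / d1snt >= t0:          # ambiguous match: discard P1
            continue
        p1r, d2nt, d2snt = two_nearest(f2[p2], f1)
        if d2nt / d2snt < t0 and p1r == p1:
            pairs.append((p1, p2))      # establish matched pair P1-P2
    return pairs
```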
Transformations between images include translation, rotation, scaling, projective transformation, etc. Because the image source of the present invention is continuous video, the transformation between images is essentially limited to translation, rotation and scaling, i.e. it falls within the category of affine transformation. The stitching method of the present invention therefore uses an affine transformation to estimate the relationship.
Suppose a point P2 (x2, y2) on image I2 to be stitched maps under the affine transformation to a point P1 (x1, y1) on image I1 to be stitched; then the transformation relation is P1 = H·P2, where H is the affine transformation matrix. This formula is generally written out as:

x1 = h11·x2 + h12·y2 + h13, y1 = h21·x2 + h22·y2 + h23;
Therefore only the 6 parameters of the affine transformation matrix H need to be found to obtain the transformation between the two images. The present invention estimates the 6 parameters of H by the method of least squares, using sampled feature points.
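The least-squares estimation of the six affine parameters can be sketched as follows (a minimal sketch; each matched pair contributes two linear equations in the six unknowns):

```python
import numpy as np

def fit_affine(pts2, pts1):
    """Solve for the 6 affine parameters mapping pts2 (in I2) to pts1 (in I1)
    by least squares. Returns H as a 2x3 matrix:
    [x1, y1]^T = H[:, :2] @ [x2, y2]^T + H[:, 2]."""
    n = len(pts2)
    A = np.zeros((2 * n, 6))
    b = np.empty(2 * n)
    for i, ((x2, y2), (x1, y1)) in enumerate(zip(pts2, pts1)):
        A[2 * i]     = [x2, y2, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x2, y2, 1]
        b[2 * i], b[2 * i + 1] = x1, y1
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return h.reshape(2, 3)
```

At least three non-collinear matched pairs are needed; using more pairs lets the least-squares fit average out small localization errors.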
After the previous step, the images can be stitched. To prevent discontinuities and mosaic artifacts in the stitched image and minimize the stitching seam, the present invention stitches the images by bilinear interpolation. The mapped point computed with the affine transformation formula is not necessarily an integer point, and bilinear interpolation handles this non-integer-point problem well.
As shown in Fig. 9, suppose the point computed from the affine transformation is P, and P is not an integer point; the four nearest integer points at this position are A, B, C and D, with corresponding pixel values shown as the segment heights in the figure. The pixel value at P can then be computed linearly in two steps. First, by linear proportion, the value at R1 is computed from A and D, and the value at R2 from B and C. Then the value at P is computed proportionally from R1 and R2.
Since two linear evaluations are performed in the process, the method is called bilinear interpolation. After bilinear interpolation, a comparatively continuous stitched image is obtained.
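A minimal sketch of the bilinear evaluation for one non-integer point (edge clamping is our own simplification):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Pixel value at a non-integer (x, y): interpolate along x between the
    two upper neighbours and the two lower neighbours, then along y between
    those two intermediate values -- two linear evaluations, hence
    'bilinear'."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot
```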
After the interpolation and stitching are completed, the image may still have problems such as exposure differences, slight misalignment and contrast differences, so the stitched image is given a final retouching and beautification (including exposure compensation, local and global brightness adjustment, local and global color grading, content replacement, etc.) to obtain a satisfactory final stitched image.
The preferred embodiments listed above further describe the object, technical solutions and advantages of the present invention in detail. It should be understood that the foregoing are only preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (7)

1. A video sequence color image stitching method, characterized in that: based on the SURF algorithm, color information is added to the vector with which SURF describes a feature point, to improve the accuracy of the feature point description; bidirectional two-pass feature point matching is then adopted to achieve accurate matching; finally, the images to be stitched are joined using the principle of bilinear interpolation, improving the continuity and accuracy of the stitching;
The concrete steps are:
S01: extract the images to be stitched from the video;
S02: extract feature points from the extracted images;
S03: describe the extracted feature points to obtain feature vectors;
S04: perform bidirectional matching on the feature vectors;
S05: stitch the matched images by bilinear interpolation reconstruction;
The images to be stitched are two or more images of the same scene having overlapping regions;
Step S04 is specifically: for a feature point P1 on image I1 to be stitched, find in image I2 to be stitched the nearest and second-nearest neighbours P2nt and P2snt by Euclidean distance, record the corresponding Euclidean distances d1nt and d1snt, and use the formula T1 = d1nt / d1snt to compute the ratio of the nearest to the second-nearest distance; if T1 < T0, where T0 is a ratio constant, proceed to the second matching pass, otherwise discard the current feature point in image I1 to be stitched; the second matching pass is: for the feature point P2 in image I2 to be stitched, find conversely in image I1 to be stitched the nearest and second-nearest neighbours P1nt and P1snt by Euclidean distance, record the corresponding Euclidean distances d2nt and d2snt, and use the formula T2 = d2nt / d2snt to compute the ratio of the nearest to the second-nearest distance; if T2 < T0, further check whether the current match is the point matched in the first pass, i.e. if P1nt is exactly the P1 matched in the first pass, then P1 and P2 are corresponding points and the matched pair P1-P2 is established; if P1nt is not the P1 matched in the first pass, then P1 and P2 are not corresponding points and the current points are discarded from images I1 and I2 at the same time; then proceed to the matching of the next corresponding pair, until all feature points on image I1 have been matched.
2. The video sequence color image stitching method according to claim 1, characterized in that step S02 is specifically:
S21: compute the integral image of the entire image, to facilitate fast computation in the subsequent scale space;
S22: detect feature points using the fast Hessian matrix.
3. The video sequence color image stitching method according to claim 2, characterized in that: after step S22 is completed, box filters of gradually increasing size are applied to the integral image to build the scale space of the image; the feature points are then thresholded, each being compared in turn with a preset threshold and the points below the threshold discarded in turn, finally leaving the points above the threshold; these points are then compared with their 26 neighbours in the same layer of the scale space and the layers above and below, and the extreme points finally determined are the accurate feature points.
4. The video sequence color image stitching method according to claim 1, characterized in that the feature vector comprises two parts: first, the 64-dimensional feature vector from the grayscale image; second, the color information of the original color image at the feature point location, superimposed on the vector.
5. The video sequence color image stitching method according to claim 1, characterized in that step S03 is specifically:
S31: extract the feature vector with the original SURF descriptor, obtaining the 64-dimensional feature vector v = [i1, i2, i3, …, i64];
S32: for each feature point obtained, search back in the color image to be stitched and find the color values of the point at the feature point location in the original image, i.e. the feature point's red component value R, green component value G and blue component value B;
S33: after finding the above values, append them to the end of the above 64-dimensional feature vector, finally obtaining the feature vector containing the color information, v_final = [i1, i2, i3, …, i64, R, G, B], i.e. a 67-dimensional feature vector.
6. The video sequence color image stitching method according to claim 5, characterized in that: after step S33 is completed, the above 67-dimensional feature vector v_final is normalized, obtaining the normalized feature vector v̄_final = [i1/|v_final|, i2/|v_final|, i3/|v_final|, …, i64/|v_final|, R/|v_final|, G/|v_final|, B/|v_final|], where |v_final| is the norm of the feature vector.
7. The video sequence color image stitching method according to claim 1, characterized in that: after step S05 is completed, the stitched image is given a final retouching and beautification to obtain a satisfactory final stitched image.
CN201310118005.XA 2013-04-08 2013-04-08 A video sequence color image stitching method Active CN103167247B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310118005.XA CN103167247B (en) 2013-04-08 2013-04-08 A video sequence color image stitching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310118005.XA CN103167247B (en) 2013-04-08 2013-04-08 A video sequence color image stitching method

Publications (2)

Publication Number Publication Date
CN103167247A CN103167247A (en) 2013-06-19
CN103167247B true CN103167247B (en) 2016-06-01

Family

ID=48589957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310118005.XA Active CN103167247B (en) 2013-04-08 2013-04-08 A video sequence color image stitching method

Country Status (1)

Country Link
CN (1) CN103167247B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408741A (en) * 2014-10-27 2015-03-11 大连理工大学 A video global motion estimation method with sequential consistency constraint
CN106056567A (en) * 2016-07-22 2016-10-26 珠海医凯电子科技有限公司 Supersonic wave space compound imaging method
CN107909033A (en) * 2017-11-15 2018-04-13 西安交通大学 Suspect's fast track method based on monitor video
CN108322728B (en) * 2018-02-07 2019-10-15 盎锐(上海)信息科技有限公司 Computer and model generating method with scanning function
CN110490268A (en) * 2019-08-26 2019-11-22 山东浪潮人工智能研究院有限公司 A kind of feature matching method of the improvement nearest neighbor distance ratio based on cosine similarity

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102006425A (en) * 2010-12-13 2011-04-06 交通运输部公路科学研究所 Method for splicing video in real time based on multiple cameras

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102006425A (en) * 2010-12-13 2011-04-06 交通运输部公路科学研究所 Method for splicing video in real time based on multiple cameras

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Distinctive Image Features from Scale-Invariant Keypoints;David G. Lowe;《International Journal of Computer Vision,2004》;20040105;entire document *
SURF Algorithm with Color and Global Characteristics;Hyunsup Yoon,et al.;《ICROS-SICE International Joint Conference 2009》;20090821;Section 2 *
SURF-based color image registration;Shi Yasun, et al.;《Infrared Technology》;20100731;Vol. 32 (No. 07);entire document *
Color image registration technique based on extended SURF descriptors;Liu Xue, et al.;《Application Research of Computers》;20110331;Vol. 28 (No. 03);preface, Sections 1-3 *
Research on feature-based color image registration techniques;Liu Xue;《Master's Thesis Database》;20101201;Chapter 5 *

Also Published As

Publication number Publication date
CN103167247A (en) 2013-06-19

Similar Documents

Publication Publication Date Title
US11823363B2 (en) Infrared and visible light fusion method
CN110020985B (en) Video stitching system and method of binocular robot
CN111062905B (en) Infrared and visible light fusion method based on saliency map enhancement
CN104376548B (en) A kind of quick joining method of image based on modified SURF algorithm
CN110660023B (en) Video stitching method based on image semantic segmentation
CN107945113B (en) The antidote of topography&#39;s splicing dislocation
CN104463778B (en) A kind of Panoramagram generation method
CN103167247B (en) A kind of video sequence color image joining method
CN103226822B (en) Medical imaging joining method
CN104732482A (en) Multi-resolution image stitching method based on control points
CN108171735B (en) Billion pixel video alignment method and system based on deep learning
CN104240211A (en) Image brightness and color balancing method and system for video stitching
CN103632359A (en) Super-resolution processing method for videos
CN105069749B (en) A kind of joining method of tire-mold image
CN104392416A (en) Video stitching method for sports scene
CN112929626B (en) Three-dimensional information extraction method based on smartphone image
CN105488777A (en) System and method for generating panoramic picture in real time based on moving foreground
CN106856000A (en) A kind of vehicle-mounted panoramic image seamless splicing processing method and system
Rathnayaka et al. An efficient calibration method for a stereo camera system with heterogeneous lenses using an embedded checkerboard pattern
CN105550981A (en) Image registration and splicing method on the basis of Lucas-Kanade algorithm
CN105096287A (en) Improved multi-time Poisson image fusion method
CN105894443A (en) Method for splicing videos in real time based on SURF (Speeded UP Robust Features) algorithm
CN106780326A (en) A kind of fusion method for improving panoramic picture definition
CN106780309A (en) A kind of diameter radar image joining method
CN104077764A (en) Panorama synthetic method based on image mosaic

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20191014

Address after: 350002, 15-16 floor, technology transfer center building, 611 Industrial Road, Gulou District, Fuzhou, Fujian.

Patentee after: FUJIAN FORTUNETONE NETWORK TECHNOLOGY CO., LTD.

Address before: Minhou County of Fuzhou City, Fujian province 350108 Street Town Road No. 2 University City School District of Fuzhou University

Patentee before: Fuzhou University

CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: 350000 Floor 15-16 of Technology Transfer Center Building of Strait No. 611 Industrial Road, Gulou District, Fuzhou City, Fujian Province

Patentee after: Fuxin Futong Technology Co., Ltd

Address before: 350002, 15-16 floor, technology transfer center building, 611 Industrial Road, Gulou District, Fuzhou, Fujian.

Patentee before: FUJIAN FORTUNETONE NETWORK TECHNOLOGY Co.,Ltd.