CN111260555A - Improved image splicing method based on SURF - Google Patents
- Publication number
- CN111260555A (application number CN202010040996.4A)
- Authority
- CN
- China
- Prior art keywords
- point
- points
- image
- pairs
- distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/464—Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
Abstract
The invention relates to an improved SURF-based image stitching method, belonging to the technical field of image processing. The method comprises the following steps: capturing frames from the videos shot by two surveillance cameras; suitably preprocessing the two frames; extracting and matching feature points of the images; rejecting mismatched feature point pairs; calculating the distance of each matched feature point pair; limiting the distances of the feature point pairs according to the maximum and minimum distances and rejecting pairs whose distance is too large; calculating a homography matrix from the remaining feature point pairs; and transforming the two images into the same coordinate system through the homography matrix, then stitching and fusing them. By limiting the feature point distances, the invention solves the problem that frames captured from real-time surveillance video contain misidentified feature points that degrade the final stitching result.
Description
Technical Field
The invention relates to an improved image stitching method based on SURF (Speeded-Up Robust Features), and belongs to the technical field of image processing.
Background
With the continuous development of society, public environments have become increasingly complex. For a complex environment, several network cameras are usually installed at a given location to monitor scenes over different viewing angles in real time. Although the captured field of view is larger, whether shown on several displays or on one split-screen display, the videos are fragmented and lack a sense of unity. In addition, this approach requires considerable human resources: manual monitoring cannot attend to every scene at once, and it is difficult for an observer to stay focused for long periods, so the situation on site cannot be grasped accurately in real time, which in turn makes it hard to effectively reduce or prevent accidents.
Image stitching has therefore become a popular research field: a series of spatially overlapping images is combined into one seamless image whose field of view is larger than that of any single image, greatly improving the ability to recognize, perceive and monitor the surrounding environment and objects. Existing image stitching methods generally combine the SURF algorithm with the RANSAC algorithm. In some complex cases, however, the stitching quality is difficult to guarantee.
Disclosure of Invention
The technical problem to be solved by the invention is achieving better image stitching.
In order to solve the above problems, the technical solution of the present invention is to provide an improved image stitching method based on SURF, which includes the following steps:
step 1: capturing frames from the videos shot by the two cameras;
step 2: preprocessing the two frames;
step 3: extracting and matching the feature points of the images;
step 4: rejecting mismatched feature point pairs;
step 5: calculating the distance of each matched feature point pair;
step 6: limiting the distances of the feature point pairs according to the maximum and minimum distances, and rejecting pairs whose distance is too large;
step 7: calculating a homography matrix from the remaining feature point pairs;
step 8: transforming the two images into the same coordinate system through the homography matrix, and stitching and fusing the two images.
Preferably, said step 2 comprises distortion correction of the image.
Preferably, the step 3 uses the SURF algorithm to extract and match the feature points, and the step 3 includes:
step 3-1: constructing a Hessian matrix to generate all interest points:
for a point with pixel intensity l(x, y) in the image, at scale σ the Hessian matrix is computed as:

H(x, σ) = | l_xx(x, σ)  l_xy(x, σ) |
          | l_xy(x, σ)  l_yy(x, σ) |

where l_xx(x, σ) is the result of convolving the second-order Gaussian derivative ∂²g(σ)/∂x² with the image l(x, y) at the point, and l_xy(x, σ) and l_yy(x, σ) are defined analogously;

an approximation of the Hessian matrix determinant is obtained for each pixel:

det(H(x, σ)) = l_xx(x, σ)l_yy(x, σ) - (0.9 l_xy(x, σ))²

the sign of this discriminant determines whether the point is an extremum: if the value is less than 0, (x, y) is not a local extreme point, and if the value is greater than 0, (x, y) is a local extreme point;
step 3-2: constructing a scale space: starting from a 9 × 9 box filter and progressively enlarging it, where the 9 × 9 box filter is the filter template obtained by discretizing and cropping the second-order Gaussian derivative at σ = 1.2; the image itself is kept unchanged and only the size of the filter window is varied, yielding responses at different scales that form the scale space;
step 3-3: locating the feature points: each pixel processed by the Hessian matrix is compared with the 26 points in its 3 × 3 × 3 scale-space neighborhood, and if it is the maximum or minimum among those 26 points it is retained as a preliminary feature point;
step 3-4: determining the main direction of each feature point: Haar wavelet responses are accumulated in a circular neighborhood of the feature point, i.e. the sums of the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector are computed; the sector is then rotated in steps of 0.2 radian and the sums are recomputed, and the direction of the sector with the largest accumulated value is finally taken as the main direction of the feature point;
step 3-5: generating the feature descriptor: a square region around the feature point is divided into 4 × 4 sub-regions, and in each sub-region the Haar wavelet responses of 5 × 5 = 25 sample points are accumulated in the horizontal and vertical directions. For each sub-region, four values are recorded: the sum of the horizontal responses, the sum of the vertical responses, the sum of the absolute horizontal responses and the sum of the absolute vertical responses; these 4 values per sub-region yield a 4 × 4 × 4 = 64-dimensional vector that serves as the SURF descriptor;
step 3-6: matching the feature points: the Euclidean distance between the descriptors of two feature points is calculated; the smaller the Euclidean distance, the higher the similarity, and when it is smaller than a set threshold the match is considered successful.
Preferably, the step 4 uses the RANSAC algorithm to eliminate mismatched feature points, and the step 4 includes:
step 4-1: randomly selecting 4 pairs from the matched feature point pairs, and solving a transformation matrix M;
step 4-2: transforming each point of one image among the remaining feature point pairs through the matrix M to obtain its corresponding point in the image to be matched, and calculating the distance between the transformed point and the originally matched point; if the distance is smaller than a preset threshold, the feature point is a correct match, and the correct matching pair is stored;
step 4-3: as in step 4-1, randomly picking 4 further groups of feature point pairs from the remaining matched pairs, calculating the corresponding transformation matrix, and repeating step 4-2; after several repetitions, the correct matching points are finally obtained.
Preferably, in said step 5, according to the matched feature point pairs obtained in said step 4, for each pair of feature points (x_n, y_n) in the left image and (x'_n, y'_n) in the right image (n = 1, 2, 3, …, N, where N is the number of feature point pairs), the distance d_n between the two points when the two images are joined directly side by side is calculated as:

d_n = √((x'_n + l - x_n)² + (y'_n - y_n)²)

where l is the truncated image length.
Preferably, the step 6 extracts the maximum and minimum distances of the feature point pairs according to the result of the step 5, selects a suitable threshold from them to screen the pairs, and thereby rejects content in the surveillance video frames that does not belong to the photographed scene, such as the time watermark and the camera-number watermark; the remaining feature point pairs are those matched normally within the actual scene.
Preferably, said step 7 of calculating the homography matrix is: taking 4 pairs out of the M normally matched feature point pairs and calculating the corresponding transformation matrix; transforming each left-image point of the remaining feature point pairs through this matrix to obtain its corresponding point in the image to be matched, and calculating the distance between the transformed point and the originally matched point in the right image; if the distance is smaller than a preset threshold, the feature point is a correct match, and the number of correct matches obtained by this matrix is stored; the operation is repeated on the remaining M - 4 pairs of feature points, the number of correct matches corresponding to each transformation matrix is counted, and the matrix with the largest number of correct matches is the homography matrix relating the two images.
Preferably, the step 8 transforms the two images into the same coordinate system according to the homography matrix, splices the two images into one image, and smoothes the image to eliminate the seam appearing on the overlapping area during image synthesis.
Compared with the prior art, the invention has the following beneficial effects:
after feature point matching is completed, a feature point distance restriction is added; this effectively solves the problem of how to handle content outside the monitored scene during stitching, improves the accuracy of video image stitching, and ensures a better stitching result in certain application scenarios.
Drawings
FIG. 1 is a flow chart of the present invention;
Detailed Description
In order to make the invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings:
Fig. 1 shows a schematic flow chart of the improved SURF-based image stitching method of the present invention. With reference to fig. 1, the method comprises the following steps:
Step 1: capture two frames from the two surveillance videos at the same moment.
Step 2: preprocess the two frames acquired in step 1, including distortion correction of the images.
Step 3: extract and match the feature points of the images processed in step 2, specifically comprising the following steps:
Step 3-1: constructing a Hessian matrix to generate all interest points: for a point with pixel intensity l(x, y) in the image, at scale σ the Hessian matrix is computed as:

H(x, σ) = | l_xx(x, σ)  l_xy(x, σ) |
          | l_xy(x, σ)  l_yy(x, σ) |

where l_xx(x, σ) is the result of convolving the second-order Gaussian derivative ∂²g(σ)/∂x² with the image l(x, y) at the point, and l_xy(x, σ) and l_yy(x, σ) are defined analogously.
The Hessian matrix determinant of each pixel is approximated as:
det(H(x, σ)) = l_xx(x, σ)l_yy(x, σ) - (0.9 l_xy(x, σ))²
The sign of this discriminant decides the extremum test: if the value is less than 0, (x, y) is not a local extreme point, and if the value is greater than 0, (x, y) is a local extreme point.
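The determinant test above can be illustrated numerically. The following sketch is not the patent's implementation: real SURF uses σ-dependent box filters, and finite differences are substituted here purely for brevity. It evaluates the approximated determinant over a synthetic Gaussian blob and checks its sign at the blob centre.

```python
import numpy as np

def hessian_det_response(img, weight=0.9):
    """Per-pixel det(H) ~= l_xx * l_yy - (weight * l_xy)**2.
    Finite differences stand in for the box-filtered Gaussian
    second derivatives used by real SURF."""
    gy, gx = np.gradient(img)    # first derivatives along y and x
    gxy, gxx = np.gradient(gx)   # d(gx)/dy and d(gx)/dx
    gyy, _ = np.gradient(gy)     # d(gy)/dy
    return gxx * gyy - (weight * gxy) ** 2

# Synthetic blob: a clear bright spot should give a positive response
ys, xs = np.mgrid[0:31, 0:31]
blob = np.exp(-((xs - 15.0) ** 2 + (ys - 15.0) ** 2) / (2 * 3.0 ** 2))
det = hessian_det_response(blob)
```

At the blob centre both l_xx and l_yy are negative and l_xy vanishes, so the determinant is positive, consistent with the extremum test stated above.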
Step 3-2: constructing a scale space: starting from a 9 × 9 box filter and progressively enlarging it, where the 9 × 9 box filter is the filter template obtained by discretizing and cropping the second-order Gaussian derivative at σ = 1.2. The image itself is kept unchanged and only the size of the filter window is varied, yielding responses at different scales that form the scale space.
Step 3-3: locating the feature points: each pixel processed by the Hessian matrix is compared with the 26 points in its 3 × 3 × 3 scale-space neighborhood; if it is the maximum or minimum among those 26 points, it is retained as a preliminary feature point.
Step 3-4: determining the main direction of each feature point: in the circular neighborhood of the feature point, the sums of the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector are computed; the sector is then rotated in steps of 0.2 radian and the sums are recomputed, and the direction of the sector with the largest accumulated value is finally taken as the main direction of the feature point.
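The sector sweep of step 3-4 can be sketched as follows. This is an illustrative stand-in, not the patent's code: sample angles and Haar responses are assumed to be given, a 60-degree window slides in 0.2-radian steps, and the window with the largest summed response defines the main direction.

```python
import numpy as np

def dominant_direction(angles, dx, dy, window=np.pi / 3, step=0.2):
    """Slide a 60-degree sector over the sample angles in 0.2 rad steps;
    return the direction of the sector whose summed (dx, dy) Haar
    response has the largest magnitude."""
    best_mag, best_dir = -1.0, 0.0
    for start in np.arange(0.0, 2 * np.pi, step):
        in_sector = (angles - start) % (2 * np.pi) < window
        sx, sy = dx[in_sector].sum(), dy[in_sector].sum()
        mag = sx * sx + sy * sy
        if mag > best_mag:
            best_mag, best_dir = mag, np.arctan2(sy, sx)
    return best_dir

# Three responses clustered around 1.0 rad dominate a lone outlier at 4.0
angles = np.array([0.9, 1.0, 1.1, 4.0])
direction = dominant_direction(angles, np.cos(angles), np.sin(angles))
```

With these illustrative samples, the sector covering the cluster around 1.0 radian wins, so the recovered main direction is close to 1.0.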
Step 3-5: generating the feature descriptor: a square region around the feature point is divided into 4 × 4 sub-regions, and in each sub-region the Haar wavelet responses of 5 × 5 = 25 sample points are accumulated in the horizontal and vertical directions. For each sub-region, four values are recorded: the sum of the horizontal responses, the sum of the vertical responses, the sum of the absolute horizontal responses and the sum of the absolute vertical responses. These 4 values per sub-region yield a 4 × 4 × 4 = 64-dimensional vector that serves as the SURF descriptor.
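The 64-dimensional descriptor layout of step 3-5 can be sketched as below. This is illustrative only: real SURF additionally weights the samples with a Gaussian and rotates them to the main direction, both omitted here. A 20 × 20 grid of Haar responses is split into 4 × 4 sub-regions of 5 × 5 samples, each contributing four sums.

```python
import numpy as np

def surf_descriptor(dx, dy):
    """dx, dy: 20x20 arrays of horizontal/vertical Haar responses
    around one feature point. Returns the normalized 64-dim vector of
    (sum dx, sum dy, sum |dx|, sum |dy|) per 5x5 sub-region."""
    assert dx.shape == dy.shape == (20, 20)
    feats = []
    for i in range(4):
        for j in range(4):
            bx = dx[5 * i:5 * i + 5, 5 * j:5 * j + 5]
            by = dy[5 * i:5 * i + 5, 5 * j:5 * j + 5]
            feats += [bx.sum(), by.sum(), np.abs(bx).sum(), np.abs(by).sum()]
    v = np.asarray(feats)
    return v / (np.linalg.norm(v) + 1e-12)

rng = np.random.default_rng(0)
desc = surf_descriptor(rng.normal(size=(20, 20)), rng.normal(size=(20, 20)))
```

Normalizing the vector, as done here, is the usual way to make the descriptor robust to contrast changes.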
Step 3-6: matching the feature points: the Euclidean distance between the descriptors of two feature points is calculated. The smaller the Euclidean distance, the higher the similarity, and when it is smaller than a set threshold the match is considered successful.
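Step 3-6 reduces to a nearest-neighbour search under a distance threshold. A minimal sketch (the threshold value below is illustrative, not taken from the patent):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, thresh=0.5):
    """For each descriptor in desc_a, find its nearest neighbour in
    desc_b by Euclidean distance; keep the pair only if the distance
    is below the threshold. Returns a list of (index_a, index_b)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < thresh:
            matches.append((i, j))
    return matches

a = np.array([[0.0, 0.0], [1.0, 1.0]])
b = np.array([[1.0, 1.05], [5.0, 5.0], [0.02, 0.0]])
pairs = match_descriptors(a, b)
```

Here the two descriptors in `a` are matched to their close counterparts in `b`, while the distant descriptor `[5, 5]` attracts no match.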
Step 4: eliminate mismatched feature points using the RANSAC algorithm, with the following specific steps:
step 4-1: randomly selecting 4 pairs from the matched feature point pairs, and solving a transformation matrix M;
step 4-2: and (3) obtaining each point in one image in the rest characteristic point pairs through a matrix M, then obtaining the corresponding point in the image to be matched after the point is transformed, calculating the distance between the point and the originally matched point in the image to be matched, if the distance is smaller than a preset threshold value, the characteristic point is a correct matching point, and storing the correct matching point pair.
Step 4-3: as in step 4-1, randomly pick 4 further groups of feature point pairs from the remaining matched pairs, calculate the corresponding transformation matrix, and repeat step 4-2. After several repetitions, the correct matching points are finally obtained.
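The RANSAC loop of step 4 can be sketched in pure NumPy. This is an illustrative reimplementation, not the patent's code; the iteration count and inlier threshold are assumed values.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: least-squares homography from >= 4 pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def project(h, pts):
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ h.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=200, thresh=3.0, seed=0):
    """Repeatedly fit to 4 random pairs, count points whose reprojection
    error is below the threshold, then refit on the best inlier set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        h = fit_homography(src[idx], dst[idx])
        inliers = np.linalg.norm(project(h, src) - dst, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return fit_homography(src[best], dst[best]), best

# Known transform plus 5 gross outliers; RANSAC should reject the outliers
h_true = np.array([[1.0, 0.1, 5.0], [0.0, 1.0, -3.0], [0.0, 0.0, 1.0]])
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(30, 2))
dst = project(h_true, src)
dst[:5] += 50.0  # corrupt 5 pairs to simulate mismatches
h_est, inliers = ransac_homography(src, dst)
```

With exact inlier correspondences, the recovered matrix agrees with the true one up to numerical precision, and the 5 corrupted pairs are excluded.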
Step 5: according to the matched feature point pairs obtained in step 4, for each pair of feature points (x_n, y_n) in the left image and (x'_n, y'_n) in the right image (n = 1, 2, 3, …, N, where N is the number of feature point pairs), calculate the distance d_n between the two points when the two images are joined directly side by side:

d_n = √((x'_n + l - x_n)² + (y'_n - y_n)²)

where l is the image length.
Step 6: according to the result of step 5, extract the maximum and minimum distances of the feature point pairs, select a suitable threshold from them to screen the pairs, and reject content in the surveillance video frames that does not belong to the photographed scene, such as the time watermark and the camera-number watermark. The remaining feature point pairs are those matched normally within the actual scene.
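Steps 5 and 6 can be sketched together. The sketch is illustrative: the threshold rule used below, a fixed fraction between the minimum and maximum distance, is an assumption, since the patent only requires "a proper threshold". The pair distance follows d_n = √((x'_n + l - x_n)² + (y'_n - y_n)²), i.e. the right image is offset horizontally by the image length l. Note that a static watermark matches itself at the same pixel in both frames, giving a distance of roughly l, far larger than that of genuine overlap points.

```python
import numpy as np

def pair_distances(left_pts, right_pts, length):
    """Distance of each matched pair when the two frames are joined
    side by side: the right image is shifted by `length` pixels."""
    shifted = right_pts + np.array([length, 0.0])
    return np.linalg.norm(shifted - left_pts, axis=1)

def keep_by_distance(left_pts, right_pts, length, ratio=0.7):
    """Keep pairs whose distance is below min + ratio*(max - min);
    `ratio` is an assumed choice of 'proper threshold'."""
    d = pair_distances(left_pts, right_pts, length)
    return d <= d.min() + ratio * (d.max() - d.min())

# Scene points sit in the overlap (small distance); the last pair is a
# static time watermark matching itself, with distance ~= l = 640.
left = np.array([[600.0, 100], [610, 200], [590, 50], [320, 20]])
right = np.array([[40.0, 100], [52, 201], [28, 49], [320, 20]])
keep = keep_by_distance(left, right, length=640)
```

The three genuine overlap pairs survive the screening while the watermark pair, whose distance equals the full image length, is rejected.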
Step 7: take 4 pairs out of the M normally matched feature point pairs and calculate the corresponding transformation matrix; transform each left-image point of the remaining feature point pairs through this matrix to obtain its corresponding point in the image to be matched, and calculate the distance between the transformed point and the originally matched point in the right image; if the distance is smaller than a preset threshold, the feature point is a correct match, and the number of correct matches obtained by this matrix is stored. Repeat the operation on the remaining M - 4 pairs of feature points and count the number of correct matches corresponding to each transformation matrix; the matrix with the largest number of correct matches is the homography matrix relating the two images.
Step 8: transform the two frames into the same coordinate system according to the homography matrix obtained in step 7, stitch them into one image, and smooth the result to eliminate the seam that appears in the overlap region during image synthesis.
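The seam smoothing of step 8 is commonly done by feathering: blend weights are ramped linearly across the overlap region. A minimal sketch under simplifying assumptions (both frames already aligned and of equal size, with a known overlap width; the warping itself is omitted):

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Join two aligned grayscale frames of equal size, fusing the
    `overlap` seam columns with a linear weight ramp."""
    h, w = left.shape
    out = np.zeros((h, 2 * w - overlap))
    out[:, :w - overlap] = left[:, :w - overlap]      # left-only part
    out[:, w:] = right[:, overlap:]                   # right-only part
    alpha = np.linspace(1.0, 0.0, overlap)[None, :]   # 1 -> 0 across seam
    out[:, w - overlap:w] = (alpha * left[:, w - overlap:]
                             + (1 - alpha) * right[:, :overlap])
    return out

pano = feather_blend(np.ones((4, 6)), np.full((4, 6), 3.0), overlap=2)
```

With constant test frames of intensity 1 and 3, the output transitions monotonically across the seam instead of jumping, which is exactly the smoothing effect described above.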
While the invention has been described with reference to a preferred embodiment, those skilled in the art will understand that various changes in form and detail may be made without departing from the spirit and scope of the invention. All changes, modifications and equivalent arrangements made using the techniques disclosed above fall within the scope of the technical solution of the present invention.
Claims (8)
1. An improved image stitching method based on SURF is characterized by comprising the following steps:
step 1: capturing pictures of videos shot by the two cameras;
step 2: preprocessing the two pictures;
step 3: extracting and matching the feature points of the images;
step 4: rejecting mismatched feature point pairs;
step 5: calculating the distance of each matched feature point pair;
step 6: limiting the distances of the feature point pairs according to the maximum and minimum distances, and rejecting pairs whose distance is too large;
step 7: calculating a homography matrix from the remaining feature point pairs;
step 8: transforming the two images into the same coordinate system through the homography matrix, and stitching and fusing the two images.
2. The SURF-based improved image stitching method as claimed in claim 1, wherein the step 2 includes distortion correction of the image.
3. The SURF-based improved image stitching method as claimed in claim 1, wherein the step 3 uses the SURF algorithm to extract and match the feature points, and the step 3 comprises:
step 3-1: constructing a Hessian matrix to generate all interest points:
for a point with pixel intensity l(x, y) in the image, at scale σ the Hessian matrix is computed as:

H(x, σ) = | l_xx(x, σ)  l_xy(x, σ) |
          | l_xy(x, σ)  l_yy(x, σ) |

where l_xx(x, σ) is the result of convolving the second-order Gaussian derivative ∂²g(σ)/∂x² with the image l(x, y) at the point, and l_xy(x, σ) and l_yy(x, σ) are defined analogously;

an approximation of the Hessian matrix determinant is obtained for each pixel:

det(H(x, σ)) = l_xx(x, σ)l_yy(x, σ) - (0.9 l_xy(x, σ))²;

the sign of this discriminant determines whether the point is an extremum: if the value is less than 0, (x, y) is not a local extreme point, and if the value is greater than 0, (x, y) is a local extreme point;
step 3-2: constructing a scale space: starting from a 9 × 9 box filter and progressively enlarging it, where the 9 × 9 box filter is the filter template obtained by discretizing and cropping the second-order Gaussian derivative at σ = 1.2; the image itself is kept unchanged and only the size of the filter window is varied, yielding responses at different scales that form the scale space;
step 3-3: locating the feature points: each pixel processed by the Hessian matrix is compared with the 26 points in its 3 × 3 × 3 scale-space neighborhood, and if it is the maximum or minimum among those 26 points it is retained as a preliminary feature point;
step 3-4: determining the main direction of each feature point: Haar wavelet responses are accumulated in a circular neighborhood of the feature point, i.e. the sums of the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector are computed; the sector is then rotated in steps of 0.2 radian and the sums are recomputed, and the direction of the sector with the largest accumulated value is finally taken as the main direction of the feature point;
step 3-5: generating the feature descriptor: a square region around the feature point is divided into 4 × 4 sub-regions, and in each sub-region the Haar wavelet responses of 5 × 5 = 25 sample points are accumulated in the horizontal and vertical directions; for each sub-region, four values are recorded: the sum of the horizontal responses, the sum of the vertical responses, the sum of the absolute horizontal responses and the sum of the absolute vertical responses; these 4 values per sub-region yield a 4 × 4 × 4 = 64-dimensional vector that serves as the SURF descriptor;
step 3-6: matching the feature points: the Euclidean distance between the descriptors of two feature points is calculated; the smaller the Euclidean distance, the higher the similarity, and when it is smaller than a set threshold the match is considered successful.
4. The SURF-based improved image stitching method as claimed in claim 1, wherein the step 4 uses the RANSAC algorithm to eliminate mismatched feature points, and the step 4 comprises:
step 4-1: randomly selecting 4 pairs from the matched feature point pairs, and solving a transformation matrix M;
step 4-2: transforming each point of one image among the remaining feature point pairs through the matrix M to obtain its corresponding point in the image to be matched, and calculating the distance between the transformed point and the originally matched point; if the distance is smaller than a preset threshold, the feature point is a correct match, and the correct matching pair is stored;
step 4-3: as in step 4-1, randomly picking 4 further groups of feature point pairs from the remaining matched pairs, calculating the corresponding transformation matrix, and repeating step 4-2; after several repetitions, the correct matching points are finally obtained.
5. The SURF-based improved image stitching method according to claim 1, wherein in the step 5, according to the matched feature point pairs obtained in the step 4, for each pair of feature points (x_n, y_n) in the left image and (x'_n, y'_n) in the right image (n = 1, 2, 3, …, N, where N is the number of feature point pairs), the distance d_n between the two points when the two images are joined directly side by side is calculated as:

d_n = √((x'_n + l - x_n)² + (y'_n - y_n)²)

where l is the truncated image length.
6. The SURF-based improved image stitching method according to claim 1, wherein the step 6 extracts the maximum and minimum distances of the feature point pairs according to the result of the step 5, selects a suitable threshold from them to screen the pairs, and rejects content in the surveillance video frames that does not belong to the photographed scene, such as the time watermark and the camera-number watermark; the remaining feature point pairs are those matched normally within the actual scene.
7. The SURF-based improved image stitching method according to claim 1, wherein the step 7 of calculating the homography matrix is: taking 4 pairs out of the M normally matched feature point pairs and calculating the corresponding transformation matrix; transforming each left-image point of the remaining feature point pairs through this matrix to obtain its corresponding point in the image to be matched, and calculating the distance between the transformed point and the originally matched point in the right image; if the distance is smaller than a preset threshold, the feature point is a correct match, and the number of correct matches obtained by this matrix is stored; the operation is repeated on the remaining M - 4 pairs of feature points, the number of correct matches corresponding to each transformation matrix is counted, and the matrix with the largest number of correct matches is the homography matrix relating the two images.
8. The SURF-based improved image stitching method as claimed in claim 1, wherein the step 8 transforms the two frames into the same coordinate system according to the homography matrix, stitches them into one image, and smooths the result to eliminate the seam appearing in the overlap region when the images are combined.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202010040996.4A | 2020-01-15 | 2020-01-15 | Improved image splicing method based on SURF |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN111260555A | 2020-06-09 |
Family
- ID=70947021

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202010040996.4A (pending) | Improved image splicing method based on SURF | 2020-01-15 | 2020-01-15 |

Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN111260555A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114708698A (en) * | 2022-03-24 | 2022-07-05 | 重庆巡感科技有限公司 | Intelligent sensing and early warning system for foreign matters in tunnel |
CN115358930A (en) * | 2022-10-19 | 2022-11-18 | 成都菁蓉联创科技有限公司 | Real-time image splicing method and target detection method based on multiple unmanned aerial vehicles |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102006425A (en) * | 2010-12-13 | 2011-04-06 | Research Institute of Highway, Ministry of Transport | Method for splicing video in real time based on multiple cameras
CN102129704A (en) * | 2011-02-23 | 2011-07-20 | Shandong University | SURF operator-based microscope image splicing method
KR20120021666A (en) * | 2010-08-12 | 2012-03-09 | Kumoh National Institute of Technology Industry-Academic Cooperation Foundation | Panorama image generating method
CN103294024A (en) * | 2013-04-09 | 2013-09-11 | Ningbo Dooya Mechanic & Electronic Technology Co., Ltd. | Intelligent home system control method
CN104036480A (en) * | 2014-06-20 | 2014-09-10 | Tianjin University | Fast mismatched-point elimination method based on the SURF algorithm
CN104933434A (en) * | 2015-06-16 | 2015-09-23 | Tongji University | Image matching method combining LBP (local binary pattern) and SURF feature extraction
CN105894443A (en) * | 2016-03-31 | 2016-08-24 | Hohai University | Method for splicing videos in real time based on the SURF (Speeded Up Robust Features) algorithm
CN106127690A (en) * | 2016-07-06 | 2016-11-16 | *** | Rapid stitching method for unmanned aerial vehicle remote sensing images
CN106339981A (en) * | 2016-08-25 | 2017-01-18 | Anhui Xiechuang IoT Technology Co., Ltd. | Panorama stitching method
CN106940876A (en) * | 2017-02-21 | 2017-07-11 | East China Normal University | Fast unmanned aerial vehicle image mosaic algorithm based on SURF
CN105608671B (en) * | 2015-12-30 | 2018-09-07 | Harbin Institute of Technology | Image stitching method based on the SURF algorithm
CN109118544A (en) * | 2018-07-17 | 2019-01-01 | Nanjing University of Science and Technology | Synthetic aperture imaging method based on perspective transform
CN109919832A (en) * | 2019-02-27 | 2019-06-21 | Chang'an University | Traffic image stitching method for driverless driving
2020-01-15: Application CN202010040996.4A filed in China; publication CN111260555A; status: Pending
Non-Patent Citations (5)
Title |
---|
FEI LEI ET AL.: "A fast method for image mosaic based on SURF", 2014 9th IEEE Conference on Industrial Electronics and Applications * |
CAO JUNYU ET AL.: "Improvement of the registration algorithm in SURF-based image stitching", Journal of Yunnan University * |
ZENG LUAN ET AL.: "Reconnaissance Image Acquisition and Fusion Technology", 31 May 2015 * |
ZHAO XIAOCHUAN (ED.): "MATLAB Image Processing: Program Implementation and Modular Simulation", Beihang University Press, 30 November 2018 * |
HAN JIUQIANG ET AL.: "Digital Image Processing Based on the XAVIS Configuration Software", 30 September 2018 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114708698A (en) * | 2022-03-24 | 2022-07-05 | Chongqing Xungan Technology Co., Ltd. | Intelligent sensing and early warning system for foreign matters in tunnel
CN114708698B (en) * | 2022-03-24 | 2023-08-08 | Chongqing Xungan Technology Co., Ltd. | Intelligent sensing and early warning system for foreign matters in tunnel
CN115358930A (en) * | 2022-10-19 | 2022-11-18 | Chengdu Jingrong Lianchuang Technology Co., Ltd. | Real-time image splicing method and target detection method based on multiple unmanned aerial vehicles
CN115358930B (en) * | 2022-10-19 | 2023-02-03 | Chengdu Jingrong Lianchuang Technology Co., Ltd. | Real-time image splicing method and target detection method based on multiple unmanned aerial vehicles
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111784576B (en) | Image stitching method based on improved ORB feature algorithm | |
CN109685913B (en) | Augmented reality implementation method based on computer vision positioning | |
CN108093221B (en) | Suture line-based real-time video splicing method | |
CN104392416B (en) | Video stitching method for sports scene | |
TWI639136B (en) | Real-time video stitching method | |
CN110992263B (en) | Image stitching method and system | |
CN111160291B (en) | Human eye detection method based on depth information and CNN | |
CN110020995B (en) | Image splicing method for complex images | |
CN110717936B (en) | Image stitching method based on camera attitude estimation | |
CN111260555A (en) | Improved image splicing method based on SURF | |
CN110120013A (en) | Point cloud method and device | |
CN112734914A (en) | Image stereo reconstruction method and device for augmented reality vision | |
CN113469216B (en) | Retail terminal poster identification and integrity judgment method, system and storage medium | |
CN106997366B (en) | Database construction method, augmented reality fusion tracking method and terminal equipment | |
CN107330856B (en) | Panoramic imaging method based on projective transformation and thin plate spline | |
CN110533652B (en) | Image stitching evaluation method based on rotation invariant LBP-SURF feature similarity | |
CN111147815A (en) | Video monitoring system | |
KR20160000533A (en) | The method of multi detection and tracking with local feature point for providing information of an object in augmented reality | |
CN111709434B (en) | Robust multi-scale template matching method based on nearest neighbor feature point matching | |
KR101718309B1 (en) | The method of auto stitching and panoramic image genertation using color histogram | |
CN110910418B (en) | Target tracking algorithm based on rotation invariance image feature descriptor | |
CN115731591A (en) | Method, device and equipment for detecting makeup progress and storage medium | |
Lau et al. | Atdetect: Face detection and keypoint extraction at range and altitude | |
Zhuo et al. | Stereo matching approach using zooming images | |
CN113723465B (en) | Improved feature extraction method and image stitching method based on same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200609 | |