CN107945111A - Image stitching method based on SURF feature extraction combined with CS-LBP descriptors - Google Patents


Info

Publication number
CN107945111A
Authority
CN
China
Prior art keywords
image
descriptors
lbp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711143013.4A
Other languages
Chinese (zh)
Other versions
CN107945111B (en)
Inventor
程德强
李岩
陈亮亮
姚洁
高蕊
赵广源
刘海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology CUMT
Priority to CN201711143013.4A
Publication of CN107945111A
Application granted
Publication of CN107945111B
Active legal status
Anticipated expiration legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an image stitching method based on SURF feature extraction combined with CS-LBP descriptors, comprising the following steps: acquiring the images to be stitched; extracting feature point information from the images to be stitched using SURF; computing the Haar descriptor and the CS-LBP descriptor of each of the above feature points; determining the feature point matching pairs of the images to be stitched using the Haar descriptors and CS-LBP descriptors obtained above; and fusing the matched images with a smooth fade-in fade-out method to obtain the stitched image. CS-LBP uses a center-symmetric comparison, which is simpler and computationally more efficient than the traditional LBP. The combination of SURF and CS-LBP preserves the speed and real-time capability of SURF while achieving good stitching results on images with large-area rotation or complex illumination.

Description

Image stitching method based on SURF feature extraction combined with CS-LBP descriptors
Technical field
The present invention relates to the technical field of image processing, and in particular to an image stitching method based on SURF feature extraction combined with CS-LBP descriptors.
Background art
Image stitching is an important topic in digital image processing: it combines two or more pictures into a single picture with a wider field of view. Image stitching can compensate for the limited wide-angle capability of a camera, and by deploying several cameras it can achieve ultra-wide-angle shooting and panoramic photographs.
Image stitching technology mainly comprises two key links: image registration and image fusion. Image registration is the core of image stitching; its goal is to find the motion between two or more overlapping images to be aligned, and it directly determines the success rate and running speed of the stitching algorithm. Image fusion eliminates the intensity or color discontinuities between adjacent images caused by geometric correction, dynamic scenes or illumination changes, merging the two matched overlapping images into a single seamless image. Many papers on image stitching methods have been published at home and abroad. The algorithms are broadly divided into model-based methods, transform-domain methods, gray-level-correlation methods and feature-based methods; improving the efficiency of image stitching, reducing processing time and strengthening the adaptability of stitching systems have always been the focus of research.
Methodologically, image stitching can be divided into gray-level-correlation methods and feature-based methods.
Gray-level-correlation methods find the registration position of the images by taking the similarity of the gray levels, or of a color system, in the overlapping part of the two images as the criterion. Common algorithms include ratio matching, block matching and mesh fitting. Feature-based methods first extract a feature set from each image to be matched and then match using the correspondence of the features.
Feature-based methods exploit the salient features of the image. They require little computation, are fast, and have a certain robustness to image distortion, noise and occlusion, but their matching performance depends to a large extent on the quality of the feature extraction. Feature matching is further divided into corner-based and blob-based methods. Common corner-based methods include Harris and FAST; blob-based methods include SIFT and SURF.
Among the currently popular feature-based methods, SURF (Speeded Up Robust Features) performs outstandingly in feature point extraction. However, for image stitching under complex scenes such as rotation and uneven illumination, the number of correct matching points obtained directly affects the stitching result, the complexity of the method is high, and its running speed is slow.
Summary of the invention
In view of the above analysis, the present invention aims to provide an image stitching method based on SURF feature extraction combined with CS-LBP descriptors, so as to solve the problem that existing SURF methods match poorly or run slowly on images with large-area rotation.
The object of the present invention is mainly achieved through the following technical solutions:
An image stitching method based on SURF feature extraction combined with CS-LBP descriptors is proposed, comprising the following steps:
extracting the feature point information of the images to be stitched using SURF;
computing the Haar descriptor and the CS-LBP descriptor of each of the above feature points;
determining the feature point matching pairs of the images to be stitched using the Haar descriptors and CS-LBP descriptors obtained above;
fusing the images according to the above feature point matching pairs with a smooth fade-in fade-out method, to obtain the stitched image.
The present invention has the following beneficial effects:
The present invention uses the FAST-Hessian and integral-image ideas of SURF to build a Gaussian scale pyramid, apply non-maximum suppression and identify the orientation with Haar wavelets. The method is accurate, scale invariant, and can reliably extract a sufficient number of feature points. The CS-LBP (center-symmetric local binary pattern) descriptor matches rotated images well and at the same time speeds up the algorithm, computing both the CS-LBP and Haar wavelet descriptions from the image region centered on each feature point. CS-LBP uses a center-symmetric comparison, which is simpler and computationally more efficient than the traditional LBP; the combination of SURF and CS-LBP preserves the speed and real-time capability of SURF while stitching images with large-area rotation or complex illumination well. Fusing the images with a smooth fade-in fade-out method yields a more accurate stitched image with a better visual effect.
On the basis of the above scheme, the present invention also makes the following improvements:
Further, computing the CS-LBP descriptor of each feature point comprises:
selecting a feature point Pi(x, y, s, θ), where (x, y) is the position of the point in the original image, and s and θ are respectively its scale and principal direction;
taking, centered on the feature point Pi, the pixel values of N points on a circle of radius R, comparing the gray value of each sampled pixel on the circle with the gray value of the center pixel, and generating a 16-dimensional feature vector according to the center-symmetric coding rule of CS-LBP;
normalizing the feature vector obtained above to obtain the CS-LBP descriptor.
The beneficial effect of adopting the above further scheme is:
The most prominent advantage of CS-LBP is that the dimension of the descriptor is reduced to 16, a circular processing region replaces the 3×3 square region, and the description of flat texture regions is more robust. CS-LBP is highly discriminative while its computation is simple; its center-symmetric comparison is simpler and computationally more efficient than the traditional LBP.
Further, the center-symmetric coding rule of CS-LBP is:
CS-LBP(u, v) = Σ_{i=0}^{N/2−1} s(n_i − n_{i+N/2}) · 2^i,  with s(x) = 1 if x > T', and s(x) = 0 otherwise,
where N is the number of pixels evenly distributed on a circle of radius R, n_i and n_{i+N/2} are the gray values of a center-symmetric pixel pair, T' is a threshold with value range [0, 1], and (u, v) are the coordinates of the feature vector.
Further, extracting the feature point information of the images to be stitched using SURF comprises extracting scale information, position information and direction information.
The beneficial effect of adopting the above further scheme is:
SURF is a robust local feature detection method with strong robustness and a high computation speed, which guarantees the stability and real-time performance of the stitching.
Further, computing the Haar descriptor of each feature point comprises: choosing a square frame around the feature point, dividing the frame into 16 sub-regions, and counting the horizontal and vertical Haar wavelet features of 25 pixels in each sub-region, so that a 16 × 4 = 64-dimensional vector is obtained for each feature point as its Haar descriptor.
Further, determining the feature point matching pairs of the images to be stitched comprises: multiplying the nearest neighbor (NN) value of a Haar descriptor matching pair by the nearest neighbor distance ratio (NNDR) of the corresponding CS-LBP descriptor matching pair, and taking the product as a feature value; when the feature value is smaller than a set threshold T, i.e. NN × NNDR ≤ T, the pair is regarded as a feature point matching pair.
The beneficial effect of adopting the above further scheme is:
Determining the matching pairs between feature points by a matching strategy based on the nearest neighbor (NN) and the nearest neighbor distance ratio (NNDR) improves the matching precision.
Further, the Haar descriptors use the nearest neighbor (NN), obtaining matching pairs with the Euclidean distance:
NN = sqrt( Σ_{i=1}^{64} (d_R(i) − d_L(i))² ),
where d_R(i) and d_L(i) are respectively the i-th components of the Haar feature descriptor vectors in the two images to be matched;
the CS-LBP descriptors use the nearest neighbor distance ratio (NNDR), obtaining matching pairs with the Euclidean distance:
NNDR = sqrt( Σ_i (d_R'(i) − d_{L,1}(i))² ) / sqrt( Σ_i (d_R'(i) − d_{L,2}(i))² ),
where d_R'(i) is the i-th component of a CS-LBP feature descriptor vector in one image to be matched, and d_{L,1}(i) and d_{L,2}(i) are respectively the i-th components of the CS-LBP feature descriptors of its nearest and second-nearest neighbor feature points in the other image.
Further, the method also comprises screening the feature point matching pairs obtained above with the RANSAC method to reject mismatches, comprising:
randomly selecting L matching pairs from the feature point matching pairs obtained above, and calculating all parameters of the transformation matrix H to obtain all parameters of the matching image model M;
testing the other feature point matching pairs in the data set with the model M of known parameters obtained above; a pair that satisfies the model is regarded as a hypothetical inlier;
setting a threshold T''; if the number of inliers exceeds the threshold, saving the inliers and performing the next step, otherwise randomly selecting matching pairs and screening again;
repeating the above process n times, counting the inliers each time, and taking the set with the largest number of inliers as the preliminary set of accurate matching pairs.
The beneficial effect of adopting the above further scheme is:
Rejecting mismatches from the matching pairs with RANSAC greatly improves the matching precision, so that a better stitching result can be obtained.
Further, the smooth fade-in fade-out method fuses the matched images with the following fusion formula:
I(x, y) = d·I1(x, y) + (1 − d)·I2(x, y)
where I1(x, y) and I2(x, y) are respectively the two images to be fused, d and 1 − d are the pixel weights of the corresponding points in the overlapping region of the two images, x1 and x2 are the minimum and maximum values of the x-axis of the overlapping region, and d is a gradient coefficient; as d changes slowly from 1 to 0, the image transitions smoothly from I1(x, y) to I2(x, y), which is the stitched image.
Further, the method also comprises pre-processing the images to be stitched.
The beneficial effect of adopting the above further scheme is:
Pre-processing, for example filtering and sharpening enhancement of the images, prepares them for registration and improves the speed and precision of the image registration.
In the present invention, the above technical solutions can also be combined with one another to realize more preferred combined solutions. Other features and advantages of the invention will be set forth in the following description; some advantages will become apparent from the description or be understood by implementing the invention. The objects and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the written description, the claims and the accompanying drawings.
Brief description of the drawings
The accompanying drawings are only intended to show specific embodiments and are not to be considered limiting of the present invention; throughout the drawings, the same reference symbols denote the same components.
Fig. 1 shows the flow chart of the image stitching method based on SURF feature extraction combined with CS-LBP descriptors in an embodiment of the present invention;
Fig. 2 shows a schematic diagram of computing the sum of the gray values in a rectangle with the integral image;
Fig. 3 shows a schematic diagram of approximating LoG with box filters;
Fig. 4 shows a schematic diagram of the region where non-maximum suppression is performed;
Fig. 5 shows a schematic diagram of the circular neighborhood centered on a feature point;
Fig. 6 shows a schematic diagram of finding the principal direction of a feature point in the gradient coordinate system;
Fig. 7 shows a schematic diagram of the LBP and CS-LBP features of 8 neighborhood pixels.
Detailed description of the embodiments
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings, which form part of the application and, together with the embodiments, serve to explain the principles of the invention; they are not intended to limit the scope of the invention.
A specific embodiment of the present invention discloses an image stitching method based on SURF feature extraction combined with CS-LBP descriptors. As shown in Fig. 1, it specifically comprises the following steps:
Step S1, pre-processing the acquired images to be stitched;
The acquired images are pre-processed to prepare for image registration; pre-processing the images improves the subsequent speed and precision of the registration. Owing to the influence of the image sensor and the shooting environment, the images to be stitched may suffer from problems such as degradation, gray-scale shift and blurring; the pre-processing may include enhancement operations such as filtering and sharpening. Preferably, two images are chosen as the images to be stitched and pre-processed.
Step S2, extracting the feature point information using SURF;
SURF is a robust local feature detection method with strong robustness and a high computation speed. Its outstanding performance stems on the one hand from appropriate simplification and approximation under the premise of correctness, and on the other hand from the repeated use of the integral image in the method. SURF is used to extract the scale information, position information and principal direction information of the feature points.
Specifically, the SURF method improves the computation speed by using the integral image. The integral image I_Σ(x, y) has the same size as the original image I(x, y), and its value at a point (x, y) equals the sum of the gray values of all pixels of the original image whose abscissa is smaller than or equal to x and whose ordinate is smaller than or equal to y, that is, of the rectangular region from the upper-left corner of the original image to the point (x, y). With the integral image, the sum of the pixel gray values in any rectangle of the original image can be computed. As shown in Fig. 2, given any four points A, B, C, D in the image I(x, y), the sum of the pixel gray values of the region W they enclose is:
∑I(W) = I_Σ(D) + I_Σ(A) − I_Σ(B) − I_Σ(C)
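The integral-image bookkeeping described above can be sketched in plain Python (a minimal illustration under the convention that A is the corner just above and to the left of the rectangle; function names are my own, not from the patent):

```python
def integral_image(img):
    """Summed-area table: S[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    S = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            S[y][x] = row_sum + (S[y - 1][x] if y > 0 else 0)
    return S

def region_sum(S, x0, y0, x1, y1):
    """Sum over the inclusive rectangle (x0,y0)-(x1,y1) using
    I(D) + I(A) - I(B) - I(C): four lookups regardless of size."""
    D = S[y1][x1]
    A = S[y0 - 1][x0 - 1] if x0 > 0 and y0 > 0 else 0
    B = S[y0 - 1][x1] if y0 > 0 else 0
    C = S[y1][x0 - 1] if x0 > 0 else 0
    return D + A - B - C

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
S = integral_image(img)
total = region_sum(S, 1, 1, 2, 2)  # 5 + 6 + 8 + 9
```

This constant-time rectangle sum is what makes the box-filter responses of the following paragraphs cheap to evaluate at every scale.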
LoG (Laplacian of Gaussian) is the best-known way of guaranteeing scale invariance; combining the Hessian with LoG gives the Hessian-Laplace method, which guarantees the scale invariance of the detected feature points.
Specifically, given a point X = (x, y) in the image I, the Hessian matrix H(X, σ) at scale σ is defined as:
H(X, σ) = | Lxx(X, σ)  Lxy(X, σ) |
          | Lxy(X, σ)  Lyy(X, σ) |
where Lxx(X, σ) is the convolution of the Gaussian second-order derivative ∂²g(σ)/∂x² with the image I at the point X = (x, y), g(σ) is the Gaussian function with scale factor σ, and Lxy and Lyy are defined analogously.
As shown in Fig. 3, the first row of images shows the Gaussian second-order differential operators in the x, y and xy directions, discretized and cropped to 9 × 9 grids at σ = 1.2, i.e. the Lxx template, Lyy template and Lxy template. Approximating LoG with box filters, the weighted 9 × 9 box filters, the Dxx template, Dyy template and Dxy template, can replace the above differential operators, i.e. the second row of images in Fig. 3.
The gray parts of a box filter have weight 0, so they do not take part in the computation and only serve to pad the template to size. The Dxx and Dyy templates each have two white regions and one black region, so their box filters have three raised regions each, while the Dxy template has two white regions and two black regions, so its box filter has four raised regions. The general formula for the response obtained by filtering the image with a box filter is:
response = Σ_{n=1}^{N} w_n · ∑I(S_n)
where N is the total number of raised regions, S_n is the n-th raised region, whose gray-value sum ∑I(S_n) is obtained from the four corner points A, B, C, D of the region with the integral image, and w_n is the weight of the n-th region. When the box filters (Dxx, Dyy, Dxy) approximately replace the Gaussian second-order differential operators (Lxx, Lyy, Lxy), the determinant of the Hessian matrix needs a certain weight, i.e.:
det(H_approx) = Dxx·Dyy − (w·Dxy)²
where w is a weight that balances the deviation caused by the approximation; w generally takes the value 0.9. Similarly, the trace of the Hessian matrix approximated with the box filters is:
tr(H_approx) = Dxx + Dyy
To achieve scale invariance, the original image is iteratively convolved with Gaussian kernels and repeatedly down-sampled to reduce its size; from the bottom to the top of the pyramid the scale of the images increases gradually while the image size decreases, establishing the scale image pyramid. In the three-dimensional (x, y, s) scale space, non-maximum suppression is applied to each pixel processed with the Hessian matrix within its local region, as shown in Fig. 4: only a point whose response value is larger than those of all 26 neighboring points is chosen as a feature point, thereby locating the position coordinates X = (x, y, s) of the feature point, where x, y are the horizontal and vertical coordinates of the feature point and s is the scale at which it lies.
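The 26-neighbor test can be sketched as follows (a simplified illustration on a tiny stack of response layers; boundary layers and pixels are skipped, and the response values are made-up numbers rather than real Hessian determinants):

```python
def nms_3d(responses, threshold=0.0):
    """Keep (layer, y, x) whose response strictly exceeds all 26
    neighbors in the 3x3x3 cube of the (x, y, s) scale space."""
    peaks = []
    L, H, W = len(responses), len(responses[0]), len(responses[0][0])
    for s in range(1, L - 1):
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                v = responses[s][y][x]
                if v <= threshold:
                    continue
                neighbors = [responses[s + ds][y + dy][x + dx]
                             for ds in (-1, 0, 1)
                             for dy in (-1, 0, 1)
                             for dx in (-1, 0, 1)
                             if (ds, dy, dx) != (0, 0, 0)]
                if all(v > n for n in neighbors):
                    peaks.append((s, y, x))
    return peaks

# Three 3x3 layers with a single peak at the middle of the middle layer.
stack = [[[0, 0, 0], [0, 1, 0], [0, 0, 0]],
         [[0, 1, 0], [1, 9, 1], [0, 1, 0]],
         [[0, 0, 0], [0, 1, 0], [0, 0, 0]]]
peaks = nms_3d(stack)
```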
On the basis of the position information of the above feature points, the Haar wavelet responses in the x and y directions are used to determine the principal direction of each feature point. Specifically, as shown in Fig. 5, a circular neighborhood of radius 6s (where s is the scale at which the feature point lies) is established centered on the feature point; we call it the 6s circular neighborhood. The circle is sampled at intervals of s, where s is the scale of the scale image at which the feature point lies, and for all sampled pixels in the 6s circular neighborhood the Haar wavelet responses in the x and y directions are computed; the size of the Haar wavelets is 4s. The Haar wavelet is one of the simplest filters, and with it the gradients in the x and y directions can be detected.
The Haar wavelet responses in the x and y directions also need to be weighted. The common weighting function is a Gaussian, whose variance is set to 2s. Finally a gradient coordinate system is established with the weighted Haar wavelet responses in the x and y directions as the x-axis and y-axis, and all sampling points in the 6s circular neighborhood are distributed in this coordinate system. As shown in Fig. 6, a fan-shaped sliding window with a 60-degree angle centered at the origin is designed in the gradient coordinate system, and the x-axis and y-axis coordinate values of all points in the sliding window are accumulated; the modulus and argument of the accumulated sum are computed.
The fan-shaped sliding window is then rotated until it has gone a full circle; comparing the moduli under all windows, the argument corresponding to the window with the maximum modulus is the principal direction angle of the feature point, i.e. the direction of the accumulated vector (Σ_w dx, Σ_w dy) for the window w that maximizes sqrt((Σ_w dx)² + (Σ_w dy)²).
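The sliding-window search can be sketched like this (a simplified version with synthetic Haar responses, no Gaussian weighting, and a 5-degree window step of my own choosing; the patent only fixes the 60-degree sector):

```python
import math

def dominant_orientation(samples, window=math.pi / 3, step=math.radians(5)):
    """samples: list of (dx, dy) Haar responses of the sampled points.
    Slide a 60-degree sector around the circle; return the argument of
    the accumulated vector in the sector with the largest modulus."""
    best_mod, best_angle = -1.0, 0.0
    a = 0.0
    while a < 2 * math.pi:
        sx = sy = 0.0
        for dx, dy in samples:
            ang = math.atan2(dy, dx) % (2 * math.pi)
            # keep the sample if its angle falls inside [a, a + window)
            if (ang - a) % (2 * math.pi) < window:
                sx += dx
                sy += dy
        mod = math.hypot(sx, sy)
        if mod > best_mod:
            best_mod, best_angle = mod, math.atan2(sy, sx)
        a += step
    return best_angle

# Gradients clustered around 45 degrees dominate one stray sample.
pts = [(1.0, 1.0), (0.9, 1.1), (1.1, 0.9), (-0.2, 0.1)]
theta = dominant_orientation(pts)
```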
Step S3, obtaining the Haar descriptor and the CS-LBP descriptor of each of the above feature points;
Obtaining the Haar descriptor of each feature point:
In SURF-based image matching, the neighborhood of each feature point is represented by a feature vector and characterized with a descriptor. The descriptor must be distinctive and at the same time robust to scaling, rotation, blur, brightness change and affine transformation.
A square frame with side length 20s (where s is the scale at which the feature point was detected) is chosen around the feature point. The frame is divided into 16 sub-regions, and the horizontal and vertical Haar wavelet features of 25 pixels are counted in each sub-region, namely Σdx, Σ|dx|, Σdy, Σ|dy|. Each sub-region thus yields 4 values, so a 16 × 4 = 64-dimensional vector is obtained for each feature point; this is the Haar descriptor of the feature point.
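The 4 × 4 sub-region bookkeeping can be sketched as follows (a toy version operating on precomputed 20 × 20 grids of (dx, dy) responses; it illustrates only the Σdx, Σ|dx|, Σdy, Σ|dy| accumulation, not SURF's Gaussian weighting or rotation of the frame to the principal direction):

```python
def haar_descriptor(dx, dy):
    """dx, dy: 20x20 grids of Haar responses around a feature point.
    Split into 4x4 sub-regions of 5x5 = 25 pixels; for each, collect
    (sum dx, sum |dx|, sum dy, sum |dy|) -> a 16*4 = 64-dim vector."""
    desc = []
    for by in range(4):
        for bx in range(4):
            sdx = sadx = sdy = sady = 0.0
            for y in range(by * 5, by * 5 + 5):
                for x in range(bx * 5, bx * 5 + 5):
                    sdx += dx[y][x]
                    sadx += abs(dx[y][x])
                    sdy += dy[y][x]
                    sady += abs(dy[y][x])
            desc.extend([sdx, sadx, sdy, sady])
    return desc

dx = [[1.0] * 20 for _ in range(20)]
dy = [[-1.0] * 20 for _ in range(20)]
desc = haar_descriptor(dx, dy)
```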
Obtaining the CS-LBP descriptor of each feature point:
The SURF algorithm has some limitations. Because of the Haar wavelet features, when the image rotates, the computed Haar wavelet responses dx and dy change, leading to different feature point directions and description vectors. Therefore, under large image rotation and illumination changes, the matching performance of the SURF algorithm is poor. By changing the SURF detector and descriptor, images, in particular rotated and perspective images, can be handled more effectively without slowing down the computation.
The local binary pattern (LBP) is a texture feature that is very robust to illumination changes; it describes the relation between a pixel of a gray image and its surrounding pixels in binary form and was originally used to describe texture images. The LBP operator describes the center pixel by the relative gray values of its neighborhood pixels, binarizing the surrounding neighborhood pixels.
For each detected feature point, SURF has obtained the position, scale and direction information. Centered on Pi, an 11 × 11 image region around the feature point is selected and its CS-LBP descriptor is extracted, specifically comprising the following steps:
Step S301, let Pi(x, y, s, θ) be a detected feature point, where (x, y) is the position of the point in the original image, and s and θ are its scale and principal direction.
Step S302, centered on the feature point Pi, take the pixel values of N points on a circle of radius R, compare the gray value of each sampled pixel on the circle with the gray value of the center pixel, and generate a 16-dimensional description vector Ti through the center-symmetric coding rule of CS-LBP.
The CS-LBP feature point description compares center-symmetric pixel pairs, replacing the LBP scheme in which every pixel is compared in turn with the center point. As shown in Fig. 7, for the pixels of an 8-neighborhood, LBP produces 256 different binary values. To simplify the binary values obtained by the operator, LBP is extended to the center-symmetric local binary pattern. Compared with the traditional LBP, CS-LBP compares the gray values of the 2 center-symmetric pixels of each pair: if the difference exceeds the threshold, the bit is 1, otherwise it is 0.
Specifically, the center-symmetric coding rule of CS-LBP is as follows:
CS-LBP = Σ_{i=0}^{N/2−1} s(n_i − n_{i+N/2}) · 2^i,  with s(x) = 1 if x > T', and s(x) = 0 otherwise,
where, of the N pixels evenly distributed on the circle of radius R, n_i and n_{i+N/2} are the gray values of a center-symmetric pixel pair, and T' is a threshold with value range [0, 1]. Through the center-symmetric idea, CS-LBP encodes the changes of the image in 4 directions and reduces the feature dimension to 16 (preferably, taking R = 2, N = 8 and T' = 0.01 gives a CS-LBP descriptor of moderate dimension and optimal matching effect), while still guaranteeing rich texture information.
Step S303, normalizing the feature vector obtained above:
T_i ← T_i / ||T_i||
To eliminate the influence of illumination changes, the feature vector obtained above is normalized.
In summary, the result is a one-dimensional vector Ti describing the region around the feature point Pi. The vector Ti gives the descriptor rotation invariance, and normalizing the description vector provides robustness to illumination changes.
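Steps S301 to S303 can be sketched in code (a simplified sketch with R = 1 and the 8 immediate neighbors instead of interpolated circle samples, so the pairing logic and the 16-bin result stay visible; the patent's preferred R = 2 sampling would follow the same pattern):

```python
import math

# Offsets of the 8 neighbors in circular order (R = 1 approximation).
OFFS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def cs_lbp_code(patch, x, y, t=0.01):
    """4-bit CS-LBP code: compare the 4 center-symmetric neighbor
    pairs (n_i, n_{i+4}); bit i is 1 if n_i - n_{i+4} > t."""
    code = 0
    for i in range(4):
        ax, ay = OFFS[i]
        bx, by = OFFS[i + 4]
        if patch[y + ay][x + ax] - patch[y + by][x + bx] > t:
            code |= 1 << i
    return code

def cs_lbp_descriptor(patch, t=0.01):
    """Normalized 16-bin histogram of CS-LBP codes over the patch
    interior: a 16-dimensional vector as in steps S302/S303."""
    hist = [0.0] * 16
    h, w = len(patch), len(patch[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[cs_lbp_code(patch, x, y, t)] += 1.0
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]

flat = [[0.5] * 11 for _ in range(11)]   # an 11x11 flat patch
desc = cs_lbp_descriptor(flat)
```

On a flat patch every pairwise difference is below the threshold, so all codes fall into bin 0, illustrating the robustness of the coding on flat texture regions mentioned above.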
Step S4, using the descriptors of the feature points obtained above, with the CS-LBP and Haar wavelet descriptions both computed from the image region centered on each feature point, the matching pairs of the Haar descriptors and of the CS-LBP descriptors are obtained respectively, and the matching pairs between the two groups of feature points are determined by a matching strategy based on the nearest neighbor (NN) and the nearest neighbor distance ratio (NNDR). This comprises the following steps:
Step S401, obtaining the matching pairs of the Haar descriptors;
The Haar descriptors use the nearest neighbor (NN), obtaining matching pairs with the Euclidean distance:
NN = sqrt( Σ_{i=1}^{64} (d_R(i) − d_L(i))² )
where d_R(i) and d_L(i) are respectively the i-th components of the Haar feature descriptor vectors in the two images to be matched.
Step S402, obtaining the matching pairs of the CS-LBP descriptors;
The CS-LBP descriptors use the nearest neighbor distance ratio (NNDR), obtaining matching pairs with the Euclidean distance:
NNDR = sqrt( Σ_i (d_R'(i) − d_{L,1}(i))² ) / sqrt( Σ_i (d_R'(i) − d_{L,2}(i))² )
where d_R'(i) is the i-th component of a CS-LBP feature descriptor vector in one image to be matched, and d_{L,1}(i) and d_{L,2}(i) are respectively the i-th components of the CS-LBP feature descriptors of its nearest and second-nearest neighbor feature points in the other image.
Step S403, choosing from the matching pairs of the Haar descriptors and CS-LBP descriptors obtained above, the matching pairs between the feature points are determined.
The obtained NN is multiplied by the NNDR, and the resulting value is taken as a feature value; if the feature value is smaller than the set threshold T, i.e. NN × NNDR ≤ T, the pair is regarded as a feature point matching pair.
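The combined NN × NNDR test can be sketched as follows (a toy example with 2-dimensional descriptors instead of 64- and 16-dimensional ones, and an illustrative threshold t = 0.5; the patent does not fix a numeric value for T):

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_features(haar_l, haar_r, cslbp_l, cslbp_r, t=0.5):
    """For each left feature: NN = nearest Haar distance in the right
    image; NNDR = ratio of nearest to second-nearest CS-LBP distance.
    Accept the pair when NN * NNDR <= t."""
    matches = []
    for i in range(len(haar_l)):
        haar_d = [euclid(haar_l[i], h) for h in haar_r]
        j = min(range(len(haar_d)), key=haar_d.__getitem__)
        nn = haar_d[j]
        lbp_d = sorted(euclid(cslbp_l[i], c) for c in cslbp_r)
        nndr = lbp_d[0] / lbp_d[1] if lbp_d[1] > 0 else 1.0
        if nn * nndr <= t:
            matches.append((i, j))
    return matches

# Feature 0 has a clear twin in the right image; feature 1 does not.
haar_l = [[0.0, 0.0], [5.0, 5.0]]
haar_r = [[0.1, 0.0], [9.0, 9.0]]
cslbp_l = [[1.0, 0.0], [0.5, 0.5]]
cslbp_r = [[0.9, 0.1], [0.6, 0.4]]
matches = match_features(haar_l, haar_r, cslbp_l, cslbp_r)
```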
Step S5:Error hiding is rejected to screening to Feature Points Matching obtained above using RANSAC methods.
For the two images matching pair obtained in step S4, using RANSAC (Random Sample Consensus) Method carries out rejecting error hiding.RANSAC algorithms are the typical methods that matching carries out purification parameter Estimation.Core concept is:It is first First, L groups data are randomly selected in initial data to estimate the initial value of object function respectively, object function has required solution Key problem is designed and obtained, and secondly, draws the corresponding two classes data of every group of initial parameter value, one kind is at the beginning of group parameter belonging to satisfaction The data point of initial value be known as in point, it is another kind of it is ungratified be exterior point.Then count every group of parameter it is corresponding in count out, interior point The more parameter model of number is better, and it is initial finally to find out the corresponding optimal parameter of object function according to the judgment criteria of setting Value.Specifically include following steps:
Step S501: randomly select L feature-point matching pairs from the matching pairs, and compute all parameters of the transformation matrix H, obtaining all parameters of the model M.
Step S502: use the model M with the parameters obtained above to test the remaining feature-point matching pairs in the data set; a pair that satisfies the model is regarded as a hypothetical inlier.
Step S503: set a threshold T″; if the number of inliers exceeds the threshold, save the inliers and perform the next step; otherwise return to step S501.
Step S504: repeat the above process N times, counting the inliers each time; the set with the largest number of inliers is taken as the preliminary set of accurate matching pairs.
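Steps S501–S504 can be sketched as follows. For brevity the model is a simple 2-D translation rather than the full transformation matrix H, and the iteration count, tolerance, and minimum inlier count are illustrative values, not from the patent:

```python
import random

def ransac_translation(matches, n_iters=100, inlier_tol=2.0, min_inliers=4, seed=0):
    """RANSAC loop of steps S501-S504, using a 2-D translation model
    in place of the full transformation matrix H (simplifying assumption)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.choice(matches)          # S501: minimal random sample
        dx, dy = x2 - x1, y2 - y1                         # model M: estimated translation
        inliers = [m for m in matches                     # S502: test the other pairs
                   if abs(m[1][0] - m[0][0] - dx) <= inlier_tol
                   and abs(m[1][1] - m[0][1] - dy) <= inlier_tol]
        if len(inliers) >= min_inliers and len(inliers) > len(best_inliers):
            best_inliers = inliers                        # S503/S504: keep largest inlier set
    return best_inliers
```

The same loop structure applies to the homography case: the minimal sample then needs at least four matching pairs, and the inlier test compares reprojection error under H against the tolerance.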
Step S6: fuse the images after mismatch rejection to obtain the stitched image.
The images after the above mismatch rejection are fused using the fade-in/fade-out method. Specifically, after the transformation matrix H has been estimated, let the two images to be fused be I1(x, y) and I2(x, y); I1 is transformed into the coordinate system of I2 using bilinear interpolation. The pixel value I(x, y) of the overlapping-region image under weighted-average fusion can then be expressed as:

I(x, y) = d1·I1(x, y) + d2·I2(x, y)

In the formula, I1(x, y) and I2(x, y) are the two images to be fused, and d1 and d2 are the pixel weights of the corresponding overlapping regions of the two images, satisfying d1 + d2 = 1.
The weights are chosen using the smooth fade-in/fade-out method, which takes the Euclidean distance from a pixel to the image center as the basis for selecting the weights. The fusion formula is as follows:

I(x, y) = d·I1(x, y) + (1 − d)·I2(x, y)

d = (x2 − x) / (x2 − x1)

where x1 and x2 are the minimum and maximum x-axis values of the overlapping region, and d is a gradient coefficient: as d changes gradually from 1 to 0, the image transitions smoothly from I1(x, y) to I2(x, y), which is the stitched image.
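A minimal sketch of the fade-in/fade-out fusion above, applied to a single image row (the pixel values and overlap bounds in the example are illustrative):

```python
def blend_row(row1, row2, x1, x2):
    """Fade-in/fade-out fusion of one image row over the overlap [x1, x2]:
    I(x) = d*I1(x) + (1-d)*I2(x) with d = (x2 - x) / (x2 - x1)."""
    out = []
    for x in range(len(row1)):
        if x < x1:                       # left of the overlap: only I1 contributes
            out.append(row1[x])
        elif x > x2:                     # right of the overlap: only I2 contributes
            out.append(row2[x])
        else:
            d = (x2 - x) / (x2 - x1)     # gradient coefficient: 1 -> 0 across the overlap
            out.append(d * row1[x] + (1 - d) * row2[x])
    return out
```

Because d varies linearly across the overlap, the seam between the two images fades gradually instead of showing an abrupt intensity step.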
In conclusion an embodiment of the present invention provides a kind of image based on SURF feature extraction combination CS-LBP descriptors Joining method, characteristic point information is extracted using SURF, using on x and y directions the response of Harr small echos can come identify point of interest Direction is reappeared, is described for each characteristic point using CS-LBP character description methods, gradually going out method using smooth being fade-in realizes The splicing of image.CS-LBP uses central symmetry comparative approach, operational efficiency higher more simplified relative to traditional LBP.SURF and The method that CS-LBP is combined keep SURF speed is fast, can handle in real time on the basis of, to large area rotation, the complicated figure of illumination The splicing effect of picture is good.
It will be understood by those skilled in the art that all or part of the flow of the above embodiment method can be completed by a computer program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium, the computer-readable storage medium being, for example, a magnetic disk, an optical disc, a read-only memory or a random access memory.
The foregoing is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to any person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention.

Claims (10)

1. An image stitching method based on SURF feature extraction combined with CS-LBP descriptors, characterized by comprising the following steps:
extracting the feature point information of the images to be stitched using SURF;
computing the Haar descriptor and the CS-LBP descriptor of each of the above feature points;
determining the feature-point matching pairs of the images to be stitched using the Haar descriptors and CS-LBP descriptors obtained above;
fusing the images according to the result of the above feature-point matching using the smooth fade-in/fade-out method to obtain the stitched image.
2. The method according to claim 1, characterized in that computing the CS-LBP descriptor of each feature point comprises:
selecting a feature point Pi(x, y, s, θ), where (x, y) is the position coordinate of the feature point in the original image, and s and θ are the scale and principal orientation of the feature point, respectively;
taking, with the feature point Pi as center, the gray values of N pixels on a circle of radius R, comparing the gray values of the pixels sampled on the circle with the gray value of the center pixel, and generating a 16-dimensional feature vector according to the center-symmetric coding rule of CS-LBP;
normalizing the feature vector obtained above to obtain the CS-LBP descriptor.
3. The method according to claim 2, characterized in that the center-symmetric coding rule of CS-LBP is:

CS-LBP(u, v) = Σ_{i=0}^{(N/2)−1} s(ni − ni+N/2)·2^i,  with s(x) = 1 if x > T′ and 0 otherwise

where N is the number of pixels evenly distributed on the circle of radius R, ni and ni+N/2 are the gray values of a center-symmetric pair of pixels, T′ is a threshold whose value range is [0, 1], and (u, v) are the coordinates of the feature vector.
4. The method according to claim 2 or 3, characterized in that extracting the feature point information of the images to be stitched using SURF comprises extracting scale information, extracting position information and extracting orientation information.
5. The method according to claim 4, characterized in that computing the Haar descriptor of each feature point comprises: selecting a square frame around the feature point, dividing the square frame into 16 sub-regions, and counting, in each sub-region, the Haar wavelet features of 25 pixels in the horizontal and vertical directions, so that a 16 × 4 = 64-dimensional vector is obtained for each feature point as its Haar descriptor.
6. The method according to claim 5, characterized in that determining the feature-point matching pairs of the images to be stitched comprises: multiplying the nearest-neighbor distance NN of the Haar-descriptor matching pair by the nearest neighbor distance ratio NNDR of the CS-LBP-descriptor matching pair, the resulting value serving as a characteristic value; when the characteristic value is not greater than a set threshold T, i.e. NN × NNDR ≤ T, the pair is regarded as a feature-point matching pair.
7. The method according to claim 6, characterized in that:
the Haar descriptor uses the nearest neighbor NN, the matching pairs being obtained from the Euclidean distance:

NN = sqrt( Σ_{i=1}^{128} (dR(i) − dL(i))² )

where dR(i) and dL(i) are the i-th components of the Haar feature descriptor vectors in the two images to be matched, respectively;
the CS-LBP descriptor uses the nearest neighbor distance ratio NNDR, the matching pairs being obtained from the Euclidean distances:

NNDR = sqrt( Σ_{i=1}^{128} (dR′(i) − dL,1(i))² ) / sqrt( Σ_{i=1}^{128} (dR′(i) − dL,2(i))² )

where dR′(i) is the i-th component of the CS-LBP feature descriptor vector in one image to be matched, and dL,1(i) and dL,2(i) are the i-th components of the CS-LBP feature descriptors of the nearest-neighbor and second-nearest-neighbor feature points, respectively, in the other image.
8. The method according to claim 7, characterized by further comprising screening the feature-point matching pairs obtained above using the RANSAC method to reject mismatches, comprising:
randomly selecting L matching pairs from the feature-point matching pairs obtained above, and computing all parameters of the transformation matrix H to obtain all parameters of the matching image model M;
testing the remaining feature-point matching pairs in the data set using the model M with the parameters obtained above, a pair that satisfies the model being regarded as a hypothetical inlier;
setting a threshold T″; if the number of inliers exceeds the threshold, saving the inliers and performing the next step, otherwise re-performing the random selection and screening of matching pairs;
repeating the above process N times, counting the inliers each time, and taking the set with the largest number of inliers as the preliminary set of accurate matching pairs.
9. The method according to claim 8, characterized in that the matched images are fused using the smooth fade-in/fade-out method, the fusion formula being as follows:

I(x, y) = d·I1(x, y) + (1 − d)·I2(x, y)

d = (x2 − x) / (x2 − x1)

where I(x, y) is the fused image, I1(x, y) and I2(x, y) are the two images to be fused, d and 1 − d are the pixel weights of the corresponding overlapping regions of the two images, x1 and x2 are the minimum and maximum x-axis values of the overlapping region, and d is a gradient coefficient: as d changes gradually from 1 to 0, the image transitions smoothly from I1(x, y) to I2(x, y), which is the stitched image.
10. The method according to any one of claims 5 to 9, characterized by further comprising pre-processing the images to be stitched.
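As an illustrative sketch (not part of the claims), the center-symmetric coding rule referenced in claim 3 can be written as follows, assuming N gray values sampled on the circle are given as a list; with N = 8 the code takes one of 16 values, matching the 16-dimensional feature vector of claim 2:

```python
def cs_lbp(samples, T=0.01):
    """Center-symmetric LBP code for N gray values sampled on a circle:
    compare each pixel n_i with its diametrically opposite n_{i+N/2}."""
    half = len(samples) // 2
    code = 0
    for i in range(half):
        if samples[i] - samples[i + half] > T:   # s(n_i - n_{i+N/2}) with threshold T
            code |= 1 << i                       # contribute weight 2^i
    return code
```

Only N/2 comparisons are made (each opposite pair once), which is why CS-LBP is more compact and cheaper to compute than the traditional LBP comparison of all N neighbors against the center pixel.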
CN201711143013.4A 2017-11-17 2017-11-17 Image stitching method based on SURF (speeded up robust features) feature extraction and CS-LBP (local binary Pattern) descriptor Active CN107945111B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711143013.4A CN107945111B (en) 2017-11-17 2017-11-17 Image stitching method based on SURF (speeded up robust features) feature extraction and CS-LBP (local binary Pattern) descriptor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711143013.4A CN107945111B (en) 2017-11-17 2017-11-17 Image stitching method based on SURF (speeded up robust features) feature extraction and CS-LBP (local binary Pattern) descriptor

Publications (2)

Publication Number Publication Date
CN107945111A true CN107945111A (en) 2018-04-20
CN107945111B CN107945111B (en) 2021-07-27

Family

ID=61931628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711143013.4A Active CN107945111B (en) 2017-11-17 2017-11-17 Image stitching method based on SURF (speeded up robust features) feature extraction and CS-LBP (local binary Pattern) descriptor

Country Status (1)

Country Link
CN (1) CN107945111B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108038488A (en) * 2017-12-06 2018-05-15 河海大学常州校区 The robustness image hash method mixed based on SIFT and LBP
CN109102534A (en) * 2018-08-29 2018-12-28 长光卫星技术有限公司 Optical remote sensing image registration method and system under the conditions of haze weather
CN109949227A (en) * 2019-03-12 2019-06-28 湖北文理学院 Image split-joint method, system and electronic equipment
CN110246168A (en) * 2019-06-19 2019-09-17 中国矿业大学 A kind of feature matching method of mobile crusing robot binocular image splicing
CN110929084A (en) * 2019-12-17 2020-03-27 徐庆 Method and device for acquiring image shape feature descriptor
CN110969594A (en) * 2019-11-29 2020-04-07 广东优世联合控股集团股份有限公司 Image splicing method
CN111814711A (en) * 2020-07-15 2020-10-23 中国矿业大学 Image feature fast matching method and system applied to mine machine vision
CN114866853A (en) * 2022-04-12 2022-08-05 咪咕文化科技有限公司 Live broadcast interaction method, device, equipment and storage medium
CN116824183A (en) * 2023-07-10 2023-09-29 北京大学 Image feature matching method and device based on multiple feature descriptors

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376548A (en) * 2014-11-07 2015-02-25 中国电子科技集团公司第二十八研究所 Fast image splicing method based on improved SURF algorithm
CN104933434A (en) * 2015-06-16 2015-09-23 同济大学 Image matching method combining length between perpendiculars (LBP) feature extraction method and surf feature extraction method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376548A (en) * 2014-11-07 2015-02-25 中国电子科技集团公司第二十八研究所 Fast image splicing method based on improved SURF algorithm
CN104933434A (en) * 2015-06-16 2015-09-23 同济大学 Image matching method combining length between perpendiculars (LBP) feature extraction method and surf feature extraction method

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
C.J. PRABHAKAR et al.: "LBP-SURF Descriptor with Color Invariant and Texture based features for underwater images", Proceedings of the Eighth Indian Conference on Computer Vision, Graphics and Image Processing *
WANKOU YANG, ZHANYU WANG, BAOCHANG ZHANG: "Face recognition using adaptive local ternary patterns method", Neurocomputing *
Hou Honghua: "Digital Image Processing and Analysis", Beijing Institute of Technology Press, 30 September 2011 *
Liu Dan, Hu Yongjian, Liu Beibei: "Copy-paste tampering detection combining SIFT feature points and CS-LBP feature descriptors", Journal of Hefei University of Technology (Natural Science Edition) *
Cao Junyu: "Research on image stitching algorithms based on SURF", China Master's Theses Full-text Database *
Duan Lianfei: "UAV Image Processing", Northwestern Polytechnical University Press, 30 September 2017 *
Zhao Xingcai: "Research on distortion correction and stitching of parking space images", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108038488A (en) * 2017-12-06 2018-05-15 河海大学常州校区 The robustness image hash method mixed based on SIFT and LBP
CN108038488B (en) * 2017-12-06 2021-09-17 河海大学常州校区 Robustness image hashing method based on SIFT and LBP mixing
CN109102534B (en) * 2018-08-29 2020-09-01 长光卫星技术有限公司 Optical remote sensing image registration method and system under haze weather condition
CN109102534A (en) * 2018-08-29 2018-12-28 长光卫星技术有限公司 Optical remote sensing image registration method and system under the conditions of haze weather
CN109949227A (en) * 2019-03-12 2019-06-28 湖北文理学院 Image split-joint method, system and electronic equipment
CN110246168A (en) * 2019-06-19 2019-09-17 中国矿业大学 A kind of feature matching method of mobile crusing robot binocular image splicing
CN110969594A (en) * 2019-11-29 2020-04-07 广东优世联合控股集团股份有限公司 Image splicing method
CN110929084A (en) * 2019-12-17 2020-03-27 徐庆 Method and device for acquiring image shape feature descriptor
CN110929084B (en) * 2019-12-17 2023-04-11 徐庆 Method and device for acquiring image shape feature descriptor
CN111814711A (en) * 2020-07-15 2020-10-23 中国矿业大学 Image feature fast matching method and system applied to mine machine vision
CN111814711B (en) * 2020-07-15 2023-08-08 中国矿业大学 Image feature quick matching method and system applied to mine machine vision
CN114866853A (en) * 2022-04-12 2022-08-05 咪咕文化科技有限公司 Live broadcast interaction method, device, equipment and storage medium
CN116824183A (en) * 2023-07-10 2023-09-29 北京大学 Image feature matching method and device based on multiple feature descriptors
CN116824183B (en) * 2023-07-10 2024-03-12 北京大学 Image feature matching method and device based on multiple feature descriptors

Also Published As

Publication number Publication date
CN107945111B (en) 2021-07-27

Similar Documents

Publication Publication Date Title
CN107945111A (en) A kind of image split-joint method based on SURF feature extraction combination CS LBP descriptors
CN106485183B (en) A kind of Quick Response Code localization method and system
Juan et al. SURF applied in panorama image stitching
CN105245841B (en) A kind of panoramic video monitoring system based on CUDA
CN104376548B (en) A kind of quick joining method of image based on modified SURF algorithm
CN111080529A (en) Unmanned aerial vehicle aerial image splicing method for enhancing robustness
CN103679636B (en) Based on point, the fast image splicing method of line double characteristic
US20130208997A1 (en) Method and Apparatus for Combining Panoramic Image
Cao et al. Edge-based Blur Metric for Tamper Detection.
CN104392416B (en) Video stitching method for sports scene
CN104809731B (en) A kind of rotation Scale invariant scene matching method based on gradient binaryzation
CN104933434A (en) Image matching method combining length between perpendiculars (LBP) feature extraction method and surf feature extraction method
Lo et al. Joint trilateral filtering for depth map super-resolution
CN108171674B (en) Vision correction method for projector image with any visual angle
CN109859226A (en) A kind of detection method of the X-comers sub-pix of figure segmentation
Kim et al. Multiple level feature-based universal blind image quality assessment model
CN105335952B (en) Matching power flow computational methods and device and parallax value calculating method and equipment
CN107968916A (en) A kind of fast video digital image stabilization method suitable for on-fixed scene
Yue et al. Recaptured screen image demoiréing
CN109658366A (en) Based on the real-time video joining method for improving RANSAC and dynamic fusion
CN108154147A (en) The region of interest area detecting method of view-based access control model attention model
CN103679672A (en) Panorama image splicing method based on edge vertical distance matching
Nam et al. Deep convolutional neural network for identifying seam-carving forgery
CN109559273A (en) A kind of quick joining method towards vehicle base map picture
CN105740874B (en) Determine the method and device of operation coordinate when automatic test script playback

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Chen Liangliang

Inventor after: Gao Rui

Inventor after: Cheng Deqiang

Inventor after: Yao Jie

Inventor after: Liu Hai

Inventor after: Li Yan

Inventor after: Zhao Guangyuan

Inventor before: Cheng Deqiang

Inventor before: Li Yan

Inventor before: Chen Liangliang

Inventor before: Yao Jie

Inventor before: Gao Rui

Inventor before: Zhao Guangyuan

Inventor before: Liu Hai

GR01 Patent grant
GR01 Patent grant