CN107016646A - Image stitching method based on an improved as-projective-as-possible projective transformation - Google Patents
Image stitching method based on an improved as-projective-as-possible projective transformation
- Publication number
- CN107016646A (application CN201710236732.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- point
- matrix
- follows
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Abstract
Disclosed is an image stitching method based on an improved as-projective-as-possible (APAP) projective transformation, relating to the technical field of image processing. The method comprises: S1, image preprocessing; S2, image feature point extraction; S3, image feature point matching based on an optical flow pyramid. The preprocessing of S1 includes noise suppression, texture and contrast enhancement, and histogram normalization; it further includes calibrating the camera lens, establishing a corresponding mathematical model according to the cause of image distortion, and correcting contaminated or distorted images using the calibration parameters. In S2, image feature points are extracted using BRISK and/or ORB feature descriptors. The present invention improves on the APAP image stitching method by adding a preprocessing step before feature extraction, creatively employing the ORB and/or BRISK binary feature descriptors in the feature extraction step, and creatively employing an optical flow pyramid in the matching step, making image stitching time-saving and the registration accurate.
Description
Technical field
The invention belongs to the technical field of image processing, and more particularly to the field of image stitching.
Background technology
Image stitching is the technique of stitching several overlapping images into one seamless, large-scale, high-resolution image. When the scene image of a wide field of view is captured with an ordinary camera, the camera resolution is fixed, so the larger the captured scene, the lower the resolution of the resulting image; panoramic cameras, wide-angle lenses and the like are not only very expensive but also suffer severe distortion. To obtain an ultra-wide field of view, or even a 360-degree panorama, without reducing image resolution, image stitching methods using computers have emerged.
Image stitching is broadly divided into three main steps: image preprocessing, image registration, and image fusion. Image preprocessing mainly refers to operations performed before image registration, such as noise suppression, texture and contrast enhancement, and histogram normalization, so that no obvious difference remains between the reference image and the image to be registered. Image preprocessing mainly prepares images for registration, allowing image quality to meet the requirements of registration; if registration is performed without preprocessing when image quality is poor, mismatches between sub-regions or between feature points of the two images easily occur.
Image registration mainly refers to finding the best-matching feature point pairs or image sub-region pairs based on the key features or gray-level information of the reference image and the image to be registered, computing the motion vector between each feature point pair or sub-region pair, and estimating the global linear or nonlinear motion transformation parameters between the two images. The global motion transformation parameters are generally estimated by the least squares method, and, owing to processing-time constraints, parameter estimation and feature point matching are completed in a single iteration.
Image fusion refers to splicing and suturing the images after matching is completed, and smoothing the border of the suture so that the region along the seam transitions naturally.
The key to successful image stitching is the registration quality of the images. However, between ordinary images to be registered, different target regions may undergo a variety of nonlinear transformations, or large areas without obvious features may exist (such as uniformly textured regions or single-colored regions); these situations greatly increase the difficulty of image registration.
In the prior art, the as-projective-as-possible (APAP) image stitching method is generally divided into four steps: feature extraction, feature matching, solving the homography (H) matrix, and image fusion. The feature extraction step uses the SIFT feature descriptor, so matching in the later stages is slow and registration is inaccurate.
Summary of the invention
To overcome the above shortcomings, the present invention provides an accurately registered and time-saving image stitching method that achieves seamless image stitching.
An image stitching method based on an improved as-projective-as-possible projective transformation comprises: S1, image preprocessing; S2, image feature point extraction; S3, image feature point matching based on an optical flow pyramid; S4, estimating the projective transformation matrix based on the MDLT algorithm and the RANSAC algorithm; S5, computing the projective transformation matrix of each block; S6, seamless stitching of the images.
The preprocessing of S1 includes noise suppression, texture and contrast enhancement, and histogram normalization, and further includes calibrating the camera lens, establishing a corresponding mathematical model according to the cause of image distortion, and correcting contaminated or distorted images using the calibration parameters.
In S2, image feature points are extracted using BRISK and/or ORB feature descriptors.
Further, the extraction process of the BRISK feature descriptor is as follows:
Step 1: build the scale space.
During BRISK feature point detection, the scale-space image pyramid consists of n octave layers and n intra-octave layers, with one intra-octave layer between every two adjacent octave layers. The i-th octave layer is denoted ci and the i-th intra-octave layer di, where i = {0, 1, ..., n-1}. The scale t of the ci and di layers relative to the original image is expressed as: t(ci) = 2^i, t(di) = 1.5 * 2^i.
Step 2: feature point detection.
With n = 4 in step 1, 8 images are obtained, and FAST9-16 corner detection is performed on these 8 images. A FAST5-8 corner detection is additionally performed on the original image img as the virtual layer d(-1), yielding 9 images with corner information.
Step 3: non-maximum suppression.
Step 4: feature point position fitting.
First, in the FAST-score neighborhoods of the scale image where the feature point lies, the previous scale image, and the next scale image, a least-squares two-dimensional quadratic interpolation of the FAST scores is performed to obtain the score extremum and its accurate coordinate position; a one-dimensional interpolation over scale is then performed to obtain the scale corresponding to the extremum.
Step 5: compute the feature descriptor of the feature point.
N points are sampled around the feature point: the sampling pattern builds concentric circles of different radii centered on the feature point, and a certain number of equally spaced sample points are then taken on each concentric circle, yielding N sample points including the feature point. Gaussian filtering with variance δ is applied to each sample point, with the filter radius proportional to the standard deviation of the Gaussian filter kernel; the N sample points finally used are those after Gaussian smoothing, with N in the range 50 to 70.
With N sample points combined in pairs, there are N(N-1)/2 combinations; the set of all pairing modes is called the sample-point pair set A, expressed as follows:
A = { (Pi, Pj) ∈ R² × R² | i < N, j < i, i, j ∈ N },
where Pi and Pj denote the i-th and j-th sample points.
Let the gray values of the two sample points after Gaussian filtering be I(Pi, σi) and I(Pj, σj), where σ denotes the radius; then the local gradient g(Pi, Pj) between the two points is:
g(Pi, Pj) = (Pj − Pi) · (I(Pj, σj) − I(Pi, σi)) / ||Pj − Pi||².
The above set is divided into two subsets according to the distance between the two points: the short-distance pair subset S = { (Pi, Pj) ∈ A | ||Pj − Pi|| < θmax } and the long-distance pair subset L = { (Pi, Pj) ∈ A | ||Pj − Pi|| > θmin },
where the thresholds are θmax = 9.75t and θmin = 13.67t, t representing the scale of the feature point.
The direction of the feature point is computed as:
g = (gx, gy)ᵀ = (1/l) · Σ_{(Pi,Pj)∈L} g(Pi, Pj),
where l is the number of elements of the subset L, gx represents the gradient of the feature point in the x direction, gy the gradient in the y direction, and g the gradient of the feature point in the x and y directions.
When computing the BRISK feature descriptor, the sampling area is first rotated by the angle α = arctan2(gy, gx), and resampling is then performed in the rotated region, yielding N(N-1)/2 sampling pairs; 512 short-distance pairs from the subset S are selected and compared to ultimately produce the descriptor, bit by bit:
b = 1 if I(Pjᵅ, σj) > I(Piᵅ, σi), otherwise b = 0, for (Piᵅ, Pjᵅ) ∈ S,
where Pᵅ denotes a sample point obtained after rotation and b the value of each bit in the feature descriptor, which can only be 0 or 1.
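The pair partition and bit generation of step 5 can be sketched with NumPy. This is a minimal illustration, not the patented implementation: the sample points and their (already Gaussian-smoothed) gray values are assumed given, and the names `partition_pairs` and `descriptor_bits` are placeholders.

```python
import numpy as np

def partition_pairs(points, t, theta_max=9.75, theta_min=13.67):
    """Split all N(N-1)/2 sample-point pairs into the short-distance
    subset (dist < theta_max*t) and long-distance subset (dist > theta_min*t)."""
    short, long_ = [], []
    n = len(points)
    for i in range(n):
        for j in range(i):
            d = np.linalg.norm(points[i] - points[j])
            if d < theta_max * t:
                short.append((i, j))
            if d > theta_min * t:
                long_.append((i, j))
    return short, long_

def descriptor_bits(values, short_pairs, nbits=512):
    """One descriptor bit per short-distance pair: 1 if I(Pi) > I(Pj), else 0."""
    bits = [1 if values[i] > values[j] else 0 for i, j in short_pairs[:nbits]]
    return np.array(bits, dtype=np.uint8)
```

With three collinear sample points at distances 1 and 20, the pair at distance 1 falls in the short subset and the two distant pairs in the long subset, matching the thresholds above.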
Further, the extraction process of the ORB feature descriptor is as follows:
Step 1: FAST-based feature point extraction.
For a given original image img, the process of judging whether a point P on the image is a feature point is:
1) Let the gray value of the point be I(x, y). Draw a circle of radius 3 pixels centered on P; there are 16 pixels on the boundary of this circle. Set a suitable threshold t: when the absolute value of the difference between the gray values of two points is greater than t, the two points are considered to differ. If m consecutive points on the circle differ from the point P, with m ∈ 9 to 12, then P is considered a feature point.
2) Using the Harris corner measure, the N corners with the largest Harris corner responses are picked out from the FAST feature points, where the response function of the Harris corner is defined as:
R = det M − α (trace M)²,
with M = Σ_w [Ix², IxIy; IxIy, Iy²], where Ix represents the partial derivative of image I in the x direction, Iy the partial derivative of image I in the y direction, det M the determinant of matrix M, trace M the trace of matrix M, α a constant with value range 0.04 to 0.06, and R the Harris corner response.
3) The scale-space image pyramid consists of n octave layers and n intra-octave layers, with one intra-octave layer between every two adjacent octave layers. The i-th octave layer is denoted ci and the i-th intra-octave layer di, where i = {0, 1, ..., n-1}. Given the image img, the octave layers are produced as follows: layer c0 is the original image img, layer c1 is a 2× downsampling of c0, layer c2 a 2× downsampling of c1, and so on. The intra-octave layers are produced as follows: layer d0 is a 1.5× downsampling of img, layer d1 is a 2× downsampling of d0, layer d2 a 2× downsampling of d1, and so on. The scale t of the ci and di layers relative to the original image is expressed as:
t(ci) = 2^i, t(di) = 1.5 * 2^i.
4) The direction of every feature point detected in 3) is computed by the gray-level centroid method: the centroid within radius r of the feature point is computed via moments, and the vector from the feature point coordinate to the centroid gives the direction of the feature point. For any feature point P, the moments of the neighborhood pixels of P are defined as:
m_pq = Σ_{x,y} x^p y^q I(x, y),
where I(x, y) represents the gray value at point (x, y).
The centroid C of the moments is:
C = (m10 / m00, m01 / m00).
Taking the coordinate of the corner P as the origin O, the angle of the vector OC is the direction of the feature point, computed as:
θ = arctan2(m01, m10).
Step 2: BRIEF-based feature point description.
The feature points computed in step 1 above carry an orientation angle. For a feature point P obtained in step 1, its BRIEF (Binary Robust Independent Elementary Features) descriptor is a binary sequence of length n; this binary string is generated from n point pairs (2n points) around the feature point. These 2n points (xi, yi), i = 1, 2, ..., 2n, form a matrix S:
S = [x1, x2, ..., x2n; y1, y2, ..., y2n].
Rotating by the rotation angle θ gives the new point pairs:
Sθ = Rθ S,
where:
Rθ = [cos θ, −sin θ; sin θ, cos θ].
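The gray-level centroid orientation and the steered BRIEF rotation above can be illustrated with a minimal NumPy sketch. The function names are illustrative, and the moment sums assume a small square patch centered on the keypoint.

```python
import numpy as np

def orientation(patch):
    """Intensity-centroid orientation theta = arctan2(m01, m10),
    with moments taken about the patch centre (the keypoint)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0   # coordinates relative to the keypoint
    ys -= (h - 1) / 2.0
    m10 = np.sum(xs * patch)
    m01 = np.sum(ys * patch)
    return np.arctan2(m01, m10)

def steer_pairs(S, theta):
    """Rotate the 2 x 2n matrix of BRIEF test points: S_theta = R_theta @ S."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ S
```

A patch whose bright mass lies to the right of the keypoint yields theta near 0; bright mass below yields theta near π/2, consistent with θ = arctan2(m01, m10).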
Further, the computation of S3, image feature point matching based on the optical flow pyramid, is as follows:
Step 1: downsample the reference gray image Ig and the registered gray image Ik separately to obtain the image pyramids of Ig and Ik, i.e., convolve layer I^L with a [0.25 0.5 0.25] low-pass filter and subsample:
I^{L+1}(x, y) = Σ_{i=−1..1} Σ_{j=−1..1} w(i) w(j) I^L(2x + i, 2y + j), with w(−1) = w(1) = 0.25, w(0) = 0.5,
where I^L_g(x, y) denotes the pixel value at pixel coordinates (x, y) in layer L of the reference gray image Ig. The second layers I²_g and I²_k of the image pyramids are obtained by this downsampling formula; downsampling the second layers I²_g and I²_k yields the third layers I³_g and I³_k of the image pyramids; the number of pyramid layers is 3 to 5.
Step 2: compute the optical flow of the feature point p[x, y] in image I^n_g relative to image I^n_k:
dn = [un, vn]ᵀ = (Σ_w [Ix², IxIy; IxIy, Iy²])⁻¹ · (−Σ_w [IxIt, IyIt]ᵀ),
where Ix represents the row-direction derivative of each pixel in the w×w window, Iy the column-direction derivative of each pixel in the w×w window, It the time derivative between images I^n_g and I^n_k, and dn = [un, vn] the optical flow of the feature point p[x, y] in image I^n_g relative to image I^n_k, with un the row-direction flow at pyramid layer n and vn the column-direction flow at layer n; n takes the value 4.
Step 3: the flow dn of layer n is transferred to layer n−1 of the pyramid as the initial flow: the flow dn is superimposed on the pixels q[x, y] of image I^{n−1}_k, changing each pixel coordinate to q[x + 2un, y + 2vn]; then, between image I^{n−1}_g and the flow-superimposed image I^{n−1}_k, the previous step is repeated to compute the flow d_{n−1} = [u_{n−1}, v_{n−1}] of layer n−1 of the pyramid. This process is repeated until the flow of the first pyramid layer, i.e., of the original image, d1 = [u1, v1], is obtained.
Step 4: from the first-layer flow d1 = [u1, v1] and the coordinates of the feature point p[x, y] of the reference gray image Ig, compute the coordinates (x′, y′) of the matching feature point in the registered gray image Ik. If the computed x′ or y′ lies outside the pixel range of the image, the feature point match fails: the feature point q[x′, y′] is discarded and the corresponding feature point p[x, y] is removed from the feature point set D. If the computed x′ and y′ lie within the pixel range of the image, the matching feature point q[x′, y′] is stored in the feature point set D′. The computation is:
x′ = x − u1, y′ = y − v1.
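The pyramid construction and the per-window flow solve can be sketched in NumPy for a single feature point. This is a simplification of the full coarse-to-fine iteration (no warping loop), with illustrative names `downsample` and `lk_flow`.

```python
import numpy as np

def downsample(img):
    """One pyramid level: separable [0.25 0.5 0.25] low-pass, then 2x subsampling."""
    k = np.array([0.25, 0.5, 0.25])
    sm = np.apply_along_axis(np.convolve, 1, img, k, 'same')
    sm = np.apply_along_axis(np.convolve, 0, sm, k, 'same')
    return sm[::2, ::2]

def lk_flow(I0, I1, x, y, w=7):
    """Single-level Lucas-Kanade: least-squares solve of [Ix Iy] d = -It
    over a (2w+1)^2 window centred on the feature point (x, y)."""
    gy, gx = np.gradient(I0)                 # row (y) and column (x) derivatives
    It = I1 - I0                             # temporal derivative
    win = (slice(y - w, y + w + 1), slice(x - w, x + w + 1))
    A = np.stack([gx[win].ravel(), gy[win].ravel()], axis=1)
    b = -It[win].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                                 # [u, v]
```

On a smooth synthetic blob shifted by one pixel, the window solve recovers a flow close to the true displacement; in the pyramidal scheme this solve is simply repeated per layer with the coarser flow as initialization.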
Further, the specific steps of S4, estimating the projective transformation matrix based on the MDLT algorithm and the RANSAC algorithm, are as follows:
Suppose there is a pair of matching points between the two images to be registered: the point x = [x y]ᵀ in the reference image and the point x′ = [x′ y′]ᵀ in the target image. The mapping relation between x and x′ is represented by the homography matrix H:
x̃′ ~ H x̃,
where x̃ represents the homogeneous coordinates of x, ~ denotes equality up to scale, and H is a 3×3 matrix. In inhomogeneous coordinates, there is the following correspondence:
x′ = (r1 x̃) / (r3 x̃), y′ = (r2 x̃) / (r3 x̃),
where rj represents the j-th row of the H matrix; the mapping here is a nonlinear mapping.
Suppose there is a series of N initial matching point pairs {xi, x′i}, i = 1, ..., N, between the reference image and the registered image. Then, for each pair,
0 (3×1) = x̃′ × H x̃,
where h = [h1, h2, h3]ᵀ represents the vectorized form of H and hi the i-th row of the homography matrix.
Let ai denote the first two rows of the coefficient matrix of the above cross-product constraint, with i indexing the i-th matching pair {xi, x′i}. Given an estimate of the h values, the objective function of matrix H is:
ĥ = argmin_h Σ_{i=1..N} ||ai h||², subject to ||h|| = 1,
where ||ai h|| represents the error of the i-th feature point pair, and ĥ is obtained from the singular value decomposition of the stacked matrix A = [a1; ...; aN]. The DLT algorithm applies one homography matrix to the whole image, so it is only applicable when the rotation angle is small. The MDLT algorithm improves on the DLT algorithm: for each pixel x* in the image, a locally dependent H* is computed by weighted estimation for projection mapping:
h* = argmin_h Σ_{i=1..N} ||w*i ai h||², subject to ||h|| = 1,
where the weight w*i is larger the nearer xi is to x*; w*i is computed as:
w*i = exp(−||x* − xi||² / σ²),
where σ is a scale factor. The weights w*i are normalized by taking γ ∈ [0, 1]:
w*i = max(exp(−||x* − xi||² / σ²), γ).
Before the H matrix is computed using MDLT, the DLT-based RANSAC algorithm is first used to remove outlier matches. Suppose an initial homography matrix H(I,J)′; for the feature points p[xi, yi] in the feature set D of image I, compute the intermediate quantity q′[x′i, y′i]:
q′[x′i, y′i] = H(I,J)′⁻¹ · p[xi, yi].
If the computed intermediate vector q′[x′i, y′i] satisfies the formula below, the feature point pair is retained in the feature set S; otherwise the feature point pair is placed in the complement set C. Finally the quantity NUM of matching point pairs retained in the set S is counted:
|q[x′i, y′i] − q′[x′i, y′i]| ≤ 0.1.
If NUM ≥ M, the RANSAC algorithm terminates and the current initial homography matrix H(I,J)′ is taken as the optimal homography matrix H, where M is the sample-count threshold of feature points of the set S.
If NUM < M, it is further judged whether the number of RANSAC iterations is within a preset threshold. If the number of RANSAC iterations is within the threshold, the above procedure is repeated to obtain a new initial homography matrix H(I,J)′; if the number of RANSAC iterations exceeds the set threshold, the RANSAC algorithm terminates and the current initial homography matrix H(I,J)′ is taken as the optimal homography matrix H.
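The weighted DLT estimate at a given pixel x* can be sketched as follows. This is a minimal illustration under the stated weighting w*i = max(exp(−||x*−xi||²/σ²), γ), with illustrative function names and without the RANSAC outlier-removal stage; with `center=None` it reduces to plain DLT.

```python
import numpy as np

def dlt_rows(x, xp):
    """The two independent rows a_i of the cross-product constraint
    for a correspondence x = (X, Y) -> x' = (U, V)."""
    X, Y = x
    U, V = xp
    return np.array([
        [0, 0, 0, -X, -Y, -1,  V * X,  V * Y,  V],
        [X, Y, 1,  0,  0,  0, -U * X, -U * Y, -U],
    ], dtype=float)

def weighted_dlt(pts, pts_p, center=None, sigma=50.0, gamma=0.05):
    """h* = argmin ||W A h|| subject to ||h|| = 1, via SVD: the right
    singular vector of the weighted stacked matrix with smallest singular value."""
    rows = []
    for x, xp in zip(pts, pts_p):
        w = 1.0
        if center is not None:
            w = max(np.exp(-np.sum((np.asarray(center) - x) ** 2) / sigma ** 2), gamma)
        rows.append(w * dlt_rows(x, xp))
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]
    return (h / h[-1]).reshape(3, 3)    # normalize so h33 = 1
```

On exact (noise-free) correspondences, both the unweighted and the locally weighted estimates recover the generating homography up to scale.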
Further, the specific method of S5, computing the projective transformation matrix of each block, is as follows:
Step 1: the original image is uniformly divided into C1 × C2 cells, and the center pixel of each cell is taken as x*; all pixels in the same cell then use the same homography matrix.
Step 2: the homography of each cell is computed using MDLT. Let the weight matrix Wγ = γI, and let V denote the right singular vectors of WγA obtained by singular value decomposition. The eigen-decomposition proceeds as follows: the weight matrix W* of a cell is identical to the matrix Wγ except at the diagonal entries Wi that are not γ; the decomposition of W*A can therefore be obtained from that of WγA by updating only the terms involving the rows ri of the A matrix whose weights differ from γ, diagonalizing the middle matrix of the above characteristic equation to obtain the required singular vectors.
Further, the fusion formula for the seamless stitching of the images is as follows:
m(x, y) = k1 · m1(x, y) + k2 · m2(x, y),
where m1(x, y) and m2(x, y) are respectively the pixel gray values of the preceding reference frame and of the following frame to be stitched, m(x, y) is the pixel gray value after fusion, and k1, k2 are weights satisfying k1 + k2 = 1.
Further, the weights k1 and k2 vary with position across the overlap region, k1 decreasing gradually from 1 to 0 and k2 = 1 − k1 increasing from 0 to 1, so that the seam transitions smoothly.
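Assuming the weights vary linearly across the overlap (a common choice; the patent's exact weight formula is not reproduced here), the fusion m = k1·m1 + k2·m2 can be sketched as:

```python
import numpy as np

def feather_blend(m1, m2, overlap_start, overlap_end):
    """Blend two aligned rows/images over [overlap_start, overlap_end):
    k1 falls linearly from 1 to 0, k2 = 1 - k1 rises from 0 to 1."""
    w = m1.shape[-1]
    k1 = np.ones(w)
    k1[overlap_end:] = 0.0                       # right of overlap: only m2
    span = overlap_end - overlap_start
    k1[overlap_start:overlap_end] = 1.0 - np.arange(span) / span
    return k1 * m1 + (1.0 - k1) * m2
```

Blending two constant images produces a monotonic ramp across the overlap, i.e., the smooth seam transition the fusion step aims for.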
The present invention improves on the image stitching method based on the as-projective-as-possible (APAP) projective transformation by adding a preprocessing step before the feature extraction step, creatively employing the ORB and/or BRISK binary feature descriptors in the feature extraction step, and creatively employing an optical flow pyramid in the matching step, making image stitching time-saving and the registration accurate.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will partly become apparent from that description, or may be learned through practice of the invention.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings required by the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of the image stitching method based on the improved as-projective-as-possible projective transformation in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the invention.
As shown in Fig. 1, an image stitching method approaching a projective transformation according to the present invention comprises the following steps:
S1: image preprocessing.
In an embodiment of the present invention, the multiple original images with partially overlapping regions captured by the camera are preprocessed by noise suppression, texture and contrast enhancement, histogram normalization and the like, so that no obvious difference exists between the images to be stitched.
In addition, when the camera lens is not facing the scene squarely, or the camera lens is a wide-angle or fisheye lens, the captured scene images suffer serious distortion: objects that are identical in the two images may no longer match because of the distortion, which causes great problems for image registration. In an embodiment of the present invention, the camera lens is calibrated, and a corresponding mathematical model is established according to the cause of the image distortion, so that contaminated or distorted images can be corrected using the calibration parameters.
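As an illustration of correcting distortion from calibration parameters, a single-coefficient radial model in normalized coordinates can be inverted by fixed-point iteration. This is a simplified stand-in for the patent's unspecified mathematical model; the names and the single coefficient k1 are assumptions.

```python
import numpy as np

def distort(p, k1):
    """Forward radial distortion in normalized coordinates: p_d = p * (1 + k1*r^2)."""
    r2 = np.sum(p ** 2)
    return p * (1.0 + k1 * r2)

def undistort(pd, k1, iters=20):
    """Invert the radial model by fixed-point iteration p <- pd / (1 + k1*|p|^2),
    which converges quickly when k1*r^2 is small."""
    p = np.asarray(pd, dtype=float).copy()
    for _ in range(iters):
        p = pd / (1.0 + k1 * np.sum(p ** 2))
    return p
```

The principal point is unaffected by radial distortion, and a distorted point is recovered to high accuracy by the iteration.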
S2: image feature point extraction.
Feature point extraction is performed on the multiple images to be stitched after the preprocessing of S1. In an embodiment of the present invention, two kinds of binary feature descriptors are extracted from the multiple images to be stitched: the BRISK (Binary Robust Invariant Scalable Keypoints) feature descriptor and the ORB (Oriented FAST and Rotated BRIEF) feature descriptor.
The BRISK feature descriptor is a binary feature descriptor with good rotation invariance, scale invariance, and robustness, and registers well on images with significant blur.
Step 1: build the scale space.
During BRISK feature point detection, the scale-space image pyramid consists of n octave layers and n intra-octave layers, with one intra-octave layer between every two adjacent octave layers. The i-th octave layer is denoted ci and the i-th intra-octave layer di, where i = {0, 1, ..., n-1}. Given the image img, the octave layers are produced as follows: layer c0 is the original image img, layer c1 is a 2× downsampling of c0, layer c2 a 2× downsampling of c1, and so on. The intra-octave layers are produced as follows: layer d0 is a 1.5× downsampling of img, layer d1 is a 2× downsampling of d0, layer d2 a 2× downsampling of d1, and so on. The scale t of the ci and di layers relative to the original image is expressed as:
t(ci) = 2^i, t(di) = 1.5 * 2^i.
Step 2: feature point detection.
With n = 4 in step 1, a total of 8 images is obtained, and FAST9-16 corner detection is performed on these 8 images: if, among the 16 pixels on the circle around a pixel, 9 or more consecutive pixels have gray values all greater than the gray value of that pixel plus a threshold, or all less than the gray value of that pixel minus the threshold, the pixel is considered a feature point. A FAST5-8 corner detection is additionally performed on the original image img as the virtual layer d(-1); a total of 9 images with corner information is thus obtained.
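The FAST9-16 segment test described in this step can be sketched directly in NumPy. This is an illustrative, unoptimized version; `CIRCLE` lists the 16 radius-3 Bresenham circle offsets.

```python
import numpy as np

# 16 Bresenham-circle offsets (dx, dy) of radius 3 used by FAST9-16
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, r, c, thresh, m=9):
    """Segment test: m consecutive circle pixels all brighter than I+thresh
    or all darker than I-thresh."""
    center = float(img[r, c])
    ring = [float(img[r + dy, c + dx]) for dx, dy in CIRCLE]
    for sign in (1, -1):                    # brighter arc, then darker arc
        flags = [sign * (v - center) > thresh for v in ring]
        run = best = 0
        for f in flags * 2:                 # doubled list handles wrap-around runs
            run = run + 1 if f else 0
            best = max(best, run)
        if min(best, len(flags)) >= m:
            return True
    return False
```

On a synthetic bright quadrant, the corner pixel passes the test (a contiguous darker arc longer than 9 exists on its circle) while flat bright or dark pixels do not.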
Step 3: non-maximum suppression.
Non-maximum suppression must be applied to the feature points detected at each layer in step 2, for the feature points at each scale: the FAST score of the feature point must be greater than the FAST scores of its 8 neighborhood pixels in its own layer, and greater than the FAST scores of the pixels of the corresponding regions one layer above and one layer below that scale image; otherwise it cannot be regarded as a feature point.
Step 4: feature point position fitting.
The scales of the feature points obtained in the previous step are not continuous, and the positions of the feature points, computed from pixel row and column values, are not continuous either, so fitting is required. First, in the FAST-score neighborhoods of the scale image where the feature point lies, the previous scale image, and the next scale image, a least-squares two-dimensional quadratic interpolation (in the x and y directions) of the FAST scores is performed to obtain the true score extremum and its accurate coordinate position (taken as the feature point position); a one-dimensional interpolation over scale is then performed to obtain the scale corresponding to the extremum (taken as the feature point scale).
The fifth step: computing the feature descriptor of the feature point.
Centered on the feature point, N points (including the feature point itself) are sampled around it. The sampling pattern consists of concentric circles of different radii centered on the feature point, with a fixed number of equally spaced sample points on each circle, giving N sample points including the feature point. Because this sampling pattern causes aliasing, each sample point is smoothed by Gaussian filtering with standard deviation δ, the filter radius being proportional to the standard deviation of the Gaussian kernel; the N sample points finally used are those after Gaussian smoothing. In the present invention N ranges from 50 to 70.
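The concentric-circle sampling pattern can be sketched as follows; the radii and per-circle point counts are illustrative assumptions chosen so that N falls in the stated 50~70 range, not the exact values of the original BRISK pattern:

```python
import numpy as np

def brisk_pattern(radii=(0.0, 2.9, 4.9, 7.4, 10.8), counts=(1, 10, 14, 15, 20)):
    """Generate a BRISK-style sampling pattern: equally spaced points on
    concentric circles around the feature point (which is the single point
    on the radius-0 'circle').  Radii/counts here are illustrative."""
    pts = []
    for r, n in zip(radii, counts):
        for k in range(n):
            a = 2.0 * np.pi * k / n
            pts.append((r * np.cos(a), r * np.sin(a)))
    return np.array(pts)

pattern = brisk_pattern()  # N = 1+10+14+15+20 = 60 points
```

Each point would then be smoothed with a Gaussian whose standard deviation grows with its circle radius.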
Since there are N sample points, pairing them two at a time gives N(N-1)/2 combinations. The set of all such pairs is called the set of sample-point pairs, expressed as follows:
A = {(Pi, Pj) | i < N, j < i; i, j ∈ N}
where A denotes the set of all sample-point pairs, and Pi and Pj denote the i-th and j-th sample points.
Let the gray values of two sample points after Gaussian filtering be I(Pi, σi) and I(Pj, σj), where σ denotes the smoothing radius; the local gradient g(Pi, Pj) between the two points is:
g(Pi, Pj) = (Pj - Pi) * (I(Pj, σj) - I(Pi, σi)) / ||Pj - Pi||²
The set above is divided into two subsets according to the distance between the two points: the short-distance pair subset S and the long-distance pair subset L:
S = {(Pi, Pj) ∈ A : ||Pj - Pi|| < θmax}, L = {(Pi, Pj) ∈ A : ||Pj - Pi|| > θmin}
where the thresholds are θmax = 9.75t and θmin = 13.67t, and t denotes the scale of the feature point.
The direction of the feature point is computed as follows:
g = (gx, gy) = (1/l) * Σ over (Pi, Pj) ∈ L of g(Pi, Pj)
where l is the number of elements of the subset L, gx denotes the gradient of the feature point in the x direction, gy the gradient of the feature point in the y direction, and g the gradient of the feature point in the x and y directions.
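A minimal sketch of this orientation estimate, assuming the feature-point scale t = 1 (so the long-distance threshold is 13.67) and taking the smoothed sample intensities as given:

```python
import numpy as np

def brisk_direction(points, intensities, d_min=13.67):
    """Average the local gradients over the long-distance subset L:
    g = (1/l) * sum of (Pj-Pi) * (I(Pj)-I(Pi)) / ||Pj-Pi||^2.
    `points` is an (N,2) array, `intensities` the smoothed gray value
    of each sample point."""
    g = np.zeros(2)
    count = 0
    n = len(points)
    for i in range(n):
        for j in range(i):
            d = points[i] - points[j]
            dist = np.hypot(d[0], d[1])
            if dist > d_min:  # long-distance subset L
                g += d * (intensities[i] - intensities[j]) / dist**2
                count += 1
    return (g / count if count else g), count
```

The orientation angle is then α = arctan2(gy, gx) of the returned vector.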
The BRISK feature descriptor has good rotational invariance. When computing it, the sampling region is first rotated by the angle α = arctan2(gy, gx), and the rotated region is then resampled, yielding N(N-1)/2 sampling pairs; the 512 short-distance pairs in the subset S are selected and compared as follows to produce the descriptor:
b = 1 if I(Pjα, σj) > I(Piα, σi), and b = 0 otherwise
where Pα denotes a sample point obtained after rotation and b is one bit of the feature descriptor, which can only be 0 or 1.
The ORB feature descriptor is a binary feature descriptor with rotational invariance, scale invariance and robustness to noise; it registers images well, and its feature-point matching is at least 100 times faster than that of the SIFT descriptor. The ORB feature descriptor is computed as follows:
The first step: feature-point extraction based on FAST.
For a given original image img, the detailed process for deciding whether a point P on the image is a feature point is:
1) Let the gray value of the point be I(x, y). Draw a circle of radius 3 pixels centered on P; there are 16 pixels on the boundary of this circle. Set a suitable threshold t: two points are considered different when the absolute difference of their gray values exceeds t. If m consecutive points on the circle, with m ∈ 9~12, all differ from P in this sense, then P is considered a feature point.
2) Using the Harris corner measure, the N corners with the largest Harris responses are picked out from the FAST feature points, where the Harris corner response function is defined as:
R = detM - α(traceM)²
where Ix denotes the partial derivative of the image I in the x direction, Iy the partial derivative of the image I in the y direction, detM the determinant of the matrix M, traceM the trace of the matrix M, and α a constant with value range 0.04~0.06; R denotes the Harris corner response function.
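The response R = detM - α(traceM)² can be sketched as follows; the finite-difference gradients and single-patch summation are simplifying assumptions of the sketch:

```python
import numpy as np

def harris_response(patch, alpha=0.05):
    """Harris corner response over one gray-scale patch:
    M = [[sum(Ix^2), sum(IxIy)], [sum(IxIy), sum(Iy^2)]],
    R = det(M) - alpha * trace(M)^2."""
    Iy, Ix = np.gradient(patch.astype(float))  # axis 0 = y, axis 1 = x
    M = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    return np.linalg.det(M) - alpha * np.trace(M) ** 2
```

A flat patch scores 0, a straight edge scores negative, and an L-shaped corner scores clearly higher than the edge, which is why ranking FAST points by R keeps the strongest corners.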
3) The scale image pyramid is built from n octave layers and n internal (intra-octave) layers, with one internal layer between every two adjacent octave layers. The i-th octave layer is denoted ci and the i-th internal layer di, where i = {0, 1, ..., n-1}. Given the image img, the octave layers are produced as follows: layer c0 is the original image img, c1 is a 2× down-sampling of c0, c2 a 2× down-sampling of c1, and so on. The intra-octave layers are produced as follows: d0 is a 1.5× down-sampling of img, d1 a 2× down-sampling of d0, d2 a 2× down-sampling of d1, and so on. The scale relation t between the ci, di layers and the original image is expressed as:
t(ci) = 2^i,  t(di) = 2^i × 1.5
4) The directions of all the feature points found in 3) are determined by the gray-scale centroid method: moments are used to compute the centroid of the feature point's neighborhood within radius r, and the vector from the feature-point coordinates to the centroid gives the direction of the feature point. For any feature point P, the moments of P's neighborhood pixels are defined as:
m_pq = Σ over x, y of x^p y^q I(x, y)
where I(x, y) denotes the gray value at point (x, y).
The centroid C of the moments is:
C = (m10/m00, m01/m00)
Let the coordinate of the corner P be the origin O; the angle of the vector OC is then the direction of the feature point, computed as:
θ = arctan(m01, m10)
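The gray-scale centroid orientation can be sketched as follows, taking the patch center as the coordinate origin so that θ = arctan2(m01, m10) directly gives the direction:

```python
import numpy as np

def centroid_orientation(patch):
    """Orientation by the intensity-centroid method: first-order moments
    m10 = sum(x * I), m01 = sum(y * I) with the patch center as origin,
    then theta = atan2(m01, m10)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs = xs - (w - 1) / 2.0  # shift origin to the patch center
    ys = ys - (h - 1) / 2.0
    m10 = np.sum(xs * patch)
    m01 = np.sum(ys * patch)
    return np.arctan2(m01, m10)
```

A patch that is brighter on its right side yields θ ≈ 0; one brighter at the bottom yields θ ≈ π/2.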
The second step: feature-point description based on BRIEF.
The feature points computed in the first step above carry their orientation angles. For a feature point P obtained in the first step, its BRIEF (Binary Robust Independent Elementary Features) descriptor is a binary sequence of length n; this binary string is generated from n point pairs (2n points) around the feature point. These 2n points (xi, yi), i = 1, 2, ..., 2n form a matrix S:
S = [x1 x2 ... x2n; y1 y2 ... y2n]
Rotating clockwise by the rotation angle θ gives the new point pairs:
Sθ = RθS
where Rθ is the 2×2 rotation matrix of the angle θ.
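The steering step Sθ = RθS can be sketched as follows (S holds the test-point coordinates as columns):

```python
import numpy as np

def rotate_pairs(S, theta):
    """Steered BRIEF: rotate the 2 x 2n matrix of test-point coordinates
    by the feature-point orientation theta, S_theta = R_theta @ S."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    return R @ S
```

For example, rotating the points (1, 0) and (0, 1) by 90° maps them to (0, 1) and (-1, 0); the binary tests are then evaluated at the rotated locations, which is what makes the descriptor rotation-invariant.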
S3: image feature-point matching based on the optical-flow pyramid.
Using the ORB and BRISK feature descriptors of the images to be stitched extracted in step S2, the optical flow is computed with the optical-flow pyramid algorithm to complete the matching of feature points between the images to be stitched. The computation proceeds as follows:
First: the reference gray image Ig and the registration gray image Ik are each down-sampled to obtain the image pyramids of Ig and Ik, i.e., each image is convolved with the low-pass filter [0.25 0.5 0.25]. With w = [0.25 0.5 0.25], the down-sampling formula is:
I_{L+1}(x, y) = Σ over m, n ∈ {-1, 0, 1} of w(m) w(n) I_L(2x + m, 2y + n)
where I_L^g(x, y) denotes the pixel value of the reference gray image Ig at pixel coordinate (x, y) in the L-th layer image. The second layer images of the two pyramids are obtained from the down-sampling formula, and down-sampling the second layer images yields the third layer images. The pyramid has 3 to 5 layers.
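One pyramid level built with the [0.25 0.5 0.25] low-pass filter can be sketched as follows (edge handling by zero padding is an assumption of the sketch):

```python
import numpy as np

def pyr_down(img):
    """One pyramid level: separable low-pass filtering with [0.25 0.5 0.25]
    along rows and then columns, followed by 2x subsampling."""
    k = np.array([0.25, 0.5, 0.25])
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1,
                              img.astype(float))
    sm = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)
    return sm[::2, ::2]
```

Applying it twice to an 8×8 image gives the 4×4 second layer and the 2×2 third layer; constant regions are preserved away from the borders.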
Second: the optical flow of the feature point p[x, y] in the reference image relative to the registration image is computed by solving, over a w*w window, the Lucas-Kanade equations:
dn = [un, vn]^T = G⁻¹ b, with G = [ΣIx² ΣIxIy; ΣIxIy ΣIy²] and b = -[ΣIxIt, ΣIyIt]^T
where Ix denotes the row-direction derivative of each pixel in the w*w window, Iy the column-direction derivative of each pixel in the w*w window, It the time derivative between the two images, and dn = [un, vn] the optical flow of the feature point p[x, y] relative to the registration image, with un the row-direction flow and vn the column-direction flow of the n-th pyramid layer; n takes the value 4.
Third: the n-th layer flow dn is passed to the (n-1)-th pyramid layer as the initial flow: the flow dn is superimposed on the pixels q[x, y] of the registration image so that the coordinates of each pixel become q[x + 2*un, y + 2*vn]; then, between the reference image and the flow-compensated image, the previous step is repeated to compute the flow d(n-1) = [u(n-1), v(n-1)] of the (n-1)-th pyramid layer. This process is repeated until the flow of the first pyramid layer, i.e., of the original image, d1 = [u1, v1], is obtained.
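The per-window flow solve at one pyramid layer can be sketched as follows, given the derivative images Ix, Iy, It over a w*w window (how those derivatives are computed is left out of the sketch):

```python
import numpy as np

def lk_flow(Ix, Iy, It):
    """Solve the Lucas-Kanade normal equations over one window:
    [sum(Ix^2)  sum(IxIy); sum(IxIy)  sum(Iy^2)] * [u, v]^T
        = -[sum(IxIt), sum(IyIt)]^T."""
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(G, b)  # [u, v] flow for the window
```

For a pure translation (u, v), the brightness-constancy relation It = -(Ix*u + Iy*v) holds, and the solver recovers (u, v) exactly.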
Fourth step: from the first-layer flow d1 = [u1, v1] and the coordinates of feature point p[x, y] in the reference gray image Ig, the coordinates (x′, y′) of the matching feature point in the registration gray image Ik are computed. If the resulting x′, y′ fall outside the pixel range of the image, the feature-point match fails: the feature point q[x′, y′] is discarded and the corresponding feature point p[x, y] is removed from the feature-point set D. If x′, y′ fall within the image pixel range, the matching feature point q[x′, y′] is stored in the feature-point set D′. The computation is:
x′ = x - u1,  y′ = y - v1
S4: estimating the projective transformation matrix based on the MDLT and RANSAC algorithms.
Computing the projective transformation matrix with the MDLT (Moving Direct Linear Transformation) algorithm effectively avoids ghosting where the two images overlap and reduces alignment error; the result looks geometrically closer to reality, with less distortion.
Suppose the two images to be registered contain a pair of matching points: the point x = [x y]^T in the reference image and the point x′ = [x′ y′]^T in the target image. The mapping between x and x′ is represented by the homography matrix H:
x̃′ ~ H x̃
where x̃ denotes the homogeneous coordinates of x, ~ denotes equality up to scale, and H is a 3*3 matrix. In inhomogeneous coordinates there is the correspondence:
x′ = (r1 x̃)/(r3 x̃),  y′ = (r2 x̃)/(r3 x̃)
where rj denotes the j-th row of the matrix H; the mapping here is nonlinear. The DLT algorithm is a basic algorithm for computing the homography matrix H from a series of correspondences. Suppose there are N initial matching point pairs {xi, x′i} between the reference image and the registration image; each pair then gives a linear constraint
ai h = 0
where h denotes the vectorized form of H and hi the i-th row of the homography matrix.
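The basic DLT computation can be sketched as follows: two linear constraint rows per correspondence are stacked, and h is taken as the right singular vector of the smallest singular value (the point normalization that practical DLT implementations add is omitted):

```python
import numpy as np

def dlt_homography(pts, pts_prime):
    """Direct Linear Transformation: stack two constraint rows per
    correspondence (x, y) -> (xp, yp) and take h as the right singular
    vector belonging to the smallest singular value."""
    rows = []
    for (x, y), (xp, yp) in zip(pts, pts_prime):
        rows.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
    A = np.array(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the overall scale
```

With exact correspondences generated by a known homography, the estimate reproduces that homography up to scale.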
Let ai denote the two constraint rows for the i-th matching pair {xi, x′i}. Given the estimate of h, the objective function for the matrix H is:
h = argmin over h of Σ from i = 1 to N of ||ai h||², subject to ||h|| = 1
where ||ai h|| represents the geometric error of the i-th feature-point pair. The DLT algorithm computes a single homography for the whole image, so it is only applicable when the rotation angle is small. The MDLT algorithm improves on DLT: for each pixel x* in the image, a location-dependent homography H* is estimated with weights and used for the projection mapping:
h* = argmin over h of Σ from i = 1 to N of ||w*i ai h||², subject to ||h|| = 1
A larger weight w*i indicates that the match xi is closer to x*; w*i is computed as:
w*i = exp(-||x* - xi||² / σ²)
where σ is a scale factor. When the i-th matching point xi is close to the pixel x*, its weight is large, so H* better reflects the local projection; furthermore, since x* varies continuously over the image, H* also varies continuously. Because the weight decays with the geometric distance between the pixel x* and the i-th matching point xi, it can get arbitrarily close to 0; to avoid the resulting errors, the weights are offset with γ ∈ [0, 1]:
w*i = max(exp(-||x* - xi||² / σ²), γ)
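The offset Gaussian weighting can be sketched as follows; the σ and γ values are illustrative assumptions:

```python
import numpy as np

def mdlt_weights(x_star, match_pts, sigma=8.5, gamma=0.1):
    """Location-dependent MDLT weights: w_i = exp(-||x*-x_i||^2 / sigma^2),
    clipped from below at gamma so that distant matches never vanish
    entirely (sigma and gamma here are illustrative)."""
    d2 = np.sum((np.asarray(match_pts, float) - np.asarray(x_star, float)) ** 2,
                axis=1)
    return np.maximum(np.exp(-d2 / sigma ** 2), gamma)
```

A match at x* itself gets weight 1, while a far-away match bottoms out at γ instead of 0.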
Before computing the H matrix with MDLT, the DLT-based RANSAC algorithm is first used to remove outlier matches. Assume an initialized homography matrix H(I,J)′; for each feature point p[xi, yi] in the feature set D of the image I, compute the intermediate quantity q′[x′i, y′i]:
q′[x′i, y′i] = H(I,J)′⁻¹ * p[xi, yi]
If the computed intermediate vector q′[x′i, y′i] satisfies the formula below, the feature-point pair is kept in the feature set S; otherwise the pair is put into the complement set C. Finally the number NUM of matching pairs kept in the set S is counted:
|q[x′i, y′i] - q′[x′i, y′i]| ≤ 0.1
If NUM ≥ M, the RANSAC algorithm terminates and the current initial homography matrix H(I,J)′ is taken as the optimal homography matrix H, where M is a threshold on the number of feature-point samples in the set S.
If NUM < M, it is checked whether the RANSAC loop count is still within a preset threshold. If it is, the procedure above is repeated to obtain a new initial homography matrix H(I,J)′; if the loop count exceeds the set threshold, the RANSAC algorithm terminates and the current initial homography matrix H(I,J)′ is taken as the optimal homography matrix H.
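The hypothesize-verify-count loop of RANSAC can be sketched as follows; for brevity the model is a pure 2-D translation estimated from a single pair rather than the homography H(I,J)′ of the text, but the structure (random minimal sample, residual threshold, keep the best-supported hypothesis) is the same:

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=0.1, seed=0):
    """Minimal RANSAC loop: hypothesize a model from a random minimal
    sample, count matches whose residual is within `tol`, keep the best
    hypothesis.  Model here: a 2-D translation (minimal sample = 1 pair)."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_t, best_inliers = None, -1
    for _ in range(n_iter):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                        # model from minimal sample
        resid = np.linalg.norm(dst - (src + t), axis=1)
        inliers = int(np.sum(resid <= tol))        # verify against all pairs
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers
```

With mostly consistent matches and a few outliers, the loop recovers the dominant motion and the outliers are simply not counted as inliers.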
S5: computing the projective transformation matrix per block.
Computing the projective transformation matrix H* of every single pixel x* with step S4 is extremely expensive, so in the present invention a homography matrix is computed per block, as follows:
The first step: the original image is divided uniformly into C1 × C2 cells, and the center pixel of each cell is taken as x*; all pixels in the same cell then share the same homography matrix.
The second step: the matrix of each cell is computed with MDLT. Let the weight matrix be Wγ = γI, and let V denote the right singular vectors of the weighted matrix WγA. The decomposition is computed incrementally: the weight matrix of a cell differs from Wγ only at the diagonal entries Wi that are not equal to γ, so only the corresponding rows ri of the matrix A need to be updated in the characteristic equation, whose diagonalization yields the estimate of h. Since this estimate of h is obtained from the right-singular-vector side of the singular value decomposition, the homography transformation is applied to each cell with it, yielding the registered image.
S6: seamless stitching of the images.
Step S5 yields the two registered images to be stitched. Let m1(x, y) and m2(x, y) be respectively the pixel gray values of the preceding reference frame and of the following frame to be stitched, and let m(x, y) be the pixel gray value after fusion. The fusion formula is:
m(x, y) = k1 * m1(x, y) + k2 * m2(x, y)
where k1 and k2 are weights satisfying k1 + k2 = 1. To make the overlap region of the images transition more smoothly, the present invention improves the weights k1 and k2: pixel values are assigned by proportional normalization according to the distance from a pixel in the overlap region to the reference image and its distance to the image to be stitched, so that the pixel values of the overlap region transition smoothly and the seamless stitching of the images is completed.
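A distance-proportional weighting with k1 + k2 = 1 across the overlap can be sketched as a linear ramp, the simplest instance of the proportional normalization described above:

```python
import numpy as np

def blend_overlap(ref, warp):
    """Linear ramp blending across an overlap of width W: at column j the
    weights are k1 = 1 - j/(W-1) and k2 = j/(W-1), so k1 + k2 = 1 and the
    reference image fades out as the warped image fades in."""
    W = ref.shape[1]
    j = np.arange(W, dtype=float)
    k2 = j / (W - 1)
    k1 = 1.0 - k2
    return k1[None, :] * ref + k2[None, :] * warp
```

At the reference-side edge the output equals the reference image, at the far edge it equals the warped image, and the midpoint is their average, which removes visible seams.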
The present invention improves on the as-projective-as-possible (APAP) image stitching method: a preprocessing step is added before feature extraction, the ORB and/or BRISK binary feature descriptors are creatively employed in the feature-extraction step, and the optical-flow pyramid is creatively employed in the matching step, making the image stitching faster and the registration accurate.
The above discloses only a preferred embodiment of the present invention, which of course cannot limit the scope of the claims of the present invention; equivalent variations made according to the claims of the present invention therefore still fall within the scope covered by the present invention.
Claims (8)
1. An image stitching method based on an improved as-projective-as-possible transformation, characterized in that it comprises S1: image preprocessing; S2: image feature-point extraction; S3: image feature-point matching based on the optical-flow pyramid; S4: estimating the projective transformation matrix based on the MDLT and RANSAC algorithms; S5: computing the projective transformation matrix per block; S6: seamless image stitching;
the S1 image preprocessing comprises noise suppression, texture and contrast enhancement and histogram normalization, and further comprises calibrating the camera lens, establishing a corresponding mathematical model according to the cause of the image distortion, and correcting contaminated or distorted images with the calibration parameters;
the S2 image feature-point extraction uses the BRISK and/or ORB feature descriptors.
2. The image stitching method based on an improved as-projective-as-possible transformation according to claim 1, characterized in that the extraction process of the BRISK feature descriptor is as follows:
The first step: building the scale space;
For BRISK feature-point detection, the scale image pyramid consists of n octave layers and n internal (intra-octave) layers, with one internal layer between every two adjacent octave layers; the i-th octave layer is denoted ci and the i-th internal layer di, where i = {0, 1, ..., n-1}; the scale relation t between the ci, di layers and the original image is: t(ci) = 2^i, t(di) = 2^i × 1.5;
The second step: feature-point detection;
With n = 4 in the first step, 8 images are obtained, on which FAST9-16 corner detection is performed; FAST5-8 corner detection is applied to the original image img as a virtual layer d(-1), so that 9 images carrying corner information are obtained;
The third step: non-maximum suppression;
The fourth step: fitting the feature-point positions;
First, in the score neighborhoods of the scale image containing the feature point and of the adjacent scale images above and below it, a least-squares two-dimensional quadratic interpolation is applied to the FAST scores, yielding the score extremum and its accurate coordinates; a one-dimensional interpolation over scale then yields the scale of the extremum;
The fifth step: computing the feature descriptor of the feature point;
Centered on the feature point, N points are sampled around it; the sampling pattern consists of concentric circles of different radii centered on the feature point, with a fixed number of equally spaced sample points on each circle, giving N sample points including the feature point; each sample point is smoothed by Gaussian filtering with standard deviation δ, the filter radius being proportional to the standard deviation of the Gaussian kernel; the N sample points finally used are those after Gaussian smoothing, with N in the range 50~70;
Since there are N sample points, pairing them two at a time gives N(N-1)/2 combinations; the set of all such pairs is called the set of sample-point pairs, expressed as follows:
A = {(Pi, Pj) | i < N, j < i; i, j ∈ N}
where A denotes the set of all sample-point pairs, and Pi and Pj denote the i-th and j-th sample points;
Let the gray values of two sample points after Gaussian filtering be I(Pi, σi) and I(Pj, σj), where σ denotes the smoothing radius; the local gradient g(Pi, Pj) between the two points is:
g(Pi, Pj) = (Pj - Pi) * (I(Pj, σj) - I(Pi, σi)) / ||Pj - Pi||²
The set above is divided according to the distance between the two points into the short-distance pair subset S and the long-distance pair subset L, where the thresholds are θmax = 9.75t and θmin = 13.67t, and t denotes the scale of the feature point;
The direction of the feature point is computed as follows:
g = (gx, gy) = (1/l) * Σ over (Pi, Pj) ∈ L of g(Pi, Pj)
where l is the number of elements of the subset L, gx denotes the gradient of the feature point in the x direction, gy the gradient of the feature point in the y direction, and g the gradient of the feature point in the x and y directions;
When computing the BRISK feature descriptor, the sampling region is first rotated by the angle α = arctan2(gy, gx) and the rotated region is then resampled, yielding N(N-1)/2 sampling pairs; the 512 short-distance pairs in the subset S are selected and compared as follows to produce the descriptor:
b = 1 if I(Pjα, σj) > I(Piα, σi), and b = 0 otherwise
where Pα denotes a sample point obtained after rotation and b is one bit of the feature descriptor, which can only be 0 or 1.
3. The image stitching method based on an improved as-projective-as-possible transformation according to claim 1, characterized in that the extraction process of the ORB feature descriptor is as follows:
The first step: feature-point extraction based on FAST;
For a given original image img, the detailed process for deciding whether a point P on the image is a feature point is:
1) Let the gray value of the point be I(x, y); draw a circle of radius 3 pixels centered on P, with 16 pixels on its boundary; set a suitable threshold t: two points are considered different when the absolute difference of their gray values exceeds t; if m consecutive points on the circle, with m ∈ 9~12, all differ from P, then P is considered a feature point;
2) Using the Harris corner measure, the N corners with the largest Harris responses are picked out from the FAST feature points, where the Harris corner response function is defined as:
R = detM - α(traceM)²
where Ix denotes the partial derivative of the image I in the x direction, Iy the partial derivative of the image I in the y direction, detM the determinant of the matrix M, traceM the trace of the matrix M, and α a constant with value range 0.04~0.06; R denotes the Harris corner response function;
3) The scale image pyramid consists of n octave layers and n internal (intra-octave) layers, with one internal layer between every two adjacent octave layers; the i-th octave layer is denoted ci and the i-th internal layer di, where i = {0, 1, ..., n-1}; given the image img, the octave layers are produced as follows: layer c0 is the original image img, c1 is a 2× down-sampling of c0, c2 a 2× down-sampling of c1, and so on; the intra-octave layers are produced as follows: d0 is a 1.5× down-sampling of img, d1 a 2× down-sampling of d0, d2 a 2× down-sampling of d1, and so on; the scale relation t between the ci, di layers and the original image is expressed as:
t(ci) = 2^i,  t(di) = 2^i × 1.5
4) The directions of all the feature points found in 3) are determined by the gray-scale centroid method: moments are used to compute the centroid of the feature point within radius r, and the vector from the feature-point coordinates to the centroid gives the direction of the feature point; for any feature point P, the moments of P's neighborhood pixels are defined as:
m_pq = Σ over x, y of x^p y^q I(x, y)
where I(x, y) denotes the gray value at point (x, y);
The centroid C of the moments is:
C = (m10/m00, m01/m00)
Let the coordinate of the corner P be the origin O; the angle of the vector OC is then the direction of the feature point, computed as:
θ = arctan(m01, m10);
The second step: feature-point description based on BRIEF;
The feature points computed in the first step above carry their orientation angles; for a feature point P obtained in the first step, its BRIEF (Binary Robust Independent Elementary Features) descriptor is a binary sequence of length n; this binary string is generated from n point pairs (2n points) around the feature point, and these 2n points (xi, yi), i = 1, 2, ..., 2n form a matrix S:
Rotating clockwise by the rotation angle θ gives the new point pairs:
Sθ = RθS
where Rθ is the 2×2 rotation matrix of the angle θ.
4. The image stitching method based on an improved as-projective-as-possible transformation according to claim 1, characterized in that the computation of the S3 image feature-point matching based on the optical-flow pyramid is as follows:
The first step: the reference gray image Ig and the registration gray image Ik are each down-sampled to obtain the image pyramids of Ig and Ik, i.e., each image is convolved with the low-pass filter [0.25 0.5 0.25], computed as follows:
where I_L^g(x, y) denotes the pixel value of the reference gray image Ig at pixel coordinate (x, y) in the L-th layer image; the second layer images of the two pyramids are obtained from the down-sampling formula, and down-sampling the second layer images yields the third layer images; the pyramid has 3 to 5 layers;
The second step: the optical flow of the feature point p[x, y] in the reference image relative to the registration image is computed as follows:
where Ix denotes the row-direction derivative of each pixel in the w*w window, Iy the column-direction derivative of each pixel in the w*w window, It the time derivative between the two images, and dn = [un, vn] the optical flow of the feature point p[x, y], with un the row-direction flow and vn the column-direction flow of the n-th pyramid layer; n takes the value 4;
The third step: the n-th layer flow dn is passed to the (n-1)-th pyramid layer as the initial flow: the flow dn is superimposed on the pixels q[x, y] so that the coordinates of each pixel become q[x + 2*un, y + 2*vn]; then, between the reference image and the flow-compensated image, the previous step is repeated to compute the flow d(n-1) = [u(n-1), v(n-1)] of the (n-1)-th pyramid layer; this process is repeated until the flow of the first pyramid layer, i.e., of the original image, d1 = [u1, v1], is obtained;
The fourth step: from the first-layer flow d1 = [u1, v1] and the coordinates of feature point p[x, y] in the reference gray image Ig, the coordinates (x′, y′) of the matching feature point in the registration gray image Ik are computed; if the resulting x′, y′ fall outside the pixel range of the image, the feature-point match fails: the feature point q[x′, y′] is discarded and the corresponding feature point p[x, y] is removed from the feature-point set D; if x′, y′ fall within the image pixel range, the matching feature point q[x′, y′] is stored in the feature-point set D′, computed as:
x′ = x - u1,  y′ = y - v1.
5. The image stitching method based on an improved as-projective-as-possible transformation according to claim 1, characterized in that the S4 estimation of the projective transformation matrix based on the MDLT and RANSAC algorithms proceeds as follows:
Suppose the two images to be registered contain a pair of matching points: the point x = [x y]^T in the reference image and the point x′ = [x′ y′]^T in the target image; the mapping between x and x′ is represented by the homography matrix H:
x̃′ ~ H x̃
where x̃ denotes the homogeneous coordinates of x, ~ denotes equality up to scale, and H is a 3*3 matrix;
In inhomogeneous coordinates there is the correspondence:
x′ = (r1 x̃)/(r3 x̃),  y′ = (r2 x̃)/(r3 x̃)
where rj denotes the j-th row of the H matrix; the mapping here is nonlinear;
Suppose there are N initial matching point pairs between the reference image and the registration image; each pair then gives the linear constraint ai h = 0, where h denotes the vectorized form of H and hi the i-th row of the homography matrix;
Let ai denote the two constraint rows for the i-th matching pair {xi, x′i}; given the estimate of h, the objective function for the matrix H is:
h = argmin over h of Σ from i = 1 to N of ||ai h||², subject to ||h|| = 1
where ||ai h|| represents the geometric error of the i-th feature-point pair; the DLT algorithm computes a single homography for the whole image, so it is only applicable when the rotation angle is small; the MDLT algorithm improves on DLT: for each pixel x* in the image, a location-dependent H* is estimated with weights and used for the projection mapping:
h* = argmin over h of Σ from i = 1 to N of ||w*i ai h||², subject to ||h|| = 1
A larger weight w*i indicates that xi is closer to x*; w*i is computed as:
w*i = exp(-||x* - xi||² / σ²)
where σ is a scale factor;
The weights are offset with γ ∈ [0, 1]:
w*i = max(exp(-||x* - xi||² / σ²), γ)
Before computing the H matrix with MDLT, the DLT-based RANSAC algorithm is first used to remove outlier matches: assume an initialized homography matrix H(I,J)′, and compute, for each feature point p[xi, yi] in the feature set D of the image I, the intermediate quantity q′[x′i, y′i]:
q′[x′i, y′i] = H(I,J)′⁻¹ * p[xi, yi]
If the computed intermediate vector q′[x′i, y′i] satisfies the formula below, the feature-point pair is kept in the feature set S; otherwise the pair is put into the complement set C; finally the number NUM of matching pairs kept in the set S is counted:
|q[x′i, y′i] - q′[x′i, y′i]| ≤ 0.1
If NUM ≥ M, the RANSAC algorithm terminates and the current initial homography matrix H(I,J)′ is taken as the optimal homography matrix H, where M is a threshold on the number of feature-point samples in the set S;
If NUM < M, it is checked whether the RANSAC loop count is still within a preset threshold; if so, the procedure above is repeated to obtain a new initial homography matrix H(I,J)′; if the loop count exceeds the set threshold, the RANSAC algorithm terminates and the current initial homography matrix H(I,J)′ is taken as the optimal homography matrix H.
6. The image stitching method based on an improved as-projective-as-possible transformation according to claim 5, characterized in that the specific method of the S5 per-block computation of the projective transformation matrix is as follows:
The first step: the original image is divided uniformly into C1 × C2 cells, and the center pixel of each cell is taken as x*; all pixels in the same cell then share the same homography matrix;
The second step: the matrix of each cell is computed with MDLT; let the weight matrix be Wγ = γI, and let V denote the right singular vectors of the weighted matrix WγA; the decomposition is computed incrementally: the weight matrix of a cell differs from Wγ only at the diagonal entries Wi that are not equal to γ, so only the corresponding rows ri of the matrix A need to be updated in the characteristic equation, whose diagonalization yields the estimate of h.
7. The image stitching method based on an improved as-projective-as-possible transformation according to claim 6, characterized in that the fusion formula of the S6 seamless image stitching is as follows:
m(x, y) = k1 * m1(x, y) + k2 * m2(x, y)
where m1(x, y) and m2(x, y) are respectively the pixel gray values of the preceding reference frame and of the following frame to be stitched, m(x, y) is the pixel gray value after fusion, and k1, k2 are weights satisfying k1 + k2 = 1.
8. The image stitching method based on an improved as-projective-as-possible transformation according to claim 7, characterized in that the weights k1 and k2 are as follows:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710236732.4A CN107016646A (en) | 2017-04-12 | 2017-04-12 | One kind approaches projective transformation image split-joint method based on improved |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107016646A true CN107016646A (en) | 2017-08-04 |
Family
ID=59446146
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710236732.4A Pending CN107016646A (en) | 2017-04-12 | 2017-04-12 | One kind approaches projective transformation image split-joint method based on improved |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107016646A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102034250A (en) * | 2010-11-26 | 2011-04-27 | 西安电子科技大学 | Edge structure information based block compression perception reconstruction method |
CN104503721A (en) * | 2014-12-22 | 2015-04-08 | 重庆文理学院 | Mixed band mathematic model based on fitting approximation algorithm |
CN104751465A (en) * | 2015-03-31 | 2015-07-01 | 中国科学技术大学 | ORB (oriented brief) image feature registration method based on LK (Lucas-Kanade) optical flow constraint |
Non-Patent Citations (4)
Title |
---|
ETHAN RUBLEE等: "ORB: an efficient alternative to SIFT or SURF", 《2011 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION》 * |
JULIO ZARAGOZA等: "As-Projective-As-Possible Image Stitching with Moving DLT", 《 2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 * |
DU JUN et al.: "Research on Image Registration Extraction and Description Based on BRISK Features", 《Journal of Shiyan Technical Institute》 * |
JIANG ZHIJUN et al.: "A Feature Tracking Method Based on Image Pyramid Optical Flow", 《Geomatics and Information Science of Wuhan University》 * |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108133226B (en) * | 2017-11-27 | 2021-07-13 | 西北工业大学 | Three-dimensional point cloud feature extraction method based on HARRIS improvement |
CN108133226A (en) * | 2017-11-27 | 2018-06-08 | 西北工业大学 | One kind is based on the improved three-dimensional point cloud feature extracting methods of HARRIS |
CN108364320A (en) * | 2018-03-29 | 2018-08-03 | 深圳市自行科技有限公司 | camera calibration method, terminal device and computer readable storage medium |
CN108364320B (en) * | 2018-03-29 | 2021-12-21 | 深圳市自行科技有限公司 | Camera calibration method, terminal device and computer readable storage medium |
CN108805812A (en) * | 2018-06-04 | 2018-11-13 | 东北林业大学 | Multiple dimensioned constant ORB algorithms for image mosaic |
CN108921787A (en) * | 2018-06-11 | 2018-11-30 | 东北电力大学 | Photovoltaic module image split-joint method based on infrared video |
CN109448033A (en) * | 2018-10-14 | 2019-03-08 | 哈尔滨理工大学 | A kind of method for registering images based on BRISK algorithm |
CN110147708A (en) * | 2018-10-30 | 2019-08-20 | 腾讯科技(深圳)有限公司 | A kind of image processing method and relevant apparatus |
CN110147708B (en) * | 2018-10-30 | 2023-03-31 | 腾讯科技(深圳)有限公司 | Image data processing method and related device |
CN109712071A (en) * | 2018-12-14 | 2019-05-03 | 电子科技大学 | Unmanned plane image mosaic and localization method based on track constraint |
CN109712071B (en) * | 2018-12-14 | 2022-11-29 | 电子科技大学 | Unmanned aerial vehicle image splicing and positioning method based on track constraint |
CN111353933A (en) * | 2018-12-20 | 2020-06-30 | 重庆金山医疗器械有限公司 | Image splicing and fusing method and system |
CN110084743A (en) * | 2019-01-25 | 2019-08-02 | 电子科技大学 | Image mosaic and localization method based on more air strips starting track constraint |
CN110084743B (en) * | 2019-01-25 | 2023-04-14 | 电子科技大学 | Image splicing and positioning method based on multi-flight-zone initial flight path constraint |
CN111238404B (en) * | 2020-02-26 | 2021-04-20 | 江苏集萃华科智能装备科技有限公司 | Correction method for binary code phase shift periodic code dislocation |
CN111238404A (en) * | 2020-02-26 | 2020-06-05 | 江苏集萃华科智能装备科技有限公司 | Correction method for binary code phase shift periodic code dislocation |
CN112215192A (en) * | 2020-10-22 | 2021-01-12 | 常州大学 | Test paper and method for quickly inputting test paper score based on machine vision technology |
CN112215192B (en) * | 2020-10-22 | 2024-01-23 | 常州大学 | Method for quickly inputting test paper score based on machine vision technology |
CN112614167A (en) * | 2020-12-17 | 2021-04-06 | 西南石油大学 | Rock slice image alignment method combining single-polarization and orthogonal-polarization images |
CN112649985A (en) * | 2020-12-25 | 2021-04-13 | 武汉华星光电技术有限公司 | Display panel and display device |
CN112649985B (en) * | 2020-12-25 | 2022-02-22 | 武汉华星光电技术有限公司 | Display panel and display device |
CN114519671A (en) * | 2022-02-16 | 2022-05-20 | 天津中科无人机应用研究院 | Unmanned aerial vehicle remote sensing image dynamic rapid splicing method |
CN117409071A (en) * | 2023-12-12 | 2024-01-16 | 知行汽车科技(苏州)股份有限公司 | Spatial scale recovery method, device, equipment and medium for monocular VSLAM |
CN117409071B (en) * | 2023-12-12 | 2024-03-01 | 知行汽车科技(苏州)股份有限公司 | Spatial scale recovery method, device, equipment and medium for monocular VSLAM |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107016646A (en) | One kind approaches projective transformation image split-joint method based on improved | |
CN104599258B (en) | A kind of image split-joint method based on anisotropic character descriptor | |
CN105245841B (en) | A kind of panoramic video monitoring system based on CUDA | |
CN104484648B (en) | Robot variable visual angle obstacle detection method based on outline identification | |
CN105069746B (en) | Video real-time face replacement method and its system based on local affine invariant and color transfer technology | |
CN107481315A (en) | A kind of monocular vision three-dimensional environment method for reconstructing based on Harris SIFT BRIEF algorithms | |
CN106960442A (en) | Based on the infrared night robot vision wide view-field three-D construction method of monocular | |
CN105809640B (en) | Low illumination level video image enhancement based on Multi-sensor Fusion | |
CN109035315A (en) | Merge the remote sensing image registration method and system of SIFT feature and CNN feature | |
CN109829853A (en) | A kind of unmanned plane image split-joint method | |
CN107833181A (en) | A kind of three-dimensional panoramic image generation method and system based on zoom stereoscopic vision | |
CN105279769B (en) | A kind of level particle filter tracking method for combining multiple features | |
CN112184604B (en) | Color image enhancement method based on image fusion | |
CN107784632A (en) | A kind of infrared panorama map generalization method based on infra-red thermal imaging system | |
CN111462198B (en) | Multi-mode image registration method with scale, rotation and radiation invariance | |
CN114973028B (en) | Aerial video image real-time change detection method and system | |
CN109087245A (en) | Unmanned aerial vehicle remote sensing image mosaic system based on neighbouring relations model | |
CN107154017A (en) | A kind of image split-joint method based on SIFT feature Point matching | |
CN111047513B (en) | Robust image alignment method and device for cylindrical panorama stitching | |
CN104700359A (en) | Super-resolution reconstruction method of image sequence in different polar axis directions of image plane | |
CN115035281B (en) | Rapid infrared panoramic image stitching method | |
CN111126508A (en) | Hopc-based improved heterogeneous image matching method | |
CN115456870A (en) | Multi-image splicing method based on external parameter estimation | |
CN106558065A (en) | The real-time vision tracking to target is realized based on color of image and texture analysiss | |
CN115588033A (en) | Synthetic aperture radar and optical image registration system and method based on structure extraction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170804 |