CN101763633B - Visible light image registration method based on salient region - Google Patents

Visible light image registration method based on salient region

Info

Publication number
CN101763633B
CN101763633B CN2009100889753A CN 101763633 B
Authority
CN
China
Prior art keywords
region
salient region
registration
lfd
salient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2009100889753A
Other languages
Chinese (zh)
Other versions
CN101763633A (en)
Inventor
田捷
郑健
杨鑫
邓可欣
徐敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN2009100889753A priority Critical patent/CN101763633B/en
Publication of CN101763633A publication Critical patent/CN101763633A/en
Application granted granted Critical
Publication of CN101763633B publication Critical patent/CN101763633B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the field of image registration, and in particular to a visible-light image registration method based on salient regions. The method comprises: (1) loading the images; (2) extracting the salient regions of each image; (3) computing a region feature descriptor for each extracted salient region and matching salient regions according to the similarity of their descriptors; (4) performing local rigid registration on the salient region pairs preliminarily matched in step (3); and (5) taking the centers of the salient regions after local rigid registration as control points and performing a global quadratic polynomial transformation registration. The invention provides an automatic image registration method that is fast, accurate, and robust, and has significant application value in image registration.

Description

Visible light image registration method based on salient region
Technical field
The present invention relates to image processing and pattern recognition technology, and in particular to an automatic image registration technique based on salient regions.
Background technology
The mainstream automatic image registration methods at present include registration based on feature points, registration based on image gray values, and registration based on mutual information. Each has shortcomings. For feature-point-based registration, feature points are difficult to extract accurately from visible-light images of relatively poor quality. Gray-value-based registration requires the gray values of the two images to be consistent, so its accuracy is low for images affected by environmental factors such as illumination. Mutual-information-based registration takes a long time and may fall into local extrema, failing to produce an accurate result. Consequently, manual registration is still widely used for low-quality images; its success rate and precision are fairly high, but it burdens the operator and is slow.
Summary of the invention
In view of the defects of the prior art, the object of the present invention is to provide a fast, accurate, and robust visible-light image registration method based on salient regions.
To achieve this object, the invention provides an automatic image registration method based on salient regions, comprising the following steps:
Step 1: load the two images to be registered on a computer, selecting one as the reference image and the other as the floating image;
Step 2: divide the reference image and the floating image into M × N rectangular regions and compute the local significance function Ls(R) of each region R; perform Gaussian fitting on Ls(R) to obtain the fitted local significance value Fls(R); select the centers of the local-extremum regions of Fls(R) as salient region centers; for each salient region center, compute the region radius from the distribution of Fls(R) in its neighborhood, thereby extracting the salient regions R of the reference and floating images;
Step 3: construct a 72-dimensional scale-invariant feature descriptor Lfd(R) for each extracted salient region R; define a distance metric function Dist(Lfd(R1), Lfd(R2)) to measure the similarity between two descriptors Lfd(R1) and Lfd(R2); for every salient region pair C(i, j) of the two images, compute the region matching similarity of C(i, j) and match the salient regions with a coarse-to-fine strategy;
Step 4: perform local rigid registration on the preliminarily matched salient region pairs Cmp(i, j), using a similarity measure based on the normalized correlation coefficient;
Step 5: apply cluster analysis to the regions after local rigid registration, extract the centers of the accurately matched regions as control points, and perform a global quadratic polynomial transformation registration to achieve accurate registration of the two images.
Here the local significance function Ls(R) is:

Ls(R) = Av(R) · Lge(R),

where Av(R) = σ/μ is the normalized variation function of region R, σ is the standard deviation of region R, and μ is its mean; Lge(R) is the gradient-field entropy of region R:

Lge(R) = −Σ_{i=1}^{36} p_i(R) log₂ p_i(R),

where p_i(R) is the fraction of the gradient magnitude in region R contributed by the pixels whose gradient direction lies in the i-th sector:

p_i(R) = ∫_{R_i} |g(X_i)| dX_i / ∫_R |g(X)| dX,

where R_i is the point set formed by all pixels whose gradient direction lies in the i-th sector, |g(X)| = √(G_x(X)² + G_y(X)²) is the gradient magnitude of pixel X, and X_i is a pixel in R_i.
The fitted local significance value Fls(R) is computed as:

Fls(R_ab) = Σ_{i=1}^{M} Σ_{j=1}^{N} [Ls(R_ij) / (2πσ²)] · exp(−((a−i)² + (b−j)²) / (2σ²)),

where the fitting uses a Gaussian kernel with σ = 1.5, M and N are the numbers of rectangular regions the image is divided into in the X and Y directions, (a, i) ∈ {1, 2, ..., M} are region X-coordinates in the M × N rectangular region array, (b, j) ∈ {1, 2, ..., N} are region Y-coordinates, and R_ab is the rectangular region at (a, b).
The region radius at each salient region center is computed as follows: taking the rectangular region R_ab containing the salient region center as the center, construct the square region set Ω of maximum radius satisfying:

Fls(R_ij) ≥ λ · Fls(R_ab)  for all R_ij ∈ Ω,

where Fls(R_ij) is the fitted local significance value of region R_ij and λ is a radius control parameter, empirically set to 0.75. The smaller of the length and width of Ω is taken as the salient region radius.
The 72-dimensional scale-invariant feature descriptor is constructed as:

Lfd(R) = (p_1(R), ..., p_36(R), da_1(R), ..., da_36(R)),

where p_i(R) is the fraction of the gradient magnitude in region R contributed by the pixels whose gradient direction lies in the i-th sector, and da_i(R) ∈ [0, 2π) is the direction angle from the salient region center to the geometric center of that pixel set. The distance metric function Dist(Lfd(R1), Lfd(R2)) measuring the similarity between two descriptors is defined as:

Dist(Lfd(R1), Lfd(R2)) = { Σ_{i=1}^{36} Eud(da_i(R1), da_i(R2))² · Max(p_i(R1), p_i(R2)) } · { Σ_{i=1}^{36} log[ Max(p_i(R1), p_i(R2)) / Min(p_i(R1), p_i(R2)) ] · Max(p_i(R1), p_i(R2)) },

where Eud(da_i(R1), da_i(R2)) is the angle between the corresponding i-th direction angles of R1 and R2.
The salient region matching comprises:
1) Traverse every possible salient region match C(i, j) of the two images, where i denotes the i-th salient region R_i of the reference image and j the j-th salient region R_j of the floating image. A pair C(i, j) satisfying

[Min(Av(R_i), Av(R_j)) / Max(Av(R_i), Av(R_j))] · [Min(Lge(R_i), Lge(R_j)) / Max(Lge(R_i), Lge(R_j))] > T

is taken as a coarsely matched salient region pair Cmp(i, j), where Min(·) takes the minimum, Max(·) takes the maximum, and T is the coarse matching control parameter, empirically set to 0.6.
2) For each coarsely matched pair Cmp(i, j), compute the similarity S(i, j) between R_i and R_j and the rotation angle θ_ij as:

θ_ij = 2kπ/36,
S(i, j) = Dist(Lfd(R_i), Lfd(R_j^k)),
k = argmin_k Dist(Lfd(R_i), Lfd(R_j^k)), k ∈ {0, 1, ..., 35},

where R_j^k is the new region obtained by rotating R_j counterclockwise by 10k degrees. Each coarsely matched pair Cmp(i, j) determines three global rigid transformation parameters: a two-dimensional translation (given by the centers of R_i and R_j) and the rotation parameter θ_ij.
3) Sort all Cmp(i, j) by S(i, j) in ascending order and take the first 2000 coarsely matched pairs as the input sample set (or all of them if fewer were extracted). With a suitable intra-class distance threshold, cluster the Cmp(i, j) in the global rigid transformation parameter space and choose the class with the most members as the preliminarily matched salient region pairs F(i, j). Repeated regions are rejected according to the values of S(i, j), ensuring that F(i, j) contains no duplicates and reducing subsequent computation.
The local rigid registration takes, for each preliminarily matched salient region pair F(i, j), the centers of the salient regions R_i and R_j and the rotation angle θ_ij as initial registration parameters, and performs local rigid registration.
Accurate registration of the two images is achieved as follows: for the regions after local rigid registration, set a finer intra-class distance threshold, perform cluster analysis in the global rigid transformation parameter space, and extract the accurately matched region pairs; then take the centers of the accurately matched regions as control points and perform a global quadratic polynomial transformation registration.
Beneficial effects of the invention: the invention extracts salient regions from the images, computes region feature descriptors, applies a coarse-to-fine salient region matching strategy, performs local rigid registration on the preliminarily matched pairs, and finally takes the centers of the successfully registered regions as control points for a global quadratic polynomial transformation registration, achieving accurate registration of the two images. Because the region significance function and the feature descriptor are well designed, and a coarse-to-fine matching and registration strategy is used, the computation of the whole algorithm is greatly reduced while accurate registration of low-quality images is still achieved; the algorithm is highly robust. Experiments show that the method completes registration of ordinary images in about 4 s; low-quality images take longer, but accurate registration is still completed in about 10 s, and registration accuracy on 1548 × 1260 images reaches 2 pixels. The method therefore has great application value.
Description of drawings
Fig. 1 is a flow chart of the method of the invention;
Fig. 2(a) is the floating image;
Fig. 2(b) is the reference image;
Fig. 2(c) shows the salient regions extracted from the floating image;
Fig. 2(d) shows the salient regions extracted from the reference image;
Fig. 2(e) shows the preliminarily matched salient regions;
Fig. 3 illustrates the division of gradient directions;
Fig. 4(a) shows the gradient distribution of a salient region R;
Fig. 4(b) shows the first 36 feature dimensions of the salient region descriptor;
Fig. 4(c) shows the last 36 feature dimensions of the salient region descriptor;
Fig. 5 shows the floating image after registration;
Fig. 6 shows the fused image.
Embodiment
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in more detail below in conjunction with specific embodiments and with reference to the accompanying drawings.
Fig. 1 shows the flow of the method of the invention. Two images to be registered are loaded on computer 101, one selected as the reference image and the other as the floating image. Computer 101 implements four sequential processing units for visible-light image registration: the salient region extraction unit 102, which extracts the salient regions of the visible-light images; the region feature descriptor computation and salient region matching unit 103, which computes the feature descriptors of the salient regions and matches them according to descriptor similarity; the local rigid registration unit 104, which performs local rigid registration on the preliminarily matched salient region pairs; and the global quadratic polynomial transformation registration unit 105, which takes the region centers after local rigid registration as control points and performs a global quadratic polynomial transformation to achieve accurate registration.
The salient region extraction unit 102 is implemented on computer 101, for example coded in C++, and extracts the salient regions of the two images according to the fitted local significance value Fls(R) defined by our method.
The region feature descriptor computation and salient region matching unit 103 is implemented on computer 101, for example coded in C++, and performs the following functions: for each extracted salient region it computes the 72-dimensional scale-invariant feature descriptor Lfd(R); it defines the distance metric Dist(Lfd(R1), Lfd(R2)) measuring the similarity between two descriptors; and for every salient region pair of the two images it computes the region matching similarity and matches the salient regions with a coarse-to-fine strategy.
The local rigid registration unit 104 is implemented on computer 101, for example coded in C++, and performs local rigid registration on the preliminarily matched salient region pairs using a similarity measure based on the normalized correlation coefficient.
The global quadratic polynomial transformation registration unit 105 is implemented on computer 101, for example coded in C++, and performs the following functions: for the regions after local rigid registration it sets a finer intra-class distance threshold, performs cluster analysis in the global rigid transformation parameter space, extracts the centers of the accurately matched regions as control points, and performs a global quadratic polynomial transformation registration to achieve accurate registration of the two images.
The registration method of the invention mainly comprises the following steps:
Step 1: load the two images to be registered. The computer reads each image, converts it into a two-dimensional array, and stores it for processing by the subsequent units; one image is selected as the reference image and the other as the floating image, as shown in Fig. 2(a) and Fig. 2(b).
Step 2: run the salient region extraction unit 102 to extract the salient regions of the two images.
Salient region extraction is completed in the following steps:
1. Divide the reference and floating images into M × N rectangular regions; the values of M and N depend on the image size (in our method, a 1548 × 1260 image is divided into 100 × 100 rectangular regions). Compute the local significance function Ls(R) of each rectangular region R:

Ls(R) = Av(R) · Lge(R),

where Av(R) = σ/μ is the normalized variation function of region R, σ being the standard deviation and μ the mean of region R, and Lge(R) is the gradient-field entropy of region R:

Lge(R) = −Σ_{i=1}^{36} p_i(R) log₂ p_i(R),

where p_i(R) is the fraction of the gradient magnitude in region R contributed by the pixels whose gradient direction lies in the i-th sector (the gradient direction of a pixel X is determined by its gradient vector g(X) = [G_x(X), G_y(X)]; as shown in Fig. 3, the G_xG_y plane is divided into 36 equal sectors):

p_i(R) = ∫_{R_i} |g(X_i)| dX_i / ∫_R |g(X)| dX,

where R_i is the point set of all pixels whose gradient direction lies in the i-th sector, |g(X)| = √(G_x(X)² + G_y(X)²) is the gradient magnitude of pixel X, and X_i is a pixel in R_i. If the normalized variation Av(R) of a region is small, the pixel values are nearly uniform, the region is strongly homogeneous, and its significance is low; if Av(R) is large, the pixel value distribution is complex, the region is strongly heterogeneous, and its significance is evident. Likewise, if a region is homogeneous its local gradient field is regular, the gradient-field entropy Lge(R) is small, and significance is low; if it is heterogeneous, the gradient field is complex, Lge(R) is large, and significance increases. Combining the two measures, Ls(R) characterizes the significance level of a region well.
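The local significance computation above can be sketched in Python with NumPy. This is a hypothetical re-implementation of the formulas for illustration, not the patent's own code; the function name and the use of `np.gradient` for the gradient field are my assumptions.

```python
import numpy as np

def local_significance(region):
    """Ls(R) = Av(R) * Lge(R) for one rectangular region (2-D gray array).

    Av(R) = sigma/mu is the normalized variation; Lge(R) is the entropy of
    the gradient-magnitude mass over 36 equal direction sectors.
    Sketch of the patent's formulas, not the authors' implementation.
    """
    r = region.astype(np.float64)
    mu, sigma = r.mean(), r.std()
    av = sigma / mu if mu > 0 else 0.0
    gy, gx = np.gradient(r)                      # finite-difference Gy, Gx
    mag = np.hypot(gx, gy)                       # |g(X)|
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # direction in [0, 2*pi)
    bins = np.minimum((ang / (2 * np.pi / 36)).astype(int), 35)
    p = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=36)
    total = p.sum()
    if total == 0:
        return 0.0                               # flat region: no gradients
    p = p / total                                # p_i(R)
    lge = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # gradient-field entropy
    return av * lge
```

A homogeneous region scores 0 (no variation and no gradient mass), while a textured region scores positive, matching the significance argument in the text.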
2. After computing Ls(R), fit it with a Gaussian kernel with σ = 1.5 to obtain the fitted local significance value Fls(R):

Fls(R_ab) = Σ_{i=1}^{M} Σ_{j=1}^{N} [Ls(R_ij) / (2πσ²)] · exp(−((a−i)² + (b−j)²) / (2σ²)),

where M and N are the numbers of rectangular regions the image is divided into in the X and Y directions, (a, i) ∈ {1, 2, ..., M} are region X-coordinates in the M × N rectangular region array, and (b, j) ∈ {1, 2, ..., N} are region Y-coordinates. Because of the Gaussian fitting, Fls(R) reflects the significance level of the neighborhood of region R and therefore measures the significance of R more reliably, so we select the center points of the local-extremum regions of Fls(R) as salient region centers.
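The Gaussian fitting and the local-extremum selection can be sketched as follows; this is an illustrative direct evaluation of the double sum (function names are assumptions), with local maxima taken over 8-neighborhoods of the region grid.

```python
import numpy as np

def fitted_significance(ls_grid, sigma=1.5):
    """Fls(R_ab): Gaussian fitting of the M x N grid of Ls values
    (sigma = 1.5), evaluated as the direct double sum in the text."""
    M, N = ls_grid.shape
    a = np.arange(M)[:, None, None, None]
    b = np.arange(N)[None, :, None, None]
    i = np.arange(M)[None, None, :, None]
    j = np.arange(N)[None, None, None, :]
    w = np.exp(-((a - i) ** 2 + (b - j) ** 2) / (2 * sigma ** 2))
    w /= 2 * np.pi * sigma ** 2
    return (w * ls_grid[None, None, :, :]).sum(axis=(2, 3))

def local_maxima(fls):
    """Grid cells that are >= all 8 neighbours: candidate salient centers."""
    padded = np.pad(fls, 1, constant_values=-np.inf)
    neigh = np.stack([padded[1 + di:1 + di + fls.shape[0],
                             1 + dj:1 + dj + fls.shape[1]]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)
                      if (di, dj) != (0, 0)])
    return np.argwhere(fls >= neigh.max(axis=0))
```

A single high-Ls cell produces a single fitted peak at the same grid position, illustrating why the extrema of Fls are taken as centers.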
3. After the salient region centers are extracted, the region radius of each center is computed from the distribution of Fls(R) in its neighborhood: taking the rectangular region R_ab containing the salient region center as the center, construct the square region set Ω of maximum radius satisfying:

Fls(R_ij) ≥ λ · Fls(R_ab)  for all R_ij ∈ Ω,

where Fls(R_ij) is the fitted local significance value of region R_ij and λ is the radius control parameter, empirically set to 0.75. The smaller of the length of Ω (the number of pixels in the X direction) and its width (the number of pixels in the Y direction) is taken as the salient region radius, completing the salient region extraction shown in Fig. 2(c) and Fig. 2(d).
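The radius rule above can be sketched by growing a square around the center cell until the λ-condition fails; a minimal sketch in grid-cell units, assuming the square stays inside the grid (the patent converts the final extent to pixels, which is omitted here).

```python
import numpy as np

def salient_radius(fls, a, b, lam=0.75):
    """Largest square neighbourhood Omega around grid cell (a, b) in which
    every cell satisfies Fls(R_ij) >= lam * Fls(R_ab); lam = 0.75 as in the
    text. Returns the radius in grid cells (illustrative sketch)."""
    M, N = fls.shape
    thresh = lam * fls[a, b]
    r = 0
    while True:
        i0, i1 = a - (r + 1), a + (r + 1)
        j0, j1 = b - (r + 1), b + (r + 1)
        if i0 < 0 or j0 < 0 or i1 >= M or j1 >= N:
            break                      # square would leave the grid
        if fls[i0:i1 + 1, j0:j1 + 1].min() < thresh:
            break                      # condition violated at radius r + 1
        r += 1
    return r
```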
Step 3: run the region feature descriptor computation and salient region matching unit 103 to match the salient regions of the two images.
Salient region matching is completed in the following steps:
1. For each salient region R, traverse every pixel of R and compute its gradient magnitude and direction. From the distribution of the gradient vectors of R and the pixel sets on each gradient direction, construct the 72-dimensional scale-invariant feature descriptor Lfd(R):

Lfd(R) = (p_1(R), ..., p_36(R), da_1(R), ..., da_36(R)),

where p_i(R) is the fraction of the gradient magnitude in region R contributed by the pixels whose gradient direction lies in the i-th sector, and da_i(R) ∈ [0, 2π) is the direction angle from the salient region center C to the geometric center of that pixel set. The first 36 dimensions of Lfd(R) describe the distribution of gradient magnitude over the 36 gradient directions (see Fig. 4(b); owing to the image size, not all 36 directions are drawn), and the last 36 dimensions describe the orientation of the pixel geometric centers of the 36 gradient directions relative to the region center C (see Fig. 4(c); again not all directions are drawn). Figs. 4(a) to 4(c) illustrate the construction of a descriptor: in Fig. 4(a), m marks the geometric center of the pixel set on the 3rd gradient direction (the direction is shown dashed), C is the region center, and da_3(R) is the direction angle from C to m.
2. Define the distance metric function Dist(Lfd(R1), Lfd(R2)) measuring the similarity between two descriptors:

Dist(Lfd(R1), Lfd(R2)) = { Σ_{i=1}^{36} Eud(da_i(R1), da_i(R2))² · Max(p_i(R1), p_i(R2)) } · { Σ_{i=1}^{36} log[ Max(p_i(R1), p_i(R2)) / Min(p_i(R1), p_i(R2)) ] · Max(p_i(R1), p_i(R2)) },

where Eud(da_i(R1), da_i(R2)) is the angle between the corresponding i-th direction angles of R1 and R2. The second factor of Dist resembles the K-L divergence, but unlike the K-L divergence our distance metric has the advantage of being symmetric.
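The descriptor and its distance metric can be sketched as follows. This is an illustrative re-implementation of the definitions; the geometric center is taken over pixel coordinates, and a small epsilon guards the log ratio when a sector is empty (both are my assumptions, as the patent does not specify these details).

```python
import numpy as np

def lfd(region):
    """72-dim descriptor Lfd(R) = (p_1..p_36, da_1..da_36): sector gradient
    mass fractions and angles from the region centre to each sector's
    pixel geometric centre. Sketch of the text's construction."""
    r = region.astype(np.float64)
    gy, gx = np.gradient(r)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    bins = np.minimum((ang / (2 * np.pi / 36)).astype(int), 35)
    h, w = r.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0        # region centre C
    ys, xs = np.mgrid[0:h, 0:w]
    p, da = np.zeros(36), np.zeros(36)
    total = mag.sum() or 1.0
    for i in range(36):
        m = bins == i
        p[i] = mag[m].sum() / total              # p_i(R)
        if m.any():
            my, mx = ys[m].mean(), xs[m].mean()  # geometric centre of sector
            da[i] = np.mod(np.arctan2(my - cy, mx - cx), 2 * np.pi)
    return np.concatenate([p, da])

def dist(f1, f2, eps=1e-12):
    """Dist(Lfd(R1), Lfd(R2)): angle-weighted sum times the symmetric
    log-ratio (K-L-like) sum, per the definition in the text."""
    p1, da1 = f1[:36], f1[36:]
    p2, da2 = f2[:36], f2[36:]
    d = np.abs(da1 - da2)
    eud = np.minimum(d, 2 * np.pi - d)           # angle between directions
    mx, mn = np.maximum(p1, p2), np.minimum(p1, p2)
    s1 = np.sum(eud ** 2 * mx)
    s2 = np.sum(np.log((mx + eps) / (mn + eps)) * mx)
    return s1 * s2
```

Note that `dist` is zero for identical descriptors and symmetric in its arguments, which is the stated advantage over the K-L divergence.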
3. Traverse every possible salient region match C(i, j) of the two images, where i denotes the i-th salient region R_i of the reference image and j the j-th salient region R_j of the floating image. A pair C(i, j) satisfying

[Min(Av(R_i), Av(R_j)) / Max(Av(R_i), Av(R_j))] · [Min(Lge(R_i), Lge(R_j)) / Max(Lge(R_i), Lge(R_j))] > T

is taken as a coarsely matched salient region pair Cmp(i, j), where Min(·) takes the minimum, Max(·) takes the maximum, and T is the coarse matching control parameter: the higher T is set, the stricter the coarse matching condition and the fewer pairs Cmp(i, j) survive; the empirical value is 0.6.
4. For each coarsely matched pair Cmp(i, j), compute the similarity S(i, j) between R_i and R_j and the rotation angle θ_ij as:

θ_ij = 2kπ/36,
S(i, j) = Dist(Lfd(R_i), Lfd(R_j^k)),
k = argmin_k Dist(Lfd(R_i), Lfd(R_j^k)), k ∈ {0, 1, ..., 35},

where R_j^k is the new region obtained by rotating R_j counterclockwise by 10k degrees.
Each coarsely matched pair Cmp(i, j) determines three global rigid transformation parameters: the translation t_x, t_y of the floating image center relative to the reference image center (t_x = O_j x − O_i x, t_y = O_j y − O_i y) and the rotation angle θ_ij about the floating image center, where O_i x, O_i y are the X and Y coordinates of the center O_i of salient region R_i and O_j x, O_j y are the coordinates of the center O_j of salient region R_j.
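Extracting the three rigid parameters from a coarse match is a one-liner each; a minimal sketch, with hypothetical helper names (centers given as (x, y) tuples):

```python
import math

def rigid_params(center_i, center_j, k):
    """Rigid transform parameters for a coarse match Cmp(i, j):
    translation (t_x, t_y) = O_j - O_i and rotation theta = 2*k*pi/36,
    i.e. 10*k degrees. Illustrative helper, names assumed."""
    t_x = center_j[0] - center_i[0]
    t_y = center_j[1] - center_i[1]
    theta = 2.0 * k * math.pi / 36.0
    return t_x, t_y, theta
```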
5. Sort all Cmp(i, j) by S(i, j) in ascending order and take the first 2000 coarsely matched pairs as the input sample set (or all of them if fewer were extracted). With a suitable intra-class distance threshold, cluster the Cmp(i, j) in the global rigid transformation parameter space and choose the class with the most members as the preliminarily matched salient region pairs F(i, j). The concrete clustering method is as follows: fix the center Cm of the floating image and map Cm onto the reference image through the three rigid transformation parameters determined by each coarse match Cmp(i, j), obtaining the point Cf_ij:

[Cf_ij x; Cf_ij y] = [cos θ_ij, −sin θ_ij; sin θ_ij, cos θ_ij] × [Cm x − O_j x; Cm y − O_j y] − [t_x; t_y] + [O_j x; O_j y],

where Cm x, Cm y and Cf_ij x, Cf_ij y are the X and Y coordinates of Cm and Cf_ij respectively.
We then cluster the mapped points Cf_ij in the two-dimensional Euclidean space with a suitable intra-class threshold (related to the image size; in our experiments the threshold t is 50), and choose the class with the most members as the preliminarily matched salient region pairs F(i, j). The clustering on the two-dimensional Euclidean space proceeds as follows:
1. Initialize the number of classes N to 0.
2. Traverse the mapped points Cf_ij in order. If N = 0, take the current Cf_ij as the center of the first class and set N = N + 1. Otherwise, compute the distances d_k from Cf_ij to the centers of classes k (k = 1, ..., N) and take the minimum d_kmin. If d_kmin < t, assign Cf_ij to class kmin and update the center and member count of class kmin; if d_kmin ≥ t, take Cf_ij as the center of class N + 1 and set N = N + 1.
3. Take the coarse matches Cmp(i, j) corresponding to the class with the most members as the preliminarily matched salient region pairs F(i, j). Then reject repeated regions according to S(i, j): for example, if F(3, 5) and F(3, 7) are both elements of F(i, j), compare S(3, 5) and S(3, 7) and discard the match with the larger value. This ensures that F(i, j) contains no repeated regions and reduces subsequent computation. The preliminarily matched salient region pairs are shown in Fig. 2(e).
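The three numbered clustering steps above can be sketched directly; a minimal sketch returning the member indices of the largest class (the running-mean center update is my assumption, since the text only says the center is updated):

```python
import math

def cluster_points(points, t):
    """Incremental clustering on the 2-D Euclidean plane: assign each point
    to the nearest existing class centre if within threshold t, else open a
    new class; return the member indices of the largest class."""
    centers, members = [], []
    for idx, (x, y) in enumerate(points):
        if not centers:
            centers.append((x, y)); members.append([idx]); continue
        dists = [math.hypot(x - cx, y - cy) for cx, cy in centers]
        k = min(range(len(dists)), key=dists.__getitem__)
        if dists[k] < t:
            members[k].append(idx)
            pts = [points[m] for m in members[k]]   # update class centre
            centers[k] = (sum(p[0] for p in pts) / len(pts),
                          sum(p[1] for p in pts) / len(pts))
        else:
            centers.append((x, y)); members.append([idx])
    return max(members, key=len)
```

For example, three nearby mapped points and two distant outliers cluster into a majority class of three, mirroring how consistent rigid parameters dominate the vote.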
Step 4: run the local rigid registration unit 104. For each preliminarily matched salient region pair F(i, j) from step 3, determine the initial translation parameters t_x, t_y from the centers of regions R_i and R_j, take the rotation angle θ_ij as the initial rotation parameter, and perform local rigid registration with the normalized correlation coefficient as the region similarity measure. The normalized correlation coefficient is defined as:

NCC(F, M) = − Σ_{i=1}^{N} (F_i(x) − F̄(x)) · (M_i(x) − M̄(x)) / √[ Σ_{i=1}^{N} (F_i(x) − F̄(x))² · Σ_{i=1}^{N} (M_i(x) − M̄(x))² ],

where N is the number of pixels in the overlap of the reference and floating regions, F_i(x) and M_i(x) are the i-th pixel values of the reference and floating images, and F̄(x) and M̄(x) are the mean pixel values of the reference and floating regions.
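The similarity measure can be sketched as follows; note the leading minus sign from the definition, which makes NCC a cost to minimize (perfectly correlated regions score −1):

```python
import numpy as np

def ncc(f, m):
    """Negative normalized correlation coefficient between two overlapping
    regions F and M of equal size; lower is better, matching the minus
    sign in the text. Illustrative sketch."""
    f = f.astype(np.float64).ravel()
    m = m.astype(np.float64).ravel()
    fz, mz = f - f.mean(), m - m.mean()
    denom = np.sqrt((fz ** 2).sum() * (mz ** 2).sum())
    return -(fz * mz).sum() / denom if denom > 0 else 0.0
```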
Step 5: run the global quadratic polynomial transformation registration unit 105. After the local rigid registration in step 4, some region registration results may be wrong and must be rejected before the global quadratic polynomial transformation. Applying the clustering method of step 5 in step 3 to the regions after local rigid registration and their global rigid registration parameters, with a finer intra-class distance threshold (related to the image size; 20 in our experiments), we perform cluster analysis in the global rigid transformation parameter space and select the class with the most members as the accurately matched region pairs. The centers of the accurately matched regions are then used as control points for the global quadratic polynomial transformation registration, achieving accurate registration of the two images. The mathematical model of the global quadratic polynomial transformation registration is:

X_C = A · B,

A = [a_00 a_10 a_01 a_11 a_20 a_02; b_00 b_10 b_01 b_11 b_20 b_02],

B = [1, x_D, y_D, x_D y_D, x_D², y_D²]^T,

where X_D = [x_D, y_D]^T is a point in the floating image, X_C = [x_C, y_C]^T is the corresponding point in the reference image, and A is the quadratic polynomial transformation matrix. With K control points, A is solved as follows:

R = [x_C1 x_C2 ... x_CK; y_C1 y_C2 ... y_CK],

D = [1 1 ... 1; x_D1 x_D2 ... x_DK; y_D1 y_D2 ... y_DK; x_D1 y_D1 x_D2 y_D2 ... x_DK y_DK; x_D1² x_D2² ... x_DK²; y_D1² y_D2² ... y_DK²],

A = R D^T (D D^T)^{-1}.
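The normal-equation solve A = R Dᵀ (D Dᵀ)⁻¹ can be sketched as follows; an illustrative sketch assuming K ≥ 6 control points in general position so that D Dᵀ is invertible (a least-squares solver would be more robust in practice, but the formula below follows the text):

```python
import numpy as np

def fit_quadratic_transform(ref_pts, flt_pts):
    """Solve A = R D^T (D D^T)^{-1} for the 2 x 6 quadratic transform that
    maps floating-image control points X_D to reference points X_C = A B,
    with B = [1, x, y, xy, x^2, y^2]^T. Arrays are K x 2."""
    xd, yd = flt_pts[:, 0], flt_pts[:, 1]
    D = np.stack([np.ones_like(xd), xd, yd,
                  xd * yd, xd ** 2, yd ** 2])     # 6 x K monomial matrix
    R = ref_pts.T                                 # 2 x K reference coords
    return R @ D.T @ np.linalg.inv(D @ D.T)

def apply_quadratic(A, pts):
    """Map K x 2 floating points through the fitted transform X_C = A B."""
    x, y = pts[:, 0], pts[:, 1]
    B = np.stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    return (A @ B).T
```

A pure translation is recovered exactly, since it lies in the span of the quadratic model.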
Experimental results:
To verify the method, we chose 20 image pairs as experimental samples, including 5 pairs of low-quality images. The experiments show that the algorithm completes registration of ordinary images in about 4 s; low-quality images take longer, but accurate registration is still completed in about 10 s, and the registration accuracy on 1548 × 1260 images reaches 2 pixels. Concrete registration results are shown in Fig. 5 and Fig. 6. The experiments show that our method is fast, accurate, and robust, and has great application value.
The above is only an embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A visible light image registration method based on salient regions, characterized by comprising the following steps:
Step 1: load two images to be registered on a computer, select one as the reference image and the other as the floating image;
Step 2: divide the reference image and the floating image into M × N rectangular regions and compute the local significance function Ls(R) of each region R; perform Gaussian fitting on Ls(R) to obtain the fitted local significance value Fls(R); select the centers of the local extremum regions of Fls(R) as salient region centers; for each salient region center, compute the region radius from the distribution of Fls(R) in its neighborhood, thereby extracting the salient regions R of the reference image and the floating image;
Step 3: construct a 72-dimensional scale-invariant feature descriptor Lfd(R) for each extracted salient region R; define a distance metric function Dist(Lfd(R_1), Lfd(R_2)) measuring the similarity between two feature descriptors Lfd(R_1) and Lfd(R_2); for any salient region pair C(i, j) of the two images, compute the region matching similarity of C(i, j) and match the salient regions with a coarse-to-fine matching strategy;
Step 4: perform local rigid registration on the preliminarily matched salient region pairs Cmp(i, j), using a similarity measure based on the normalized correlation coefficient;
Step 5: for the regions after local rigid registration, use cluster analysis to extract the center points of the accurately matched regions, use them as control points for global quadratic polynomial transformation registration, and thereby achieve the accurate registration of the two images.
2. The method according to claim 1, characterized in that the local significance function Ls(R) is Ls(R) = Av(R) · Lge(R), wherein:
Av(R) is the normalized region variance function of region R, expressed as Av(R) = σ/μ, where σ is the standard deviation of region R and μ is the mean of region R;
Lge(R) is the gradient-field entropy of region R, expressed as follows:

    Lge(R) = -Σ_{i=1}^{36} p_i(R) · log_2 p_i(R),

where p_i(R) is the proportion of the total gradient magnitude of region R contributed by the pixel set whose gradient direction lies in the i-th sector, expressed as follows:

    p_i(R) = ∫_{R_i} |g(X_i)| dX_i / ∫_R |g(X)| dX,

where R_i is the point set formed by all pixels whose gradient direction lies in the i-th sector, |g(X)| is the gradient magnitude of pixel X, and X_i is a pixel in the point set R_i.
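As an illustrative sketch of Ls(R) = Av(R) · Lge(R) under our own assumptions (36 ten-degree gradient-direction sectors; `np.gradient` stands in for whatever gradient operator the patent uses), not the patented implementation:

```python
import numpy as np

def local_significance(region):
    """Ls(R) = Av(R) * Lge(R): coefficient of variation times
    gradient-orientation entropy over 36 ten-degree sectors."""
    region = np.asarray(region, dtype=float)
    av = region.std() / region.mean()            # Av(R) = sigma / mu
    gy, gx = np.gradient(region)                 # per-axis gradients
    mag = np.hypot(gx, gy)                       # |g(X)|
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # direction in [0, 2*pi)
    sector = np.minimum((ang / (2 * np.pi / 36)).astype(int), 35)
    # p_i: share of total gradient magnitude falling in sector i
    p = np.bincount(sector.ravel(), weights=mag.ravel(), minlength=36)
    p = p / p.sum()
    p = p[p > 0]                                 # drop empty sectors (0*log 0 = 0)
    lge = -np.sum(p * np.log2(p))                # Lge(R)
    return av * lge
```

A region whose gradients all point one way (e.g. a linear ramp) has zero entropy and hence zero significance, while textured regions with diverse gradient directions score high, which is what makes such regions useful anchors for matching.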
3. The method according to claim 1, characterized in that the fitted local significance value Fls(R) is computed as:

    Fls(R_ab) = Σ_{i=1}^{M} Σ_{j=1}^{N} [Ls(R_ij) / (2πσ^2)] · exp(-((a-i)^2 + (b-j)^2) / (2σ^2)),

where the fitting uses a Gaussian kernel function with σ = 1.5, M is the number of rectangular regions the image is divided into in the X direction, N is the number of rectangular regions in the Y direction, a, i ∈ {1, 2, ..., M} are region coordinates in the X direction of the M × N rectangular region array, and b, j ∈ {1, 2, ..., N} are region coordinates in the Y direction of the array.
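The Gaussian fitting above amounts to correlating the M × N grid of Ls values with a Gaussian kernel of σ = 1.5. A minimal NumPy sketch (naming is ours, not the patent's):

```python
import numpy as np

def gaussian_fitted_significance(ls_grid, sigma=1.5):
    """Fls(R_ab) = sum_ij Ls(R_ij)/(2 pi sigma^2) *
                   exp(-((a-i)^2 + (b-j)^2) / (2 sigma^2)).

    ls_grid: (M, N) array of Ls values of the rectangular regions.
    Returns the (M, N) array of fitted values Fls.
    """
    M, N = ls_grid.shape
    a = np.arange(M)[:, None, None, None]
    b = np.arange(N)[None, :, None, None]
    i = np.arange(M)[None, None, :, None]
    j = np.arange(N)[None, None, None, :]
    w = np.exp(-((a - i) ** 2 + (b - j) ** 2) / (2 * sigma**2)) \
        / (2 * np.pi * sigma**2)                  # (M, N, M, N) weights
    return np.einsum('abij,ij->ab', w, ls_grid)   # weighted sum per (a, b)
```

The O((MN)^2) direct sum is fine for the small region grids involved; a production version would instead use a separable Gaussian filter.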
4. The method according to claim 1, characterized in that the computation of the region radius of each salient region center comprises:
taking the rectangular region R_ab in which the salient region center is located as the center, constructing a square set of rectangular regions Ω of maximum radius, where Ω must satisfy the following condition:

    Fls(R_ij) ≥ λ · Fls(R_ab) for all R_ij ∈ Ω,

where Fls(R_ij) is the fitted local significance value of region R_ij and λ is a region radius control parameter with an empirical value of 0.75; the smaller of the length and width of Ω is selected as the salient region radius.
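One way to read the region growing in claim 4, sketched under our own assumptions (radius measured in grid cells, square growth around the center cell (a, b), growth stopping at the image border or at the first cell below the threshold):

```python
import numpy as np

def salient_region_radius(fls, a, b, lam=0.75):
    """Grow the largest square of grid cells around (a, b) in which every
    cell satisfies Fls(R_ij) >= lam * Fls(R_ab); return its half-width
    as the salient region radius (in grid cells)."""
    M, N = fls.shape
    thresh = lam * fls[a, b]
    r = 0
    while True:
        a0, a1 = a - (r + 1), a + (r + 1)
        b0, b1 = b - (r + 1), b + (r + 1)
        if a0 < 0 or b0 < 0 or a1 >= M or b1 >= N:
            break  # next ring would leave the grid
        if np.all(fls[a0:a1 + 1, b0:b1 + 1] >= thresh):
            r += 1  # whole enlarged square still passes the threshold
        else:
            break
    return r
```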
5. The method according to claim 1, characterized in that the 72-dimensional scale-invariant feature descriptor is constructed as:

    Lfd(R) = (p_1(R), ..., p_36(R), da_1(R), ..., da_36(R)),

where p_i(R) is the proportion of the total gradient magnitude of region R contributed by the pixel set whose gradient direction lies in the i-th sector, and da_i(R) ∈ [0, 2π) is the direction angle from the salient region center to the geometric center of the pixel set whose gradient direction lies in the i-th sector; a distance metric function Dist(Lfd(R_1), Lfd(R_2)) measuring the similarity between two feature descriptors Lfd(R_1) and Lfd(R_2) is then defined as:

    Dist(Lfd(R_1), Lfd(R_2)) = [ Σ_{i=1}^{36} Eud(da_i(R_1), da_i(R_2))^2 · Max(p_i(R_1), p_i(R_2)) ]
                             · [ Σ_{i=1}^{36} log( Max(p_i(R_1), p_i(R_2)) / Min(p_i(R_1), p_i(R_2)) ) · Max(p_i(R_1), p_i(R_2)) ],

where Eud(da_i(R_1), da_i(R_2)) is the angle between the i-th direction angles of R_1 and R_2.
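A hedged Python sketch of this descriptor distance. The reading of the original (partly garbled) formula as the product of an angle-difference sum and a magnitude-ratio sum is our reconstruction, and the small epsilon guarding empty sectors is our addition:

```python
import numpy as np

def lfd_distance(p1, da1, p2, da2, eps=1e-12):
    """Distance between two 72-D descriptors Lfd = (p_1..p_36, da_1..da_36):
    angle-difference term times magnitude-ratio term, each weighted by the
    larger sector magnitude Max(p_i)."""
    p1, da1 = np.asarray(p1, float), np.asarray(da1, float)
    p2, da2 = np.asarray(p2, float), np.asarray(da2, float)
    # Eud: smallest angle between the two direction angles, in [0, pi]
    d = np.abs(da1 - da2) % (2 * np.pi)
    eud = np.minimum(d, 2 * np.pi - d)
    pmax = np.maximum(p1, p2)
    pmin = np.minimum(p1, p2)
    angle_term = np.sum(eud**2 * pmax)
    ratio_term = np.sum(np.log((pmax + eps) / (pmin + eps)) * pmax)
    return angle_term * ratio_term
```

Identical descriptors yield a distance of exactly zero, since both factors vanish; larger angular or magnitude disagreements increase the distance.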
6. The method according to claim 1, characterized in that the matching of the salient regions comprises:
1) traversing every possible salient region match C(i, j) of the two images, where i denotes the i-th salient region R_i of the reference image and j denotes the j-th salient region R_j of the floating image; a pair C(i, j) satisfying the following condition is considered a coarsely matched salient region pair Cmp(i, j):

    [ Min(Av(R_i), Av(R_j)) / Max(Av(R_i), Av(R_j)) ] · [ Min(Lge(R_i), Lge(R_j)) / Max(Lge(R_i), Lge(R_j)) ] > T,

where Av(R) is the normalized region variance function of region R, Lge(R) is the gradient-field entropy of region R, the Min() function takes the minimum, the Max() function takes the maximum, and T is a coarse matching control parameter with an empirical value of 0.6;
2) for each coarsely matched region pair Cmp(i, j), computing the similarity S(i, j) and rotation angle θ_ij between R_i and R_j as follows:

    θ_ij = 2kπ/36,
    S(i, j) = Dist(Lfd(R_i), Lfd(R_jk)),
    k = arg min_k Dist(Lfd(R_i), Lfd(R_jk)), k ∈ {0, 1, ..., 35},

where R_jk is the new region obtained by rotating R_j counterclockwise by 10k degrees; each coarsely matched region pair Cmp(i, j) thus determines three global rigid transformation parameters: the two-dimensional translation, determined by the region centers of R_i and R_j, and the rotation parameter θ_ij;
3) sorting all Cmp(i, j) by S(i, j) in ascending order and taking the first 2000 coarsely matched region pairs Cmp(i, j) as the input sample set (if fewer coarsely matched pairs were extracted, the actual number is used); setting a suitable intra-class distance threshold, clustering the pairs Cmp(i, j) in the global rigid transformation parameter space, and choosing the class containing the most members as the preliminarily matched salient region pairs F(i, j); by the magnitude of S(i, j), rejecting repeated regions to guarantee that F(i, j) contains no repeated region, which reduces the subsequent computation.
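The clustering of sub-step 3) in the (tx, ty, θ) parameter space can be sketched as follows. This is only an illustrative simplification: the greedy single-pass grouping, the tolerance values, and the function name are our own choices, not the patent's clustering method:

```python
import numpy as np

def largest_transform_cluster(params, trans_tol=20.0, angle_tol=np.pi / 18):
    """Greedy grouping of candidate rigid transforms (tx, ty, theta):
    each unlabeled sample seeds a class and absorbs every later sample
    within the translation and rotation tolerances; the indices of the
    largest class are returned as the consistent (inlier) set."""
    params = np.asarray(params, dtype=float)
    n = len(params)
    labels = np.full(n, -1, dtype=int)
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = i                  # sample i seeds a new class
        for j in range(i + 1, n):
            if labels[j] != -1:
                continue
            dt = np.hypot(*(params[j, :2] - params[i, :2]))
            da = abs(params[j, 2] - params[i, 2]) % (2 * np.pi)
            da = min(da, 2 * np.pi - da)   # wrap-around angle difference
            if dt <= trans_tol and da <= angle_tol:
                labels[j] = i
    seeds, counts = np.unique(labels, return_counts=True)
    return np.flatnonzero(labels == seeds[np.argmax(counts)])
```

Because mismatched region pairs produce scattered transform parameters while correct ones agree, the largest cluster acts as a consensus vote, in the same spirit as RANSAC-style inlier selection.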
7. The method according to claim 1, characterized in that the local rigid registration is performed on the preliminarily matched salient region pairs F(i, j), using the centers of the salient regions R_i, R_j and the rotation angle θ_ij as the initial registration parameters.
8. The method according to claim 1, characterized in that achieving the accurate registration of the two images comprises: for the regions after local rigid registration, setting a finer intra-class distance threshold, performing cluster analysis in the global rigid transformation parameter space, and extracting the accurately matched region pairs; and performing global quadratic polynomial transformation registration with the center points of the accurately matched regions as control points, thereby achieving the accurate registration of the two images.
CN2009100889753A 2009-07-15 2009-07-15 Visible light image registration method based on salient region Active CN101763633B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100889753A CN101763633B (en) 2009-07-15 2009-07-15 Visible light image registration method based on salient region


Publications (2)

Publication Number Publication Date
CN101763633A CN101763633A (en) 2010-06-30
CN101763633B true CN101763633B (en) 2011-11-09

Family

ID=42494788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100889753A Active CN101763633B (en) 2009-07-15 2009-07-15 Visible light image registration method based on salient region

Country Status (1)

Country Link
CN (1) CN101763633B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950419B (en) * 2010-08-26 2012-09-05 西安理工大学 Quick image rectification method in presence of translation and rotation at same time
CN102663738A (en) * 2012-03-20 2012-09-12 苏州生物医学工程技术研究所 Method and system for three-dimensional image registration
CN103400393B (en) * 2013-08-21 2016-07-20 中科创达软件股份有限公司 A kind of image matching method and system
CN103810709B (en) * 2014-02-25 2016-08-17 南京理工大学 Eye fundus image based on blood vessel projects method for registering images with SD-OCT
CN104392462B (en) * 2014-12-16 2017-06-16 西安电子科技大学 A kind of SAR image registration method based on significantly segmentation subregion pair
CN104504723B (en) * 2015-01-14 2017-05-17 西安电子科技大学 Image registration method based on remarkable visual features
CN106991694B (en) * 2017-03-17 2019-10-11 西安电子科技大学 Based on marking area area matched heart CT and ultrasound image registration method
CN110516618B (en) * 2019-08-29 2022-04-12 苏州大学 Assembly robot and assembly method and system based on vision and force position hybrid control

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1920882A (en) * 2005-08-24 2007-02-28 西门子共同研究公司 System and method for salient region feature based 3d multi modality registration of medical images
CN1985275A (en) * 2003-09-22 2007-06-20 美国西门子医疗解决公司 Method and system for hybrid rigid registration based on joint correspondences between scale-invariant salient region features
CN101038669A (en) * 2007-04-12 2007-09-19 上海交通大学 Robust image registration method based on association saliency image in global abnormal signal environment




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant