CN103679193A - FREAK-based high-speed high-density packaging component rapid location method - Google Patents


Info

Publication number
CN103679193A
CN103679193A (application CN201310562520.7A)
Authority
CN
China
Prior art keywords
point
image
freak
prime
unique point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310562520.7A
Other languages
Chinese (zh)
Inventor
高红霞
吴丽璇
陈安
胡跃明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201310562520.7A priority Critical patent/CN103679193A/en
Publication of CN103679193A publication Critical patent/CN103679193A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a FREAK-based rapid positioning method for high-speed, high-density packaged components. The method comprises the following steps: detecting key-point positions with the Hessian matrix used in the SURF registration method; determining the main direction of each feature point from the neighborhood information of the key point; training and learning sampling-point comparison pairs according to the distribution of a retinal model, and constructing the FREAK feature vector from those pairs; using the Hamming distance as the similarity measure between FREAK feature vectors to perform nearest-neighbor matching; and constructing an affine transformation equation from the matched feature-point pairs, solving it by least squares, and computing the spatial transformation model parameters, namely the translation parameter m in the x direction, the translation parameter n in the y direction, and the rotation angle β. Compared with the prior art, the method greatly reduces the computational complexity and storage cost of feature description and matching, and achieves high-speed, high-precision sub-pixel positioning.

Description

A FREAK-based rapid positioning method for high-speed, high-density packaged components
Technical field
The present invention relates to the field of recognition and positioning in precision electronics assembly, and in particular to a FREAK-based rapid positioning method for high-speed, high-density packaged components.
Background technology
Surface mount technology (SMT) originated in the United States in the 1960s and was at first used mainly in military electronics. In the 1970s, as the Japanese electronics industry pushed strongly into consumer products, the unrivalled manufacturing advantages of SMT gradually became apparent and spread rapidly through the electronics industry. Since the 1980s, driven by the fast growth of consumer electronics and by governments' full recognition of the strategic status of the electronics industry, SMT, hailed as "the fourth assembly revolution", has received unprecedented attention and development. Today SMT shapes the product level of fields such as communications, household appliances, computing, networking, automation, aviation, aerospace, and navigation, and its associated technology and equipment are an important mark of a country's electronic information manufacturing level.
Digital image registration (Image Registration) is a basic task in image processing: given two images of the same object taken at different times, from different viewpoints, or under different conditions (camera, sensor, etc.), registration is the process of bringing them into geometric alignment, i.e. of evaluating the similarity of two or more images to determine corresponding points. Image registration is a fundamental problem in image processing and its applications, with important uses in aerial image mosaicking, three-dimensional imaging, machine vision and pattern recognition, remote sensing data processing, and medical image analysis. Because the visual processing task in SMT coincides with the registration task, image registration is an important component of SMT vision inspection systems, providing the necessary preprocessing for subsequent inspection. Feature-based image registration, which does not depend directly on grey levels and offers good robustness, strong immunity to interference, a small computational load, and high speed, has become the most widely used class of registration methods. In general, the basic registration framework comprises four steps: feature detection, feature matching, transformation model parameter estimation, and image resampling.
Since Lowe and Bay et al. proposed the SIFT and SURF algorithms, the pursuit of faster and more robust feature descriptors has become a popular trend. An ideal feature descriptor combines high robustness and distinctiveness with low algorithmic complexity. To make feature description practical on smartphones and embedded avionics, Alexandre Alahi proposed a new keypoint descriptor, FREAK (Fast Retina Keypoint), at CVPR 2012. Balancing the three properties above, this descriptor takes computation speed as its guiding goal and emphasizes real-time performance, while its robustness and distinctiveness also remain good. FREAK focuses on feature description; keypoint localization can reuse existing methods such as the Hessian matrix, DoH detection, or Harris corner localization. Applying the FREAK keypoint descriptor to recognition and positioning in precision electronics assembly is therefore of high research significance.
Summary of the invention
The main purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a FREAK-based rapid positioning method for high-speed, high-density packaged components. Building FREAK binary features on top of SURF keypoint detection, the method greatly increases the speed of image registration, achieves high-speed, high-precision visual detection and positioning of components, and at the same time has strong robustness.
The object of the present invention is achieved by the following technical scheme. A FREAK-based rapid positioning method for high-speed, high-density packaged components comprises the following steps:
(1) Key-point localization: input the image to be registered and the template image, and detect the key-point positions with the Hessian matrix.
(2) Feature description: use the neighborhood information of each key point to determine the main direction of the feature point; train and learn sampling-point comparison pairs according to the distribution of the retinal model, and build the FREAK feature vector from those pairs.
(3) Feature matching: using the FREAK feature vectors, take the Hamming distance (i.e. an XOR operation) as the similarity measure between feature vectors and perform nearest-neighbor matching; from the matched feature-point pairs, construct the affine transformation equation, solve it by least squares, and compute the spatial transformation model parameters, namely the translation parameter m in the x direction, the translation parameter n in the y direction, and the rotation angle β.
Specifically, the key-point localization of step (1) proceeds as follows:
(1-1) Input the image to be registered I(x, y) and the template image f(x, y), and form the integral image of each.
(1-2) Build the fast Hessian matrix and construct the scale space from it; then use the integral images obtained in step (1-1) to obtain the three-dimensional scale-space response map.
(1-3) Threshold the three-dimensional scale-space response map, keeping only pixels with a strong response; then apply non-maximum suppression to find candidate feature points; finally, interpolate each feature point over its neighboring pixels with a three-dimensional quadratic fit to obtain the key-point position.
Specifically, the integral image in step (1-1) is formed as follows. For a given image Q and a point (x, y) in Q, the value of the integral image at (x, y) is the sum of the pixel values of all pixels in the rectangular region spanned by the origin of Q and the pixel (x, y). With the integral image, the sum of all pixel values in any rectangular region reduces to 4 array accesses and 3 additions/subtractions. However large the rectangular region, the time to compute its pixel sum is constant; the larger the region, the clearer the advantage of this method. The SURF registration method uses this property of the integral image to keep the convolution time of box filters of variable size nearly constant.
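The constant-time rectangle sum described above can be sketched in a few lines; this is a minimal NumPy illustration, not code from the patent, and the function names are ours:

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img[0..y, 0..x]
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    # Sum over img[top..bottom, left..right] using at most 4 accesses
    # and 3 additions/subtractions, regardless of the region's area.
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 9 + 10 = 30
```

However many pixels the rectangle covers, `rect_sum` touches only its four corners in the integral image, which is exactly the property SURF exploits for its box filters.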
Specifically, the fast Hessian matrix in step (1-2) is built as follows:
For a point X = (x, y) in a given image, the Hessian matrix H(X, σ) of X at scale σ is defined as

$$H(X,\sigma)=\begin{pmatrix}L_{xx}(X,\sigma)&L_{xy}(X,\sigma)\\L_{xy}(X,\sigma)&L_{yy}(X,\sigma)\end{pmatrix};$$

where $L_{xx}(X,\sigma)$ is the convolution of the image at point X with the Gaussian second-order derivative $\partial^2 g(\sigma)/\partial x^2$, $L_{xy}(X,\sigma)$ the convolution with $\partial^2 g(\sigma)/\partial x\,\partial y$, and $L_{yy}(X,\sigma)$ the convolution with $\partial^2 g(\sigma)/\partial y^2$.

Let $D_{xx}$, $D_{yy}$ and $D_{xy}$ be the responses of the box filters in the x, y and xy directions convolved with the image; the determinant of the Hessian is then approximated as

$$\det(H_{\text{approx}})=D_{xx}D_{yy}-(0.9\,D_{xy})^2.$$
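The approximated determinant can be illustrated with a small sketch. For brevity, plain second finite differences stand in for SURF's box-filter responses $D_{xx}$, $D_{yy}$, $D_{xy}$ (an assumption of ours; SURF evaluates them as box filters over the integral image):

```python
import numpy as np

def hessian_det_approx(img, w=0.9):
    # Per-pixel det(H_approx) = Dxx*Dyy - (w*Dxy)^2, with the second
    # derivatives taken as finite differences (our simplification;
    # SURF uses box filters over the integral image instead).
    Dxx = np.zeros_like(img, dtype=float)
    Dyy = np.zeros_like(img, dtype=float)
    Dxy = np.zeros_like(img, dtype=float)
    Dxx[:, 1:-1] = img[:, 2:] - 2 * img[:, 1:-1] + img[:, :-2]
    Dyy[1:-1, :] = img[2:, :] - 2 * img[1:-1, :] + img[:-2, :]
    Dxy[1:-1, 1:-1] = (img[2:, 2:] - img[2:, :-2]
                       - img[:-2, 2:] + img[:-2, :-2]) / 4.0
    return Dxx * Dyy - (w * Dxy) ** 2

# A Gaussian blob: the response should peak at the blob center,
# which is why this measure is used as a blob detector.
y, x = np.mgrid[0:11, 0:11]
blob = np.exp(-((x - 5.0) ** 2 + (y - 5.0) ** 2) / 4.0)
resp = hessian_det_approx(blob)
print(np.unravel_index(resp.argmax(), resp.shape))  # (5, 5)
```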
Preferably, the feature description of step (2) proceeds as follows:
(2-1) Smooth and denoise each sampling point in the key-point neighborhood, applying a different Gaussian kernel according to the retinal-model distribution; the size of the Gaussian kernel grows exponentially with the distance of the sampling point from the center.
(2-2) Determine the main direction of the feature point: select the centrally symmetric receptive-field pairs to estimate the gradient, and take this gradient as the main direction of the feature point. Let G be the set of comparison pairs used to compute the gradient; the main direction O of the feature point is computed as

$$O=\frac{1}{M}\sum_{P_o\in G}\big(I(P_o^{\gamma_1})-I(P_o^{\gamma_2})\big)\,\frac{P_o^{\gamma_1}-P_o^{\gamma_2}}{\lVert P_o^{\gamma_1}-P_o^{\gamma_2}\rVert};$$

where M is the number of pairs in the point set G, $P_o^{\gamma_1}$ and $P_o^{\gamma_2}$ are the two-dimensional image coordinates of the centers of the compared receptive fields, $I(P_o^{\gamma_1})$ and $I(P_o^{\gamma_2})$ are the corresponding grey values, and $\gamma_1$ and $\gamma_2$ denote the two receptive fields whose intensities are compared.
(2-3) Rotate the feature-point neighborhood according to the main direction of the feature point and the retinal-model distribution, sample the rotated neighborhood with the retinal model, and generate the FREAK binary feature descriptor F as

$$F=\sum_{0\le\alpha<N}2^{\alpha}\,T(P_\alpha),\qquad T(P_\alpha)=\begin{cases}1,&I(P_\alpha^{\gamma_1})>I(P_\alpha^{\gamma_2})\\0,&\text{otherwise;}\end{cases}$$

where $P_\alpha$ is a pair of sampled receptive fields; N is the descriptor length, i.e. the number of receptive-field pairs (with M receptive fields in total, M(M-1)/2 candidate pairs are available); α is the left-shift (bit-position) value of the binary description; $T(P_\alpha)$ compares the image information of the pair $P_\alpha$, the image information used being the grey-level sum or the regional mean of the receptive field; and $\gamma_1$ and $\gamma_2$ denote the two receptive fields whose intensities are compared.
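The binary test and the bit packing above amount to a few lines of code. The sketch below is ours, with hypothetical receptive-field means and comparison pairs standing in for the smoothed samples of the retinal pattern:

```python
import numpy as np

def freak_bits(field_means, pairs):
    # T(P_alpha) = 1 iff the smoothed intensity of the first receptive
    # field of the pair exceeds that of the second.
    return np.array([1 if field_means[i] > field_means[j] else 0
                     for (i, j) in pairs], dtype=np.uint8)

def pack_descriptor(bits):
    # F = sum over alpha of 2^alpha * T(P_alpha)  (alpha = bit position)
    return sum(int(b) << a for a, b in enumerate(bits))

means = [0.8, 0.2, 0.5, 0.9]      # hypothetical smoothed receptive-field means
pairs = [(0, 1), (2, 3), (3, 1)]  # hypothetical learned comparison pairs
bits = freak_bits(means, pairs)
print(bits.tolist(), pack_descriptor(bits))  # [1, 0, 1] 5
```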
(2-4) From the existing receptive-field pairs, obtain the comparison pairs with high variance and low correlation, in the following steps:
(2-4-1) create a matrix D in which each row is the descriptor of one sample point;
(2-4-2) compute the mean of each column; the distance of the mean from 0.5 represents the variance of the column;
(2-4-3) sort the columns by this value in ascending order;
(2-4-4) keep the column with the smallest value (i.e. highest variance) and iteratively select from the remaining columns those with low correlation to the kept columns, until the FREAK feature descriptor reaches the predetermined dimension; this yields the FREAK feature vector.
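Steps (2-4-1) to (2-4-4) can be sketched as follows. This is a minimal illustration under our own assumptions (a tiny training matrix and an arbitrary correlation threshold), not the patent's training procedure:

```python
import numpy as np

def select_pairs(D, n_pairs, corr_thresh=0.2):
    # D: one row per training keypoint, one 0/1 column per candidate pair.
    # Rank columns by |mean - 0.5| (smaller = higher variance for a binary
    # column), then greedily keep columns weakly correlated with those kept.
    order = np.argsort(np.abs(D.mean(axis=0) - 0.5))
    kept = [int(order[0])]
    for col in order[1:]:
        if len(kept) >= n_pairs:
            break
        corrs = []
        for k in kept:
            c = np.corrcoef(D[:, col], D[:, k])[0, 1]
            corrs.append(1.0 if np.isnan(c) else abs(c))
        if max(corrs) < corr_thresh:
            kept.append(int(col))
    return kept

# Column 1 duplicates column 0, so only columns 0 and 2 should survive.
D = np.array([[0, 0, 0], [1, 1, 0], [0, 0, 1], [1, 1, 1],
              [0, 0, 0], [1, 1, 0], [0, 0, 1], [1, 1, 1]])
print(select_pairs(D, 2))  # [0, 2]
```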
Preferably, the feature matching of step (3) proceeds as follows:
(3-1) Use the Hamming distance as the similarity measure between feature vectors and perform nearest-neighbor matching: first search with the descriptive features in the first 16 bytes, which represent the coarse (blurred) information; if the matching distance is below the set threshold, a matched feature-point pair is obtained and the method proceeds to step (3-2).
(3-2) Substitute each matched feature-point pair (x_1, y_1) and (x_2, y_2) into the affine transformation formula

$$\begin{pmatrix}x_2\\y_2\end{pmatrix}=\begin{pmatrix}m''\\n''\end{pmatrix}+s\begin{pmatrix}\cos\beta''&-\sin\beta''\\\sin\beta''&\cos\beta''\end{pmatrix}\begin{pmatrix}x_1\\y_1\end{pmatrix};$$

solve by least squares to obtain the transformation parameters (m'', n'', β'', s); average the parameters (m'', n'', β'') over all matched feature-point pairs to obtain the coarse transformation relation (m, n, β) between I(x, y) and f(x, y).
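The least-squares solve above becomes linear after the substitution a = s·cos β, b = s·sin β. The sketch below is our illustration of that standard trick, stacking all matched pairs into one system rather than prescribing the patent's exact estimator:

```python
import numpy as np

def estimate_transform(p1, p2):
    # Least-squares fit of [x2, y2]^T = [m, n]^T + s * R(beta) [x1, y1]^T,
    # linear in (m, n, a, b) with a = s*cos(beta), b = s*sin(beta):
    #   x2 = m + a*x1 - b*y1
    #   y2 = n + b*x1 + a*y1
    rows, rhs = [], []
    for (x1, y1), (x2, y2) in zip(p1, p2):
        rows.append([1, 0, x1, -y1]); rhs.append(x2)
        rows.append([0, 1, y1,  x1]); rhs.append(y2)
    m, n, a, b = np.linalg.lstsq(np.asarray(rows, float),
                                 np.asarray(rhs, float), rcond=None)[0]
    return m, n, float(np.hypot(a, b)), float(np.arctan2(b, a))  # m, n, s, beta

# Synthetic check: recover known parameters from noise-free matches.
m0, n0, s0, b0 = 2.0, -1.0, 1.5, 0.3
p1 = [(0, 0), (1, 0), (0, 1), (2, 3), (4, 1)]
p2 = [(m0 + s0 * (np.cos(b0) * x - np.sin(b0) * y),
       n0 + s0 * (np.sin(b0) * x + np.cos(b0) * y)) for x, y in p1]
print(estimate_transform(p1, p2))  # approximately (2, -1, 1.5, 0.3)
```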
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. The present invention uses the FREAK descriptor to detect the features of various novel surface-mount components, which makes the detection method highly robust to noise and to chip offset and rotation.
2. The FREAK descriptor adopted by the present invention can describe image features quickly; on the basis of Hessian-matrix key-point localization it greatly reduces the computational complexity and storage cost of feature description and matching, and achieves high-speed, high-precision sub-pixel positioning, which is significant for practical vision-based detection and positioning.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention.
Fig. 2 is a schematic diagram of the integral image.
Fig. 3 is a schematic diagram of the FREAK sampling pattern based on the retinal receptive-field distribution.
Fig. 4 is a schematic diagram of the sampling points chosen for computing the rotation angle.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the embodiments and the drawings, but the embodiments of the present invention are not limited thereto.
Embodiment 1
As shown in Fig. 1, the FREAK-based rapid positioning method for high-speed, high-density packaged components of this embodiment comprises the following steps:
S1 Key-point localization: input the image to be registered I(x, y) and the template image f(x, y), and use the Hessian matrix to detect the blob positions of I(x, y) and f(x, y) respectively, as follows:
S1.1 Form the integral image of I(x, y) and f(x, y) respectively:
For a given image such as I(x, y) and a point (x, y), the integral image value at that point is the sum of the pixel values of all pixels in the rectangular region spanned by the origin of I(x, y) and the pixel (x, y). With the integral image, the sum of all pixel values in any rectangular region reduces to 4 array accesses and 3 additions/subtractions. A schematic diagram of the integral image is shown in Fig. 2. For the rectangular region shown in Fig. 2, with the vertices of the rectangle denoted A, B, C, and D, the pixel-value sum of the region is
I = A + D - (C + B).
Clearly, however large the rectangular region, the time to compute its pixel sum is constant; the larger the region, the clearer the advantage of this method. The SURF registration method uses this property of the integral image to keep the convolution time of box filters of variable size nearly constant.
S1.2 Detect feature points on I(x, y) and f(x, y) respectively, in the following steps:
S1.2.1 Build the fast Hessian matrix. For a point X = (x, y) in a given image, the Hessian matrix H(X, σ) of X at scale σ is defined as

$$H(X,\sigma)=\begin{pmatrix}L_{xx}(X,\sigma)&L_{xy}(X,\sigma)\\L_{xy}(X,\sigma)&L_{yy}(X,\sigma)\end{pmatrix}\quad(1)$$

where $L_{xx}(X,\sigma)$ is the convolution of the image at point X with the Gaussian second-order derivative $\partial^2 g(\sigma)/\partial x^2$, $L_{xy}(X,\sigma)$ the convolution with $\partial^2 g(\sigma)/\partial x\,\partial y$, and $L_{yy}(X,\sigma)$ the convolution with $\partial^2 g(\sigma)/\partial y^2$.

Let $D_{xx}$, $D_{yy}$ and $D_{xy}$ be the responses of the box filters in the x, y and xy directions convolved with the image; the determinant of the Hessian is then approximated as

$$\det(H_{\text{approx}})=D_{xx}D_{yy}-(0.9\,D_{xy})^2\quad(2)$$

S1.2.2 Construct the scale space from the fast Hessian matrix and use the integral image obtained in step S1.1 to obtain the three-dimensional scale-space response map. The scale space is built with an image pyramid: the SURF registration method convolves the original image with box filters of gradually increasing size. Because the convolution is evaluated with the integral image, box filters of different sizes cost the same, which improves the efficiency of the algorithm.
S1.2.3 Localize the feature points accurately: first, threshold the three-dimensional scale-space response map, keeping only pixels with a strong response; then apply non-maximum suppression to find candidate feature points; finally, interpolate each feature point over its neighboring pixels with a three-dimensional quadratic fit, giving it sub-pixel and sub-scale precision.
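The three-dimensional quadratic fit in step S1.2.3 can be sketched as the classical refinement offset $-H^{-1} g$ around the discrete extremum; this is our illustration of the standard technique, with the response map indexed as R[y, x, s]:

```python
import numpy as np

def subpixel_offset(R, x, y, s):
    # Fit a 3-D quadratic around the discrete extremum (x, y, s) of the
    # response map R (indexed R[y, x, s]) and return the sub-pixel /
    # sub-scale offset -H^{-1} g of the true extremum.
    g = np.array([(R[y, x + 1, s] - R[y, x - 1, s]) / 2.0,
                  (R[y + 1, x, s] - R[y - 1, x, s]) / 2.0,
                  (R[y, x, s + 1] - R[y, x, s - 1]) / 2.0])
    H = np.zeros((3, 3))
    H[0, 0] = R[y, x + 1, s] - 2 * R[y, x, s] + R[y, x - 1, s]
    H[1, 1] = R[y + 1, x, s] - 2 * R[y, x, s] + R[y - 1, x, s]
    H[2, 2] = R[y, x, s + 1] - 2 * R[y, x, s] + R[y, x, s - 1]
    H[0, 1] = H[1, 0] = (R[y + 1, x + 1, s] - R[y + 1, x - 1, s]
                         - R[y - 1, x + 1, s] + R[y - 1, x - 1, s]) / 4.0
    H[0, 2] = H[2, 0] = (R[y, x + 1, s + 1] - R[y, x - 1, s + 1]
                         - R[y, x + 1, s - 1] + R[y, x - 1, s - 1]) / 4.0
    H[1, 2] = H[2, 1] = (R[y + 1, x, s + 1] - R[y - 1, x, s + 1]
                         - R[y + 1, x, s - 1] + R[y - 1, x, s - 1]) / 4.0
    return np.linalg.solve(H, -g)  # (dx, dy, ds)

# Exact quadratic with extremum at (x, y, s) = (5.3, 4.8, 2.2):
# the recovered offset from the discrete peak (5, 5, 2) is (0.3, -0.2, 0.2).
yy, xx, ss = np.mgrid[0:9, 0:11, 0:5]
R = -((xx - 5.3) ** 2 + (yy - 4.8) ** 2 + (ss - 2.2) ** 2)
print(subpixel_offset(R, 5, 5, 2))
```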
S2 Feature description: collect the feature information of the key-point neighborhood and generate a FREAK binary descriptor string. The steps are as follows:
S2.1 Smooth and denoise each sampling point of the key-point neighborhood, applying a different Gaussian kernel according to the retinal-model distribution. As shown in Fig. 3, the size of the Gaussian kernel grows exponentially with the distance of the sampling point from the center.
S2.2 Determine the main direction of the feature point: to compute the gradient of the rotated target, the FREAK algorithm selects the 45 point pairs shown in Fig. 4 to compute the angle. Let G be the set of comparison pairs used to compute the gradient. The main direction O of the feature point is computed as

$$O=\frac{1}{M}\sum_{P_o\in G}\big(I(P_o^{\gamma_1})-I(P_o^{\gamma_2})\big)\,\frac{P_o^{\gamma_1}-P_o^{\gamma_2}}{\lVert P_o^{\gamma_1}-P_o^{\gamma_2}\rVert}\quad(3)$$

where M is the number of pairs in the point set G, $P_o^{\gamma_1}$ and $P_o^{\gamma_2}$ are the two-dimensional image coordinates of the centers of the compared receptive fields, $I(P_o^{\gamma_1})$ and $I(P_o^{\gamma_2})$ are the corresponding grey values, and $\gamma_1$ and $\gamma_2$ denote the two receptive fields whose intensities are compared.
S2.3 Rotate the feature-point neighborhood according to the main direction of the feature point, sample the rotated neighborhood with the retinal model, and generate the FREAK binary feature descriptor F as

$$F=\sum_{0\le\alpha<N}2^{\alpha}\,T(P_\alpha),\qquad T(P_\alpha)=\begin{cases}1,&I(P_\alpha^{\gamma_1})>I(P_\alpha^{\gamma_2})\\0,&\text{otherwise;}\end{cases}$$

where $P_\alpha$ is a pair of sampled receptive fields whose spatial distribution is shown in Fig. 3; N is the descriptor length, i.e. the number of receptive-field pairs (with M receptive fields in total, M(M-1)/2 candidate pairs are available); α is the left-shift (bit-position) value of the binary description; $T(P_\alpha)$ compares the image information of the pair $P_\alpha$, the image information used being the grey-level sum or the regional mean of the receptive field, and this embodiment uses the regional mean; $\gamma_1$ and $\gamma_2$ denote the two receptive fields whose intensities are compared.
S2.4 Learn by training the sampling-point comparison pairs with high variance and low correlation. The steps are: (1) create a matrix D in which each row is the descriptor of one sample point; (2) compute the mean of each column; the distance of the mean from 0.5 represents the variance of the column; (3) sort the columns by this value in ascending order; (4) keep the column with the smallest value (i.e. highest variance) and iteratively select from the remaining columns those with low correlation to the kept columns, until the FREAK feature descriptor reaches the predetermined dimension.
S3 Feature matching: match the feature points to obtain the spatial transformation relation (m, n, β) between I(x, y) and f(x, y), where m and n are the translation parameters in the x and y directions and β is the rotation angle. The steps are as follows:
S3.1 Because the FREAK feature vector is a binary string, the Hamming distance (an XOR operation) can replace the traditional Euclidean distance when matching feature vectors.
During FREAK matching, the first 16 bytes, which represent the coarse (blurred) information, are searched first; only if this matching distance is below the set threshold is the remainder of the feature compared. This search strategy quickly rejects up to 90% of the irrelevant candidate matches.
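The two-stage search described above can be sketched as follows. The XOR-and-popcount distance is standard for binary descriptors; the head threshold here is a hypothetical value chosen for illustration, not one taken from the patent:

```python
import numpy as np

def hamming(a, b):
    # XOR the uint8 descriptor arrays, then count the set bits.
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def cascade_match(query, candidates, head_bytes=16, head_thresh=24):
    # Two-stage search: compare only the first bytes (the coarse part of
    # the descriptor) and do the full comparison only for survivors.
    best, best_d = -1, None
    for i, cand in enumerate(candidates):
        if hamming(query[:head_bytes], cand[:head_bytes]) >= head_thresh:
            continue                      # rejected cheaply
        d = hamming(query, cand)          # full 64-byte comparison
        if best_d is None or d < best_d:
            best, best_d = i, d
    return best, best_d

query = np.zeros(64, dtype=np.uint8)
near = query.copy(); near[60] = 0b11      # differs by 2 bits in the tail
far = np.full(64, 255, dtype=np.uint8)    # rejected by the 16-byte head
print(cascade_match(query, [near, far]))  # (0, 2)
```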
S3.2 Substitute each matched feature-point pair (x_1, y_1) and (x_2, y_2) into the affine transformation formula

$$\begin{pmatrix}x_2\\y_2\end{pmatrix}=\begin{pmatrix}m''\\n''\end{pmatrix}+s\begin{pmatrix}\cos\beta''&-\sin\beta''\\\sin\beta''&\cos\beta''\end{pmatrix}\begin{pmatrix}x_1\\y_1\end{pmatrix}\quad(4)$$

and solve by least squares to obtain the transformation parameters (m'', n'', β'', s) of formula (4); average the parameters (m'', n'', β'') over all matched feature-point pairs to obtain the coarse transformation relation (m, n, β) between I(x, y) and f(x, y).
The embodiment described above is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited to it; any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and shall fall within the protection scope of the present invention.

Claims (7)

1. A FREAK-based rapid positioning method for high-speed, high-density packaged components, characterized in that it comprises the following steps:
(1) key-point localization: input the image to be registered and the template image, and detect the key-point positions with the Hessian matrix;
(2) feature description: use the neighborhood information of each key point to determine the main direction of the feature point; train and learn sampling-point comparison pairs according to the distribution of the retinal model, and build the FREAK feature vector from those pairs;
(3) feature matching: using the FREAK feature vectors, take the Hamming distance as the similarity measure between feature vectors and perform nearest-neighbor matching; from the matched feature-point pairs, construct the affine transformation equation, solve it by least squares, and compute the spatial transformation model parameters, namely the translation parameter m in the x direction, the translation parameter n in the y direction, and the rotation angle β.
2. The FREAK-based rapid positioning method for high-speed, high-density packaged components according to claim 1, characterized in that the key-point localization of step (1) proceeds as follows:
(1-1) input the image to be registered I(x, y) and the template image f(x, y), and form the integral image of each;
(1-2) build the fast Hessian matrix and construct the scale space from it; then use the integral images obtained in step (1-1) to obtain the three-dimensional scale-space response map;
(1-3) threshold the three-dimensional scale-space response map, keeping only pixels with a strong response; then apply non-maximum suppression to find candidate feature points; finally, interpolate each feature point over its neighboring pixels with a three-dimensional quadratic fit to obtain the key-point position.
3. The FREAK-based rapid positioning method for high-speed, high-density packaged components according to claim 2, characterized in that the integral image in step (1-1) is formed as follows: for a given image Q and a point (x, y) in Q, the value of the integral image at (x, y) is the sum of the pixel values of all pixels in the rectangular region spanned by the origin of Q and the pixel (x, y).
4. The FREAK-based rapid positioning method for high-speed, high-density packaged components according to claim 2, characterized in that the fast Hessian matrix in step (1-2) is built as follows:
for a point X = (x, y) in a given image, the Hessian matrix H(X, σ) of X at scale σ is defined as

$$H(X,\sigma)=\begin{pmatrix}L_{xx}(X,\sigma)&L_{xy}(X,\sigma)\\L_{xy}(X,\sigma)&L_{yy}(X,\sigma)\end{pmatrix};$$

where $L_{xx}(X,\sigma)$ is the convolution of the image at point X with the Gaussian second-order derivative $\partial^2 g(\sigma)/\partial x^2$, $L_{xy}(X,\sigma)$ the convolution with $\partial^2 g(\sigma)/\partial x\,\partial y$, and $L_{yy}(X,\sigma)$ the convolution with $\partial^2 g(\sigma)/\partial y^2$;
let $D_{xx}$, $D_{yy}$ and $D_{xy}$ be the responses of the box filters in the x, y and xy directions convolved with the image; the determinant of the Hessian is then approximated as

$$\det(H_{\text{approx}})=D_{xx}D_{yy}-(0.9\,D_{xy})^2.$$
5. The FREAK-based rapid positioning method for high-speed, high-density packaged components according to claim 1, characterized in that the feature description of step (2) proceeds as follows:
(2-1) smooth and denoise each sampling point in the key-point neighborhood;
(2-2) determine the main direction of the feature point: select the centrally symmetric receptive-field pairs to estimate the gradient, and take this gradient as the main direction of the feature point; let G be the set of comparison pairs used to compute the gradient; the main direction O of the feature point is computed as

$$O=\frac{1}{M}\sum_{P_o\in G}\big(I(P_o^{\gamma_1})-I(P_o^{\gamma_2})\big)\,\frac{P_o^{\gamma_1}-P_o^{\gamma_2}}{\lVert P_o^{\gamma_1}-P_o^{\gamma_2}\rVert};$$

where M is the number of pairs in the point set G, $P_o^{\gamma_1}$ and $P_o^{\gamma_2}$ are the two-dimensional image coordinates of the centers of the compared receptive fields, $I(P_o^{\gamma_1})$ and $I(P_o^{\gamma_2})$ are the corresponding grey values, and $\gamma_1$ and $\gamma_2$ denote the two receptive fields whose intensities are compared;
(2-3) rotate the feature-point neighborhood according to the main direction of the feature point and the retinal-model distribution, sample the rotated neighborhood with the retinal model, and generate the FREAK binary feature descriptor F as

$$F=\sum_{0\le\alpha<N}2^{\alpha}\,T(P_\alpha),\qquad T(P_\alpha)=\begin{cases}1,&I(P_\alpha^{\gamma_1})>I(P_\alpha^{\gamma_2})\\0,&\text{otherwise;}\end{cases}$$

where $P_\alpha$ is a pair of sampled receptive fields; N is the descriptor length, i.e. the number of receptive-field pairs (with M receptive fields in total, M(M-1)/2 candidate pairs are available); α is the left-shift (bit-position) value of the binary description; $T(P_\alpha)$ compares the image information of the pair $P_\alpha$, the image information used being the grey-level sum or the regional mean of the receptive field; and $\gamma_1$ and $\gamma_2$ denote the two receptive fields whose intensities are compared;
(2-4) from the existing receptive-field pairs, obtain the comparison pairs with high variance and low correlation, in the following steps:
(2-4-1) create a matrix D in which each row is the descriptor of one sample point;
(2-4-2) compute the mean of each column; the distance of the mean from 0.5 represents the variance of the column;
(2-4-3) sort the columns by this value in ascending order;
(2-4-4) keep the column with the smallest value (i.e. highest variance) and iteratively select from the remaining columns those with low correlation to the kept columns, until the FREAK feature descriptor reaches the predetermined dimension; this yields the FREAK feature vector.
6. The FREAK-based rapid positioning method for high-speed, high-density packaged components according to claim 5, characterized in that in step (2-1) each sampling point of the key-point neighborhood is smoothed and denoised with a different Gaussian kernel according to the retinal-model distribution, the size of the Gaussian kernel growing exponentially with the distance of the sampling point from the center.
7. the high-speed and high-density encapsulation components and parts method for rapidly positioning based on FREAK according to claim 1, is characterized in that, the step of described step (3) characteristic matching is as follows:
(3-1) adopt hamming distance as the similarity measurement of proper vector, carry out arest neighbors coupling, method is: first search represents the Expressive Features of front 16 bytes of fuzzy message, if matching distance is less than set threshold value, the unique point pair that obtains coupling, enters step (3-2);
(3-2) substituting each matched feature-point pair (x1, y1) and (x2, y2) into the following affine transformation:

\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} = \begin{pmatrix} m'' \\ n'' \end{pmatrix} + s \begin{pmatrix} \cos\beta'' & -\sin\beta'' \\ \sin\beta'' & \cos\beta'' \end{pmatrix} \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}

Solving by the least-squares method yields the transformation parameters (m″, n″, β″, s); averaging (m″, n″, β″) over all matched feature-point pairs gives the coarse transformation relation (m, n, β) between I(x, y) and f(x, y).
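The least-squares solve of step (3-2) can be sketched by linearizing the rotation with a = s·cosβ and b = s·sinβ, which makes the system linear in (m, n, a, b). This is an illustrative reconstruction under that standard substitution, not the patent's implementation.

```python
import numpy as np

def solve_similarity(pts1, pts2):
    """Least-squares fit of the transform
        x2 = m + s*cos(beta)*x1 - s*sin(beta)*y1
        y2 = n + s*sin(beta)*x1 + s*cos(beta)*y1
    linearized with a = s*cos(beta), b = s*sin(beta)."""
    A, rhs = [], []
    for (x1, y1), (x2, y2) in zip(pts1, pts2):
        A.append([1, 0, x1, -y1]); rhs.append(x2)  # x-equation row: [m, n, a, b]
        A.append([0, 1, y1,  x1]); rhs.append(y2)  # y-equation row
    m, n, a, b = np.linalg.lstsq(np.array(A, float), np.array(rhs, float),
                                 rcond=None)[0]
    s = float(np.hypot(a, b))        # scale recovered from a, b
    beta = float(np.arctan2(b, a))   # rotation angle recovered from a, b
    return m, n, beta, s
```

Each matched pair contributes two equations, so two or more non-degenerate matches overdetermine the four unknowns and the least-squares solution averages out matching noise.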
CN201310562520.7A 2013-11-12 2013-11-12 FREAK-based high-speed high-density packaging component rapid location method Pending CN103679193A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310562520.7A CN103679193A (en) 2013-11-12 2013-11-12 FREAK-based high-speed high-density packaging component rapid location method

Publications (1)

Publication Number Publication Date
CN103679193A true CN103679193A (en) 2014-03-26

Family

ID=50316681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310562520.7A Pending CN103679193A (en) 2013-11-12 2013-11-12 FREAK-based high-speed high-density packaging component rapid location method

Country Status (1)

Country Link
CN (1) CN103679193A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080013852A1 (en) * 2003-05-22 2008-01-17 Ge Medical Systems Global Technology Co., Llc. Systems and Methods for Optimized Region Growing Algorithm for Scale Space Analysis
CN102880866A (en) * 2012-09-29 2013-01-16 宁波大学 Method for extracting face features

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ALEXANDRE ALAHI ET AL.: "FREAK: Fast Retina Keypoint", 2012 IEEE Conference on Computer Vision and Pattern Recognition *
MAI QIAN: "Research on localization algorithms for novel surface-mount components based on registration", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108027248A (en) * 2015-09-04 2018-05-11 克朗设备公司 The industrial vehicle of positioning and navigation with feature based
CN106128037A (en) * 2016-07-05 2016-11-16 董超超 A kind of monitoring early-warning device for natural disaster
CN106980852A (en) * 2017-03-22 2017-07-25 嘉兴闻达信息科技有限公司 Based on Corner Detection and the medicine identifying system matched and its recognition methods
CN107256545A (en) * 2017-05-09 2017-10-17 华侨大学 A kind of broken hole flaw detection method of large circle machine
CN107369170A (en) * 2017-07-04 2017-11-21 云南师范大学 Image registration treating method and apparatus
CN109509166A (en) * 2017-09-15 2019-03-22 凌云光技术集团有限责任公司 Printed circuit board image detection method and device
CN108335306A (en) * 2018-02-28 2018-07-27 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN108335306B (en) * 2018-02-28 2021-05-18 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN109829489A (en) * 2019-01-18 2019-05-31 刘凯欣 A kind of cultural relic fragments recombination method and device based on multilayer feature
CN112386282A (en) * 2020-11-13 2021-02-23 声泰特(成都)科技有限公司 Ultrasonic automatic volume scanning imaging method and system
CN112386282B (en) * 2020-11-13 2022-08-26 声泰特(成都)科技有限公司 Ultrasonic automatic volume scanning imaging method and system
CN112464909A (en) * 2020-12-18 2021-03-09 杭州电子科技大学 Iris feature extraction method based on FREAK description

Similar Documents

Publication Publication Date Title
CN103679193A (en) FREAK-based high-speed high-density packaging component rapid location method
CN111028277B (en) SAR and optical remote sensing image registration method based on pseudo-twin convolution neural network
Mittal et al. Generalized projection-based M-estimator
CN106780557B (en) Moving object tracking method based on optical flow method and key point features
CN102661708B (en) High-density packaged element positioning method based on speeded up robust features (SURFs)
Pantilie et al. SORT-SGM: Subpixel optimized real-time semiglobal matching for intelligent vehicles
CN106355577A (en) Method and system for quickly matching images on basis of feature states and global consistency
CN104167003A (en) Method for fast registering remote-sensing image
CN109376744A (en) A kind of Image Feature Matching method and device that SURF and ORB is combined
CN102629330A (en) Rapid and high-precision matching method of depth image and color image
CN102169581A (en) Feature vector-based fast and high-precision robustness matching method
CN106056122A (en) KAZE feature point-based image region copying and pasting tampering detection method
CN102982561B (en) Method for detecting binary robust scale invariable feature of color of color image
CN105513094A (en) Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation
CN113763269A (en) Stereo matching method for binocular images
CN104050675A (en) Feature point matching method based on triangle description
Liu et al. Iterating tensor voting: A perceptual grouping approach for crack detection on EL images
CN111199558A (en) Image matching method based on deep learning
CN109872343B (en) Weak texture object posture tracking method, system and device
Kim et al. Multiscale feature extractors for stereo matching cost computation
CN117870659A (en) Visual inertial integrated navigation algorithm based on dotted line characteristics
CN117351078A (en) Target size and 6D gesture estimation method based on shape priori
CN106408029A (en) Image texture classification method based on structural difference histogram
CN102496022B (en) Effective feature point description I-BRIEF method
CN110348286B (en) Face fitting and matching method based on least square method

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140326