CN103700107B - A kind of characteristic point matching method based on the distribution of image sharpness - Google Patents

Info

Publication number: CN103700107B
Application number: CN201310732830.9A (CN, China)
Other versions: CN103700107A (Chinese-language publication)
Inventors: 赵玥, 苏剑波
Original assignee: Shanghai Jiaotong University
Current assignee: Shanghai Ling Technology Co Ltd
Legal status: Active
Prior art keywords: points, feature, image, feature point, matching

Landscapes

  • Image Analysis (AREA)

Abstract

In the feature point matching method based on image sharpness distribution provided by the invention, an edge detection operator is used to calculate the sharpness of image feature points, which defines the feature point descriptor. The normalized correlation coefficients of feature points in the two images are calculated to perform preliminary matching of feature points; erroneous feature point matches are then deleted, and the perspective projection matrix of the images is calculated to match all feature points, improving matching accuracy and matching efficiency.

Description

Feature point matching method based on image sharpness distribution
Technical Field
The invention relates to the technical field of image processing, in particular to a feature point matching method based on image sharpness distribution.
Background
Feature point matching has wide application in image processing and computer vision, for example in image registration, object recognition, and target tracking and detection. Existing feature point matching methods include the following:
The method based on Harris corner detection is inspired by the autocorrelation function in signal processing: it defines the associated autocorrelation matrix and matches feature points according to it. The Harris algorithm is stable and correctly matches most feature points, but a small number of mismatched pairs remain.
The method based on Susan corner detection has high matching accuracy and good noise robustness, but it uses a circular template to detect corners in an image; when the image window is large, acquiring the Susan area consumes a large amount of time, which lowers efficiency, and the method is not universally applicable.
To improve matching efficiency, Qian proposed a coarse-to-fine feature point matching method that first extracts feature points with multi-scale Harris corner detection, then generates initial matching pairs with a voting-strategy-based matching method, and matches the remaining feature points with a rigid body transformation model. However, this method still produces false matches.
Awrangjeb proposed an affine-length-parameterized corner detection algorithm that improves the curvature scale space; it computes optimal matching points through an iterative process and achieves a high matching rate, but requires a large amount of time.
The SIFT method uses a Gaussian difference scale space to generate scale- and rotation-invariant key points, gives each point a feature description, and matches feature points when the ratio of the nearest-neighbor to the next-nearest-neighbor Euclidean distance exceeds a certain threshold. SIFT is widely applicable and generates many feature matching points, but its feature dimension is high, its calculation speed is low, and its false matching rate is high.
Since the existing feature point matching methods above suffer from low matching rates, large time consumption, or lack of universal applicability, a feature point matching method with a high matching rate, fast calculation speed, and universal applicability remains a problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a feature point matching method based on image sharpness distribution, so as to solve the problems that existing feature point matching methods have a low matching rate, consume a large amount of computation time, and lack universal applicability, which result in low feature point matching accuracy and low matching efficiency.
In order to solve the above technical problem, the present invention provides a feature point matching method based on image sharpness distribution, comprising the steps of:
shooting two images of the same rigid body or the same approximate rigid body from different angles, and inputting the two images into a computer;
detecting image characteristic points by using an edge detection operator, and calculating the sharpness of the characteristic points;
defining a description method of the feature points based on the sharpness of the feature points;
calculating the normalized correlation coefficient of the feature points, and performing primary matching on the feature points;
deleting the wrong feature point matching;
selecting matched characteristic points, and calculating a perspective projection matrix of the image;
all feature points are matched using a perspective projection matrix.
Optionally, in the feature point matching method based on image sharpness distribution, the method for calculating sharpness of feature points includes:
take any feature point P_i on any edge contour line of the image as the center, and take k points forward and k points backward to obtain 2k+1 points forming the support region of P_i; connect P_i to P_{i-k} and to P_{i+k} to obtain the two support arms P_iP_{i-k} and P_iP_{i+k}; the angle between the two support arms is the support angle α_i; regarding P_i, P_{i-k}, P_{i+k} as three closely spaced points on a circular arc, |P_iP_{i-k}| = |P_iP_{i+k}|,

and the sharpness of the feature point P_i is calculated as

$$\mathrm{sharp}(P_i) = 1 - \sin(\alpha_i/2) = 1 - \frac{|P_{i-k}P_{i+k}|}{|P_iP_{i-k}| + |P_iP_{i+k}|};$$

the support angle α_i is used to calculate the sharpness of all feature points on the edge contour line; similarly, the sharpness of all feature points on all edge contour lines of the image is calculated.
Optionally, in the feature point matching method based on image sharpness distribution, before detecting the image feature points using an edge detection operator, the method further comprises: performing graying, depth search, and edge detection preprocessing operations on the image.
Optionally, in the feature point matching method based on image sharpness distribution, the defined feature point descriptor is a 6-dimensional feature vector L_g^q = (x_g, y_g, M_g^q, D_g^q, g, q), comprising: the abscissa x_g of the feature point, the ordinate y_g of the feature point, the mean M_g^q of the feature point's sharpness distribution, the variance D_g^q of the sharpness distribution, the number g of the feature point, and the number q of the edge contour line on which the feature point lies.
Optionally, in the feature point matching method based on image sharpness distribution, the method for calculating a normalized correlation coefficient of a feature point includes:
in step one, one feature point is taken from each of the two images, and their descriptors are

$$L_h^e = (x_h, y_h, M_h^e, D_h^e, h, e) \quad\text{and}\quad \bar{L}_l^f = (\bar{x}_l, \bar{y}_l, \bar{M}_l^f, \bar{D}_l^f, l, f);$$

in step two, the normalized correlation coefficient of the two feature points is calculated from their descriptors:

$$R_{hl} = \frac{\sum_{d=-s}^{s}\left(\mathrm{sharp}(P_{h+d}^{e}) - M_h^{e}\right)\left(\overline{\mathrm{sharp}}(\bar{P}_{l+d}^{f}) - \bar{M}_l^{f}\right)}{(2s+1)\sqrt{D_h^{e}\,\bar{D}_l^{f}}}.$$
optionally, in the feature point matching method based on image sharpness distribution, the larger the normalized correlation coefficient of two feature points, the better their degree of matching.
Optionally, in the feature point matching method based on image sharpness distribution, deleting erroneous feature point matches is implemented by rule one followed by rule two; wherein,
rule one uses a voting method to find edge contour lines in the two images that are matched by at most one feature point pair and deletes those pairs: select one of the two images, traverse all edge contour lines in that image, add 1 to an edge contour line's vote count whenever a feature point on it is successfully matched, finally count the votes of all edge contour lines in the image, and delete the matched feature point pairs corresponding to edge contour lines whose vote count is less than or equal to 1;
rule two sorts the normalized correlation coefficients of all feature points remaining after rule one, selects the two feature point pairs with the largest correlation coefficients as reference point pairs, calculates in turn the distance ratios rd_1 and rd_2 between the point to be determined and the two reference points, calculates the ratio rd of rd_1 to rd_2, judges the magnitude relation between |rd - 1| and σ, and deletes feature point pairs for which |rd - 1| exceeds σ.
Optionally, in the feature point matching method based on image sharpness distribution, the method for calculating a perspective projection matrix of an image includes:
sort the normalized correlation coefficients of all feature points remaining after the erroneous feature point matches have been deleted, and select the feature point pairs with the largest correlation coefficients, at least 4 pairs;
calculate the perspective projection matrix H of the two images using the feature point pairs selected in the previous step;
wherein the perspective projection matrix H gives the relationship between the coordinates (x_t, y_t) of a matched feature point and the coordinates (x̄_t, ȳ_t) of the feature point to be matched:

$$\begin{pmatrix} x_t \\ y_t \\ 1 \end{pmatrix} = \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & h_9 \end{pmatrix}\begin{pmatrix} \bar{x}_t \\ \bar{y}_t \\ 1 \end{pmatrix} = H\begin{pmatrix} \bar{x}_t \\ \bar{y}_t \\ 1 \end{pmatrix}.$$

Optionally, in the feature point matching method based on image sharpness distribution, from the relationship given by the perspective projection matrix H between the coordinates (x_t, y_t) of a matched feature point and the coordinates (x̄_t, ȳ_t) of the feature point to be matched, the estimated coordinates (x̂_t, ŷ_t) of a feature point of the matching image on the image to be matched are calculated as

$$\hat{x}_t = \frac{(h_2 - h_8 x_t)(y_t - h_6) - (h_5 - h_8 y_t)(x_t - h_3)}{(h_2 - h_8 x_t)(h_4 - h_7 y_t) - (h_5 - h_8 y_t)(h_1 - h_7 x_t)},\qquad
\hat{y}_t = \frac{(h_1 - h_7 x_t)(y_t - h_6) - (h_4 - h_7 y_t)(x_t - h_3)}{(h_1 - h_7 x_t)(h_5 - h_8 y_t) - (h_4 - h_7 y_t)(h_2 - h_8 x_t)};$$

the coordinates (x̄_t, ȳ_t) of the feature point to be matched are searched for near the calculated estimated coordinates (x̂_t, ŷ_t) on the image to be matched, thereby matching all feature points of the matching image and the image to be matched.
In the feature point matching method based on image sharpness distribution, an edge detection operator is used to calculate the sharpness of image feature points and thereby define the feature point descriptor; the normalized correlation coefficients of feature points in the two images are calculated, preliminary matching is performed, erroneous feature point matches are deleted, and the perspective projection matrix of the images is calculated to match all feature points, which improves matching accuracy and matching efficiency.
Drawings
FIG. 1 is a flow chart of a feature point matching method based on image sharpness distribution of the present invention;
FIG. 2 is a diagram illustrating the effect of the present invention after the edge detection operator processing is performed on the image;
FIG. 3 is a schematic diagram of the principle of calculating feature point sharpness of the present invention;
FIG. 4 is a schematic diagram of rule two used in deleting erroneous feature point matches according to the present invention.
Detailed Description
The feature point matching method based on the image sharpness distribution according to the present invention will be described in further detail with reference to the accompanying drawings and specific embodiments. Advantages and features of the present invention will become apparent from the following description and from the claims. It is to be noted that the drawings are in a very simplified form and are not to precise scale, which is merely for the purpose of facilitating and distinctly claiming the embodiments of the present invention.
Please refer to fig. 1 to 4, which are schematic diagrams related to the present invention. As shown in fig. 1, the feature point matching method based on the image sharpness distribution according to the present invention includes the steps of:
step S1, shooting two images of the same rigid body or the same approximate rigid body from different angles, and inputting the images into a computer; the rigid body is an ideal physical model which keeps the shape and the size of an object unchanged and keeps the relative position of each part in the object constant under the action of external force; the approximate rigid body is that the shape and size change range of the object is smaller relative to the size of the object, and the relative position change range of each part in the object is smaller relative to the whole object, such as the face of a human body.
Step S2, detecting image characteristic points by using an edge detection operator, and calculating the sharpness of the characteristic points;
read the two images input into the computer, and apply the following steps to both in sequence using the edge detection operator: graying, depth search, edge detection, feature point detection, and so on.
The preferred embodiment is as follows:
please refer to fig. 2, which is an effect diagram of the image after the edge detection operator processing according to the present invention, wherein the sharpness of the feature point is calculated according to each edge contour line in fig. 2. Referring to fig. 3, an arbitrary feature point P on an arbitrary edge contour line of an image is usediTaking k points forward and backward respectively for the center to obtain 2k +1 points to form a characteristic point PiA support region of (a); from the feature point PiTo the feature point Pi-kAnd a feature point Pi+kConnecting to obtain two supporting arms PiPi-k、PiPi+kThe two support arms PiPi-k、PiPi+kThe included angle between the two is a supporting angle αi(ii) a Characteristic point Pi,Pi-k,Pi+kThree points which are considered to be closely spaced on a section of circular arc, then | PiPi-k|=|PiPi+k|,Calculating a feature point PiSharpness ofBy support angle αiCalculating the sharpness of all the characteristic points on the edge contour line; and similarly, calculating the sharpness of all the feature points on all the edge contour lines on the image.
Step S3, defining a description method of the feature points based on the sharpness of the feature points;
specifically, the defined feature point descriptor is a 6-dimensional feature vector L_g^q = (x_g, y_g, M_g^q, D_g^q, g, q), comprising: the abscissa x_g of the feature point, the ordinate y_g of the feature point, the mean M_g^q of the feature point's sharpness distribution, the variance D_g^q of the sharpness distribution, the number g of the feature point, and the number q of the edge contour line on which it lies. The feature point P_g^q denotes feature point number g on the edge contour line of number q. Selecting s points forward and s points backward around P_g^q as the center, the sharpness distribution is

$$LSD_g^q = \left(\mathrm{sharp}(P_{g-s}^q), \ldots, \mathrm{sharp}(P_g^q), \ldots, \mathrm{sharp}(P_{g+s}^q)\right);$$

the mean of the sharpness distribution is

$$M_g^q = \sum_{d=-s}^{s}\mathrm{sharp}(P_{g+d}^q)\,/\,(2s+1);$$

and the variance is

$$D_g^q = \sum_{d=-s}^{s}\left(\mathrm{sharp}(P_{g+d}^q) - M_g^q\right)^2/\,(2s+1).$$
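Given a per-point sharpness function, the 6-dimensional descriptor can be assembled as follows; this is a sketch under the assumption that contours are lists of (x, y) points indexed first by contour number and then by point number, with all names hypothetical.

```python
def descriptor(contours, q, g, s, sharp):
    """Build L_g^q = (x_g, y_g, M_g^q, D_g^q, g, q) for feature point number g
    on edge contour line number q; 'sharp' maps (q, index) to a sharpness value."""
    x, y = contours[q][g]
    window = [sharp(q, g + d) for d in range(-s, s + 1)]      # LSD_g^q
    mean = sum(window) / (2 * s + 1)                          # M_g^q
    var = sum((v - mean) ** 2 for v in window) / (2 * s + 1)  # D_g^q
    return (x, y, mean, var, g, q)
```

The mean and variance summarize the sharpness profile of the support neighbourhood, so the descriptor is invariant to the point's position in the image apart from its stored coordinates.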
step S4, calculating the normalized correlation coefficient of the feature points, and performing preliminary matching of the feature points;
specifically, the method for calculating the normalized correlation coefficient of the feature point includes:
in step one, one feature point is taken from each of the two images, and their descriptors are

$$L_h^e = (x_h, y_h, M_h^e, D_h^e, h, e) \quad\text{and}\quad \bar{L}_l^f = (\bar{x}_l, \bar{y}_l, \bar{M}_l^f, \bar{D}_l^f, l, f);$$

in step two, the normalized correlation coefficient of the two feature points is calculated from their descriptors:

$$R_{hl} = \frac{\sum_{d=-s}^{s}\left(\mathrm{sharp}(P_{h+d}^{e}) - M_h^{e}\right)\left(\overline{\mathrm{sharp}}(\bar{P}_{l+d}^{f}) - \bar{M}_l^{f}\right)}{(2s+1)\sqrt{D_h^{e}\,\bar{D}_l^{f}}}.$$
the preliminary matching of feature points proceeds as follows: the normalized correlation coefficients of all feature points in the two images are calculated by the above method, and the larger the normalized correlation coefficient of two feature points, the better their degree of matching. Specifically, for each feature point in the first image, the point in the second image with the minimum normalized correlation coefficient with respect to that feature point is removed (the two feature points are considered unsuccessfully matched), thereby achieving preliminary matching of all feature points in the two images.
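The correlation computation can be sketched as below. The denominator is written with a square root of the two variances so that R_hl behaves like a Pearson correlation coefficient, which is an assumption about the intended normalization; the function and variable names are illustrative.

```python
import math

def ncc(win1, mean1, var1, win2, mean2, var2):
    """Normalized correlation coefficient R_hl of two sharpness windows,
    each of length 2s + 1, given their precomputed means and variances."""
    n = len(win1)  # n = 2s + 1
    num = sum((a - mean1) * (b - mean2) for a, b in zip(win1, win2))
    return num / (n * math.sqrt(var1 * var2))
```

Identical sharpness windows give R_hl = 1, the best possible matching degree, and anti-correlated windows give R_hl = -1.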
Step S5, deleting the matching of the wrong feature points;
the matching of the characteristic points with the deletion errors is realized through a rule I and a rule II which is carried out after the rule I; wherein:
the first rule is to find out, through a voting method, a feature point pair in which two edges in two images are less than or equal to 1 pair of feature points, and delete the feature point pair: selecting one of the two images, traversing all edge contour lines in the image, adding 1 to the vote number of the edge contour line if one feature point in the edge contour line is successfully matched, finally counting the vote numbers of all the edge contour lines in the image, and deleting the matched feature point pairs corresponding to the edge contour lines of which the vote number is less than or equal to 1.
And the second rule is used for sequencing the normalized correlation coefficients of all the characteristic points after the first rule, selecting two characteristic point pairs with the maximum correlation coefficients as reference point pairs, and sequentially calculating the distance rd between the point to be determined and the two reference points1And rd2Then calculate rd1And rd2And (3) judging the magnitude relation between rd-1 and sigma, and deleting characteristic point pairs with rd-1 exceeding sigma.
The preferred embodiment is as follows:
referring to fig. 4, the normalized correlation coefficients of all feature points remaining after rule one are sorted, and the two feature point pairs with the largest correlation coefficients, (P_m, P̄_m) and (P_n, P̄_n), are selected as reference point pairs; the feature point pair (P_o, P̄_o) is the pair to be determined.
In the first step, calculate the distance ratio between the point P_o and the first reference point: rd_1 = |P_oP_m| / |P̄_oP̄_m|;
in the second step, calculate the distance ratio between the point P_o and the second reference point: rd_2 = |P_oP_n| / |P̄_oP̄_n|;
in the third step, calculate the ratio of these two values:

$$rd = \frac{rd_1}{rd_2} = \frac{\sqrt{(x_m - x_o)^2 + (y_m - y_o)^2}\cdot\sqrt{(\bar{x}_n - \bar{x}_o)^2 + (\bar{y}_n - \bar{y}_o)^2}}{\sqrt{(x_n - x_o)^2 + (y_n - y_o)^2}\cdot\sqrt{(\bar{x}_m - \bar{x}_o)^2 + (\bar{y}_m - \bar{y}_o)^2}};$$

in the fourth step, define the distance between the ratio rd and 1 as |rd - 1|, judge the magnitude relation between |rd - 1| and σ, and delete feature point pairs for which |rd - 1| exceeds σ, considering the match unsuccessful.
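Both deletion rules can be sketched together. This is a minimal illustration assuming each match is a tuple of a point in the first image, its counterpart in the second image, and their correlation coefficient, with a separate mapping from points to edge contour line numbers; all names are hypothetical.

```python
import math
from collections import defaultdict

def rule_one(matches, contour_of):
    """Voting: delete matched pairs whose edge contour line collected at most
    one vote (one vote per successfully matched feature point on the line)."""
    votes = defaultdict(int)
    for p, _, _ in matches:
        votes[contour_of[p]] += 1
    return [m for m in matches if votes[contour_of[m[0]]] > 1]

def rule_two(matches, sigma):
    """Distance-ratio test: the two pairs with the largest correlation
    coefficients serve as references; delete pairs whose ratio rd = rd1 / rd2
    deviates from 1 by more than sigma."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    matches = sorted(matches, key=lambda m: m[2], reverse=True)
    (m1, m1b, _), (m2, m2b, _) = matches[0], matches[1]
    kept = matches[:2]
    for o, ob, r in matches[2:]:
        rd1 = dist(o, m1) / dist(ob, m1b)   # ratio to reference pair 1
        rd2 = dist(o, m2) / dist(ob, m2b)   # ratio to reference pair 2
        if abs(rd1 / rd2 - 1.0) <= sigma:   # keep only |rd - 1| <= sigma
            kept.append((o, ob, r))
    return kept
```

Under a consistent cross-image mapping the distance ratios to both references agree, so rd stays close to 1; a mismatched pair drags rd away from 1 and is deleted.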
S6, selecting matched characteristic points, and calculating a perspective projection matrix of the image;
specifically, the method for calculating the perspective projection matrix of the image is as follows: sort the normalized correlation coefficients of all feature points remaining after the erroneous feature point matches have been deleted, and select the feature point pairs with the largest correlation coefficients, at least 4 pairs; calculate the perspective projection matrix H of the two images using the selected feature point pairs;
wherein the calculated perspective projection matrix H gives the relationship between the coordinates (x_t, y_t) of a matched feature point and the coordinates (x̄_t, ȳ_t) of the feature point to be matched:

$$\begin{pmatrix} x_t \\ y_t \\ 1 \end{pmatrix} = \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & h_9 \end{pmatrix}\begin{pmatrix} \bar{x}_t \\ \bar{y}_t \\ 1 \end{pmatrix} = H\begin{pmatrix} \bar{x}_t \\ \bar{y}_t \\ 1 \end{pmatrix}.$$
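With at least four selected pairs, H can be estimated; the sketch below uses the standard direct linear transform (DLT) solved by SVD, which is one common way to compute a perspective projection matrix and is not the patent's own stated solver. It assumes NumPy is available and normalizes so that h_9 = 1, consistent with the closed-form inversion used in step S7.

```python
import numpy as np

def estimate_homography(barred, matched):
    """Estimate H such that [x_t, y_t, 1]^T ~ H [x̄_t, ȳ_t, 1]^T from >= 4
    point pairs: 'barred' holds (x̄, ȳ) points of the image to be matched,
    'matched' the corresponding (x, y) points of the matching image."""
    rows = []
    for (xb, yb), (x, y) in zip(barred, matched):
        # Two linear constraints per correspondence on the 9 entries of H
        rows.append([xb, yb, 1, 0, 0, 0, -x * xb, -x * yb, -x])
        rows.append([0, 0, 0, xb, yb, 1, -y * xb, -y * yb, -y])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1]                       # right singular vector of the smallest singular value
    return h.reshape(3, 3) / h[-1]   # rescale so that h9 = 1
```

In practice the pairs with the largest correlation coefficients would be fed in; with exactly four non-degenerate pairs the system is determined up to scale, and more pairs give a least-squares solution.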
and step S7, matching all the characteristic points by using the perspective projection matrix.
Specifically, from the relationship given by the perspective projection matrix H between the coordinates (x_t, y_t) of a matched feature point and the coordinates (x̄_t, ȳ_t) of the feature point to be matched, the estimated coordinates (x̂_t, ŷ_t) of a feature point of the matching image on the image to be matched are calculated as

$$\hat{x}_t = \frac{(h_2 - h_8 x_t)(y_t - h_6) - (h_5 - h_8 y_t)(x_t - h_3)}{(h_2 - h_8 x_t)(h_4 - h_7 y_t) - (h_5 - h_8 y_t)(h_1 - h_7 x_t)},\qquad
\hat{y}_t = \frac{(h_1 - h_7 x_t)(y_t - h_6) - (h_4 - h_7 y_t)(x_t - h_3)}{(h_1 - h_7 x_t)(h_5 - h_8 y_t) - (h_4 - h_7 y_t)(h_2 - h_8 x_t)};$$

wherein the perspective projection matrix H was calculated in step S6. Substituting the coordinates (x_t, y_t) of each matched feature point on the matching image yields its estimated coordinates (x̂_t, ŷ_t) on the image to be matched; the coordinates (x̄_t, ȳ_t) of the feature point to be matched are then searched for near the estimated coordinates on the image to be matched, thereby matching all feature points of the matching image and the image to be matched.
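Step S7's closed-form estimate can be written directly from the formulas above; this is a sketch assuming H is a 3x3 nested list (or array) scaled so that h_9 = 1, which the formulas require, with the function name hypothetical.

```python
def estimate_coords(H, xt, yt):
    """Estimated coordinates (x̂_t, ŷ_t) on the image to be matched of the
    matched point (x_t, y_t); algebraically this applies H^{-1} and
    dehomogenises, assuming H is scaled so that h9 = 1."""
    h1, h2, h3, h4, h5, h6, h7, h8, h9 = [v for row in H for v in row]
    x_hat = ((h2 - h8 * xt) * (yt - h6) - (h5 - h8 * yt) * (xt - h3)) / (
            (h2 - h8 * xt) * (h4 - h7 * yt) - (h5 - h8 * yt) * (h1 - h7 * xt))
    y_hat = ((h1 - h7 * xt) * (yt - h6) - (h4 - h7 * yt) * (xt - h3)) / (
            (h1 - h7 * xt) * (h5 - h8 * yt) - (h4 - h7 * yt) * (h2 - h8 * xt))
    return x_hat, y_hat
```

A nearest-neighbour search around (x̂_t, ŷ_t) then selects the actual feature point to be matched, as the description states.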
In summary, in the feature point matching method based on image sharpness distribution provided by the embodiment of the present invention, an edge detection operator is used to calculate the sharpness of image feature points and thereby define the feature point descriptor; by calculating the normalized correlation coefficients of feature points in the two images and then sequentially performing preliminary matching, deleting erroneous feature point matches, and calculating the perspective projection matrix of the images, all feature points are matched, improving matching accuracy and matching efficiency.
The above description only concerns preferred embodiments of the present invention and does not limit its scope; any variations and modifications made by those skilled in the art based on the above disclosure fall within the scope of the appended claims.

Claims (2)

1. A feature point matching method based on image sharpness distribution is characterized by comprising the following steps:
shooting two images of the same rigid body or the same approximate rigid body from different angles, and inputting the two images into a computer;
detecting image characteristic points by using an edge detection operator, and calculating the sharpness of the characteristic points;
defining a description method of the feature points based on the sharpness of the feature points;
calculating the normalized correlation coefficient of the feature points, and performing primary matching on the feature points;
deleting the wrong feature point matching;
selecting matched characteristic points, and calculating a perspective projection matrix of the image;
matching all the characteristic points by using a perspective projection matrix; wherein,
the method for calculating the sharpness of the feature points comprises the following steps:
take any feature point P_i on any edge contour line of the image as the center, and take k points forward and k points backward to obtain 2k+1 points forming the support region of P_i; connect P_i to P_{i-k} and to P_{i+k} to obtain the two support arms P_iP_{i-k} and P_iP_{i+k}; the angle between the two support arms is the support angle α_i; regarding P_i, P_{i-k}, P_{i+k} as three closely spaced points on a circular arc, |P_iP_{i-k}| = |P_iP_{i+k}|,

and the sharpness of the feature point P_i is calculated as

$$\mathrm{sharp}(P_i) = 1 - \sin(\alpha_i/2) = 1 - \frac{|P_{i-k}P_{i+k}|}{|P_iP_{i-k}| + |P_iP_{i+k}|};$$

the support angle α_i is used to calculate the sharpness of all feature points on the edge contour line; similarly, the sharpness of all feature points on all edge contour lines of the image is calculated;
the defined feature point descriptor is a 6-dimensional feature vector L_g^q = (x_g, y_g, M_g^q, D_g^q, g, q), comprising: the abscissa x_g of the feature point, the ordinate y_g of the feature point, the mean M_g^q of the feature point's sharpness distribution, the variance D_g^q of the sharpness distribution, the number g of the feature point, and the number q of the edge contour line on which the feature point lies;
the method for calculating the normalized correlation coefficient of the feature point comprises the following steps:
in step one, one feature point is taken from each of the two images, and their descriptors are

$$L_h^e = (x_h, y_h, M_h^e, D_h^e, h, e) \quad\text{and}\quad \bar{L}_l^f = (\bar{x}_l, \bar{y}_l, \bar{M}_l^f, \bar{D}_l^f, l, f);$$

in step two, the normalized correlation coefficient of the two feature points is calculated from their descriptors:

$$R_{hl} = \frac{\sum_{d=-s}^{s}\left(\mathrm{sharp}(P_{h+d}^{e}) - M_h^{e}\right)\left(\overline{\mathrm{sharp}}(\bar{P}_{l+d}^{f}) - \bar{M}_l^{f}\right)}{(2s+1)\sqrt{D_h^{e}\,\bar{D}_l^{f}}};$$
the larger the normalized correlation coefficient of two feature points, the better their degree of matching; the preliminary matching of feature points proceeds as follows: for each feature point in the first image, the point in the second image with the minimum normalized correlation coefficient with respect to that feature point is removed, thereby achieving preliminary matching of all feature points in the two images;
deleting erroneous feature point matches is implemented by rule one followed by rule two; wherein,
rule one uses a voting method to find edge contour lines in the two images that are matched by at most one feature point pair and deletes those pairs: select one of the two images, traverse all edge contour lines in that image, add 1 to an edge contour line's vote count whenever a feature point on it is successfully matched, finally count the votes of all edge contour lines in the image, and delete the matched feature point pairs corresponding to edge contour lines whose vote count is less than or equal to 1;
rule two sorts the normalized correlation coefficients of all feature points remaining after rule one, selects the two feature point pairs with the largest correlation coefficients as reference point pairs, calculates in turn the distance ratios rd_1 and rd_2 between the point to be determined and the two reference points, calculates the ratio rd of rd_1 to rd_2, judges the magnitude relation between |rd - 1| and σ, and deletes feature point pairs for which |rd - 1| exceeds σ;
the method for calculating the perspective projection matrix of the image comprises the following steps:
sort the normalized correlation coefficients of all feature points remaining after the erroneous feature point matches have been deleted, and select the feature point pairs with the largest correlation coefficients, at least 4 pairs;
calculating a perspective projection matrix H of the two images by using the characteristic point pairs selected in the previous step;
wherein the perspective projection matrix H gives the relationship between the coordinates (x_t, y_t) of a matched feature point and the coordinates (x̄_t, ȳ_t) of the feature point to be matched:

$$\begin{pmatrix} x_t \\ y_t \\ 1 \end{pmatrix} = \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & h_9 \end{pmatrix}\begin{pmatrix} \bar{x}_t \\ \bar{y}_t \\ 1 \end{pmatrix} = H\begin{pmatrix} \bar{x}_t \\ \bar{y}_t \\ 1 \end{pmatrix};$$
from the relationship given by the perspective projection matrix H between the coordinates (x_t, y_t) of a matched feature point and the coordinates (x̄_t, ȳ_t) of the feature point to be matched, the estimated coordinates (x̂_t, ŷ_t) of a feature point of the matching image on the image to be matched are calculated as

$$\hat{x}_t = \frac{(h_2 - h_8 x_t)(y_t - h_6) - (h_5 - h_8 y_t)(x_t - h_3)}{(h_2 - h_8 x_t)(h_4 - h_7 y_t) - (h_5 - h_8 y_t)(h_1 - h_7 x_t)},\qquad
\hat{y}_t = \frac{(h_1 - h_7 x_t)(y_t - h_6) - (h_4 - h_7 y_t)(x_t - h_3)}{(h_1 - h_7 x_t)(h_5 - h_8 y_t) - (h_4 - h_7 y_t)(h_2 - h_8 x_t)};$$

the coordinates (x̄_t, ȳ_t) of the feature point to be matched are searched for near the calculated estimated coordinates (x̂_t, ŷ_t) on the image to be matched, thereby matching all feature points of the matching image and the image to be matched.
2. The feature point matching method based on image sharpness distribution of claim 1, wherein before detecting the image feature points using the edge detection operator, the method further comprises: performing graying, depth search, and edge detection preprocessing operations on the image.
CN201310732830.9A 2013-12-26 2013-12-26 A kind of characteristic point matching method based on the distribution of image sharpness Active CN103700107B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310732830.9A CN103700107B (en) 2013-12-26 2013-12-26 A kind of characteristic point matching method based on the distribution of image sharpness

Publications (2)

Publication Number Publication Date
CN103700107A CN103700107A (en) 2014-04-02
CN103700107B true CN103700107B (en) 2016-04-20

Family

ID=50361625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310732830.9A Active CN103700107B (en) 2013-12-26 2013-12-26 A kind of characteristic point matching method based on the distribution of image sharpness

Country Status (1)

Country Link
CN (1) CN103700107B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825526B (en) * 2016-03-22 2018-09-07 辽宁师范大学 Video multi-scale geometric analysis method based on video data 3D autocorrelations
CN106295683A (en) * 2016-08-01 2017-01-04 上海理工大学 A kind of outlier detection method of time series data based on sharpness
CN106845494B (en) * 2016-12-22 2019-12-13 歌尔科技有限公司 Method and device for detecting contour corner points in image
CN108399627B (en) * 2018-03-23 2020-09-29 云南大学 Video inter-frame target motion estimation method and device and implementation device
CN110070564B (en) * 2019-05-08 2021-05-11 广州市百果园信息技术有限公司 Feature point matching method, device, equipment and storage medium
CN111124113A (en) * 2019-12-12 2020-05-08 厦门厦华科技有限公司 Application starting method based on contour information and electronic whiteboard

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887586A (en) * 2010-07-30 2010-11-17 上海交通大学 Self-adaptive angular-point detection method based on image contour sharpness
CN102982537A (en) * 2012-11-05 2013-03-20 安维思电子科技(广州)有限公司 Scene change detection method and scene change detection system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jianli Xiao et al., "Adaptive algorithm for corner detecting based on the degree of sharpness of the contour", Optical Engineering, vol. 50, no. 4, 30 April 2011, full text *

Also Published As

Publication number Publication date
CN103700107A (en) 2014-04-02

Similar Documents

Publication Publication Date Title
CN103700107B (en) A kind of characteristic point matching method based on the distribution of image sharpness
CN105844669B (en) A kind of video object method for real time tracking based on local Hash feature
Mittal et al. Generalized projection-based M-estimator
CN103136520B (en) The form fit of Based PC A-SC algorithm and target identification method
CN113012212A (en) Depth information fusion-based indoor scene three-dimensional point cloud reconstruction method and system
CN102750537B (en) Automatic registering method of high accuracy images
EP3690700A1 (en) Image similarity calculation method and device, and storage medium
US9092697B2 (en) Image recognition system and method for identifying similarities in different images
CN111815686B (en) Geometric feature-based coarse-to-fine point cloud registration method
CN103632129A (en) Facial feature point positioning method and device
CN104102904B (en) A kind of static gesture identification method
CN106897990B (en) The character defect inspection method of tire-mold
CN108550166B (en) Spatial target image matching method
CN110222661B (en) Feature extraction method for moving target identification and tracking
Rahman et al. Human ear recognition using geometric features
CN110472651B (en) Target matching and positioning method based on edge point local characteristic value
CN114358166B (en) Multi-target positioning method based on self-adaptive k-means clustering
CN106295710A (en) Image local feature matching process, device and terminal of based on non-geometric constraint
CN117132630A (en) Point cloud registration method based on second-order spatial compatibility measurement
Shi et al. Robust image registration using structure features
CN109840529A (en) A kind of image matching method based on local sensitivity confidence level estimation
US7295707B2 (en) Method for aligning gesture features of image
CN102679871B (en) Rapid detection method of sub-pixel precision industrial object
CN111104922A (en) Feature matching algorithm based on ordered sampling
CN105631896A (en) Hybrid classifier decision-based compressed sensing tracking method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20180723

Address after: 1st floor, No. 977, Shanghai Feng Road, Pudong New Area, Shanghai, 201201

Patentee after: Shanghai Ling Technology Co., Ltd.

Address before: 200240 No. 800, Dongchuan Road, Shanghai, Minhang District

Patentee before: Shanghai Jiao Tong University