CN110490913B - Image matching method based on feature description operator of corner and single line segment grouping - Google Patents
- Publication number: CN110490913B
- Application: CN201910660833.3A
- Authority: CN (China)
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
Abstract
The invention discloses an image matching method based on a feature description operator composed of corner points and single line segments. First, line segments and Harris corners are extracted from the image, the line segments are searched and grouped, and a corner-single-line-segment texture descriptor with scale, rotation and illumination invariance is constructed between each detected corner and line segment; Harris corners provide rotation invariance, and a half-width descriptor improves the reliability of line segments in scenes with parallax change. A spatially weighted shortest-distance measure then yields a local matching result. Finally, candidate matches are established for each line segment, a matching matrix is built, and the global matching result is solved by spectral analysis. The image matching description operator is invariant to scale, rotation and illumination; image pyramids are built separately for the stereo images and the corresponding pyramid levels are matched one by one, which removes the influence of scale; and the method avoids the heavy grouping computation and long runtime of multi-line-segment matching.
Description
Technical Field
The invention belongs to the technical field of remote sensing image recognition and computer vision and relates to an image matching method, in particular to an image matching method based on a feature description operator formed by grouping Harris corners with single line segments, combined with geometric constraints and a pyramid transfer strategy, which can match feature points and feature lines at the same time.
Background
Image matching technology is widely applied to three-dimensional reconstruction, image retrieval, target tracking, military reconnaissance and the like, and has important application value in military, medical and ecological-environment monitoring. Although current feature matching algorithms have achieved remarkable results for point features, matching based on linear features still suffers from several problems under the influence of illumination, noise, occlusion and similar factors: 1) line-segment features lack saliency, so saliency-based operators such as SIFT deliberately avoid points lying on line segments; 2) line-segment matching based on geometric constraints is also problematic: end points extracted directly by epipolar-constraint matching are not accurate enough, and either the viewing-angle change must be small or the geometric relationship between the images must be known in advance; 3) matching methods based on line-segment grouping generally involve matching relations among several internal line segments and construct many possible feature groupings, so the computation is complex and time-consuming; 4) a typical line-segment descriptor builds buffer regions on both sides of the segment and statistically describes the texture of the whole area, but owing to camera view-angle changes the texture may be stable on only one side of the segment while the other side changes greatly, leading to matching uncertainty.
Harris corners are used for edge and corner detection. They are usually located at the intersections of edges, are robust to changes of the shooting viewpoint, possess rotation invariance, and, since only first derivatives of the image are used, are invariant to shifts of image gray level. These properties make them suitable as matching feature points. Moreover, Harris corners and line segments are close in image position: the end points of a complete, accurately located line segment are very likely Harris corners. A matching description operator that groups a Harris corner with a single line segment can therefore filter out unreasonable line-segment groups while overcoming rotation and gray-level change, and, compared with multi-line-segment grouping algorithms, takes less time and gives better results.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image matching method, based on a description operator that groups corner points with single line segments, which is efficient in time and storage and stable under image rotation, translation and scale transformation.
The technical scheme adopted by the invention is as follows: a method for matching images based on feature description operators grouped by corners and single line segments is characterized by comprising the following steps:
step 1: inputting a reference image and an image to be matched, constructing a multi-level Gaussian image pyramid, performing layer-by-layer down-sampling on the image, performing step 2-4 on the reference image and the image to be matched of each layer, and calculating an optimal matching scale;
step 2: respectively extracting a straight line segment and a Harris angular point from a reference image and an image to be matched;
searching Harris corners in a specified range for the end points of each extracted straight line segment, grouping the single straight line segments and the nearest corners in a correlated manner, and combining feature vectors of the corners and spectral information descriptors of the straight line segments to form corner-line segment half-width texture feature descriptors;
Step 3: matching point-line features of the reference image and the image to be matched according to the corner-line-segment half-width texture feature descriptor to obtain a candidate matching set;
Step 4: calculating the similarity of candidate matches according to the geometric relation between the images, including the distance between image corner points and the similarity of the line-segment descriptors; screening candidate matches for the groupings of the reference image and the image to be matched, and establishing the candidate matches of each line segment and the candidate matching matrix M; solving the established matching matrix M by spectral analysis and deciding whether each candidate match is accepted or rejected;
Step 5: outputting the stereo image pair corresponding to the optimal matching scale, i.e. the optimal scales of the reference image and the image to be matched and the corner-single-line-segment matching result at those scales.
The beneficial effects of the image matching technology provided by the invention are: (1) aiming at the abundant line features of remote sensing images, corner points and single line segments are grouped and a corner-line-segment descriptor is constructed; the texture features of the line segments and the geometric relation between the segments and their corresponding corners are fully used to screen candidate matches, which effectively reduces the time complexity of the algorithm and improves the reliability of the matching result; (2) an image pyramid is established for each image, and matching the pyramid levels one by one removes the influence of scale; (3) image rotation and scaling are eliminated during matching, so the whole algorithm is rotation- and scale-invariant, and since pixel gray values are not directly involved in matching, the method is also robust to changes of image brightness.
Drawings
FIG. 1 is a flow chart of an embodiment of the invention;
FIG. 2 is a schematic diagram of association between a corner and a line segment according to an embodiment of the present invention;
FIG. 3 is a half-width line segment descriptor diagram of an embodiment of the present invention;
FIG. 4 is a schematic diagram of the Harris corners and line segments after grouping having rotational invariance, in accordance with an embodiment of the present invention.
Detailed Description
To facilitate understanding and implementation by those of ordinary skill in the art, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the embodiments described here serve only to illustrate and explain the invention and are not intended to limit it.
Referring to fig. 1, the method for image matching based on feature description operators of corner and single line grouping provided by the present invention includes the following steps:
Step 1: inputting a reference image and an image to be matched, constructing a multi-level Gaussian image pyramid based on Gaussian down-sampling (other algorithms such as wavelet decomposition may also be used), down-sampling the image layer by layer, performing steps 2-4 on the reference image and the image to be matched at each layer, and calculating the optimal matching scale;
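As an illustration of step 1, the pyramid construction can be sketched as follows. The 5-tap binomial kernel, the number of levels, and the function name are illustrative choices, not details fixed by the patent.

```python
import numpy as np

def gaussian_pyramid(img, levels=3):
    """Build a Gaussian pyramid by blur-then-halve down-sampling.

    Minimal sketch of step 1; a separable 5-tap binomial kernel
    approximates the Gaussian filter.
    """
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # sums to 1
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        cur = pyramid[-1]
        # separable blur: filter rows, then columns (reflect padding)
        blurred = np.apply_along_axis(
            lambda r: np.convolve(np.pad(r, 2, mode="reflect"), k, mode="valid"),
            1, cur)
        blurred = np.apply_along_axis(
            lambda c: np.convolve(np.pad(c, 2, mode="reflect"), k, mode="valid"),
            0, blurred)
        pyramid.append(blurred[::2, ::2])  # drop every other row/column
    return pyramid
```

Each level is then matched against the corresponding level of the other image's pyramid, which is what removes the scale difference.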
step 2: respectively extracting a straight line segment and a Harris angular point from a reference image and an image to be matched;
referring to fig. 2, searching Harris corner points in a specified range for the end points of each extracted straight line segment, grouping the single straight line segment and the nearest corner point in association, and combining the feature vectors of the corner points and the spectral information descriptors of the straight line segments to form corner-segment half-width texture feature descriptors;
For a straight line segment, rectangular regions of equal extent perpendicular to the segment on its two sides are taken as the texture description region and divided into m sub-regions, giving a line-segment spectral feature description vector L of dimension 2m:

L = (M_1, S_1, M_2, S_2, ..., M_m, S_m)^T

where M_i and S_i are, respectively, the independently normalized mean and standard deviation of the pixel gradients in the i-th sub-region.
Referring to fig. 3, to account for the differing texture stability on the two sides of a line segment, after the texture region description is established, the variance of the pixel values on each side of the segment is computed; a large variance indicates that the texture on that side is unstable, most likely because it lies in a region of depth change. For line segments whose one-sided variance exceeds a threshold, a one-sided matching strategy is adopted: only the stable half of the description vector is used to build the texture constraint, reducing missed matches. The spectral feature description vector L of the line segment then becomes:

L = (M_1, S_1, M_2, S_2, ..., M_{m/2}, S_{m/2})^T
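The half-width spectral descriptor above can be sketched as follows; the sub-region layout (first half on one side of the line, second half on the other) and the whole-vector normalization are assumptions made for illustration.

```python
import numpy as np

def line_spectral_descriptor(subregions, stable_half=None):
    """Sketch of the line spectral descriptor L.

    `subregions` is a list of m arrays of gradient magnitudes; the
    first m/2 are assumed to lie on one side of the line, the last
    m/2 on the other. Returns the 2m-dim full descriptor, or the
    m-dim half-width descriptor when `stable_half` is 0 or 1.
    """
    m = len(subregions)
    if stable_half is not None:
        half = m // 2
        subregions = subregions[:half] if stable_half == 0 else subregions[half:]
    vals = []
    for sub in subregions:
        a = np.asarray(sub, dtype=float)
        vals.extend([a.mean(), a.std()])   # (M_i, S_i) per sub-region
    v = np.array(vals)
    n = np.linalg.norm(v)
    return v / n if n else v               # normalize the whole vector
```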
When extracting Harris corners, gradient values X and Y in the x and y directions are first calculated in the local region of the corner, and the corner response function E_{x,y} is constructed:

E_{x,y} = Ax^2 + 2Cxy + By^2

where A = X^2 ⊗ w, B = Y^2 ⊗ w, C = (XY) ⊗ w; w is the Gaussian weight of points in the local region of the corner, I is the gray value in the local region of the corner with X = ∂I/∂x and Y = ∂I/∂y, and ⊗ denotes convolution.

E_{x,y} is written in matrix form as E(x, y) = (x, y) H (x, y)^T with H = [[A, C], [C, B]]. The two eigenvalues [α β] of H are calculated, and the eigenvector [α_1 α_2] of the ellipse major semi-axis determined by the Harris corner, corresponding to the minimum eigenvalue α, is taken as one of the description vector factors of the corner. The eigenvalues [α β] represent the magnitude of the gradient change in the local region of the corner, and the eigenvector [α_1 α_2] represents the direction of the overall gradient change in that region (the direction of the ellipse semi-axis).
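The eigen-analysis of the Harris structure matrix can be sketched as follows; plain finite differences and a uniform window stand in for the Gaussian-weighted sums of the patent, so the function and its outputs are illustrative only.

```python
import numpy as np

def harris_eigen(patch):
    """Build the 2x2 structure matrix H from image gradients in a
    local window and return the eigenvalues [alpha, beta]
    (alpha <= beta) plus the eigenvector of the ellipse major
    semi-axis, which corresponds to the minimum eigenvalue.
    """
    patch = patch.astype(float)
    X = np.gradient(patch, axis=1)     # d/dx
    Y = np.gradient(patch, axis=0)     # d/dy
    A, B, C = (X * X).sum(), (Y * Y).sum(), (X * Y).sum()
    H = np.array([[A, C], [C, B]])
    w, v = np.linalg.eigh(H)           # eigenvalues in ascending order
    alpha, beta = w[0], w[1]
    major_axis = v[:, 0]               # eigenvector of the minimum eigenvalue
    return alpha, beta, major_axis
```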
Step 2.1 is executed for the end points of each extracted line segment, and step 2.2 is executed if a Harris corner is found within the specified range. The rotation invariance of the Harris corner is used mainly to handle image rotation; a single line segment is associated (grouped) with its corresponding corner, ensuring as far as possible that the corner is an end point of that line segment. The feature vector of the corner and the spectral information descriptor of the line segment are then combined into the corner-line-segment texture descriptor.
Step 2.1: according to the proximity between corners and line-segment end points, for the reference image and the image to be matched, search for corner points within a distance threshold t around the two end points of each extracted line segment; if one or more corners exist, execute step 2.2. In this embodiment, a circular window of radius 3 pixels is opened around each of the two end points of every extracted line segment, and corner points are searched within the window.
Step 2.2: referring to fig. 4, the closest corner is selected to form a group with the single line segment and the other corners are discarded; one corner may correspond to several line segments, while a single line segment corresponds to at most two corners. Using the rotation invariance of the Harris corner, namely that the directions of the major and minor axes of its characteristic ellipse are unchanged, the rotation deformation between the two images is eliminated from the angle between the major-axis direction and the line-segment direction. Let the coordinates of the start and end points of the straight line segment be [s_x, s_y] and [e_x, e_y]; the counterclockwise angle θ between the line segment and the gradient direction of the Harris major axis can then be calculated as

θ = arctan2(e_y − s_y, e_x − s_x) − arctan2(α_2, α_1), taken modulo 2π.
and finally combining the description vector [ alpha beta ] of the angular point, the included angle theta of the straight line and the angular point gradient direction and the texture description vector L of the line segment to form an angular point-line segment texture feature descriptor of the candidate grouping, wherein when only one angular point is matched, the angular point-line segment texture feature descriptor is as follows:
LΜ=(M 1 ,S 1 ,M 2 ,S 2 ,...,M m/2 ,S m/2 ,α,β,θ) T
when there are two matched corners, the corner-line segment texture feature descriptor is:
LΜ=(M 1 ,S 1 ,M 2 ,S 2 ,...,M m/2 ,S m/2 ,α 1 ,β 1 ,θ 1 ,α 2 ,β 2 ,θ 2 ) T
wherein alpha is 1 、β 1 、θ 1 、α 2 、β 2 、θ 2 Respectively representing the eigenvectors corresponding to the 2 Harris corner points and the corners counterclockwise to the line segment.
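The grouping step can be sketched as follows; the atan2 form of the angle and the concatenation order are assumptions consistent with the descriptors above, and both function names are illustrative.

```python
import numpy as np

def ccw_angle(start, end, axis_dir):
    """Counterclockwise angle theta from the Harris major-axis
    direction to the line segment (step 2.2)."""
    sx, sy = start
    ex, ey = end
    line_ang = np.arctan2(ey - sy, ex - sx)
    axis_ang = np.arctan2(axis_dir[1], axis_dir[0])
    return (line_ang - axis_ang) % (2 * np.pi)

def corner_line_descriptor(L_half, corners):
    """Concatenate the half-width spectral vector with
    (alpha, beta, theta) for one or two associated corners,
    mirroring L_M in the text."""
    parts = [np.asarray(L_half, dtype=float)]
    for alpha, beta, theta in corners:
        parts.append(np.array([alpha, beta, theta], dtype=float))
    return np.concatenate(parts)
```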
Step 3: matching point-line features of the reference image and the image to be matched according to the corner-line-segment half-width texture feature descriptor to obtain a candidate matching set;
Because line segments differ in width, the weight ratio between a line segment and its corner also changes. For a half-width line-segment spectral feature description vector L of dimension m, each value of the corner-line-segment texture feature descriptor is weighted, the weights of the line-segment descriptor and of the Harris corner feature being assigned as follows.

LHD denotes the weighted corner-line-segment texture feature descriptor; its weights should satisfy normalization, the sub-region weights and the corner weight summing to one. Corresponding weights are assigned by the following rules:

(1) If the line segment matches two corners, the weights of the line-segment descriptor and the corner descriptors are W_L^i, the weight of the statistics (standard deviation and mean) of the i-th line-segment neighborhood sub-region, and W_H, the weight of the corner feature description vector, where W_L^i is derived from the Gaussian weight of the i-th sub-region. The farther a sub-region lies from the line segment, the lower its weight; the Gaussian weight is calculated as

f_i = exp(−d_i^2 / (2σ^2))

where d_i is the distance from the i-th neighborhood sub-region to the line segment. The weight matrix W then collects the sub-region weights and the corner weights.

(2) If the line segment matches only one corner, the weight of the line-segment descriptor is increased accordingly, and the weight matrix W is formed in the same way.

The final weighted corner-line-segment texture feature descriptor taking the Gaussian weight into account is LHD = L_M · W^T.
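The Gaussian weighting can be sketched as follows; the exponential form of the weight, the fixed per-entry corner share, and all names are assumptions made for illustration, since the patent's weight formulas are not reproduced in the text.

```python
import numpy as np

def gaussian_weights(dists, sigma=1.0):
    """Gaussian weight of each neighborhood sub-region: the farther a
    sub-region lies from the line, the lower its weight; weights are
    normalized to sum to one."""
    d = np.asarray(dists, dtype=float)
    w = np.exp(-d * d / (2.0 * sigma * sigma))
    return w / w.sum()

def apply_weights(LM, subregion_w, corner_w):
    """Element-wise product L_M * W^T: each (M_i, S_i) pair shares its
    sub-region's weight; the trailing corner entries get corner_w."""
    stats = np.repeat(subregion_w, 2)          # one weight per (M_i, S_i)
    n_corner_vals = len(LM) - len(stats)       # alpha, beta, theta entries
    W = np.concatenate([stats, np.full(n_corner_vals, corner_w)])
    return np.asarray(LM, dtype=float) * W
```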
Step 4: calculating the similarity of candidate matches according to the geometric relation between the images, including the distance between image corner points and the similarity of the line-segment descriptors; screening candidate matches for the groupings of the reference image and the image to be matched, and establishing the candidate matches of each line segment and the candidate matching matrix M; solving the established matching matrix M by spectral analysis and deciding whether each candidate match is accepted or rejected.
When matching, for each line segment in the reference image, the Euclidean distances between its weighted corner-line-segment texture feature descriptor and the weighted descriptors of all line segments in the image to be matched are computed and sorted. Let the shortest distance be s_1 and the second shortest s_2; the two line segments corresponding to s_1 are combined into a candidate matching pair when

s_1 < t and s_1 / s_2 < t_s

where t is a distance limit threshold: only when the shortest distance is below this threshold is the corresponding line segment considered a candidate match. t_s is a threshold on the ratio of the shortest to the second-shortest distance; when the shortest distance is too close to the second shortest, there may be no correctly matching line segment, and both s_1 and s_2 may be mismatches.

Finally, all candidate matches in the two images are computed, and the line-segment candidate matching set is established as

C = {(l_i^r, l_i^t) | i = 1, ..., n}

where l_i^r and l_i^t are the line segments of the i-th candidate match in the reference image and the image to be matched, and n is the number of candidate matching pairs.
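The nearest/second-nearest screening above can be sketched as follows; the descriptor arrays and the threshold values t and t_s are illustrative.

```python
import numpy as np

def candidate_matches(ref_desc, tgt_desc, t=0.6, t_s=0.8):
    """Accept, for each reference descriptor, the closest target
    descriptor only when s1 < t and s1/s2 < t_s (step 4 screening).
    Descriptors are rows of 2-D arrays; returns index pairs."""
    pairs = []
    for i, d in enumerate(ref_desc):
        dists = np.linalg.norm(tgt_desc - d, axis=1)
        order = np.argsort(dists)
        s1, s2 = dists[order[0]], dists[order[1]]
        if s1 < t and s1 < t_s * s2:    # distance limit and ratio test
            pairs.append((i, int(order[0])))
    return pairs
```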
The texture and geometric consistency scores between every two candidate matches are calculated to construct an adjacency matrix M; M is regarded as the adjacency matrix of an undirected graph, which is solved by spectral analysis, and the candidate matches are screened to obtain the final line-segment matching result.
For two candidate-match pairs a and b entering M(a, b), the following constraints are used to construct their similarity:

(1) the intersection ratio I_i, representing the degree of overlap of the two line segments in the horizontal direction;
(2) the projection ratio P_i, representing the distance between the two line segments in the vertical direction;
(3) the line-segment angle Θ, i.e. the angle between the two line segments, calculated from the line-segment angle formula;
(4) the texture similarity V of the texture descriptors of the line segments in the reference image and the image to be matched.

After the texture and geometric constraints are constructed for the candidate matching pair (a, b), the corresponding difference measures d_I, d_P, d_Θ and d_V are obtained.
Having obtained the texture and geometric constraints, the texture and geometric consistency score M_ij between candidate matching pairs is calculated; if the geometric or texture change between the two candidate matches is larger than a preset value, M_ij is set to 0. M_ij is the element in row i, column j of the adjacency matrix M.

Here t_I, t_P, t_Θ and t_V are thresholds limiting the allowed geometric and texture variation between two candidate matches; the condition Γ holds only when d_I, d_P, d_Θ and d_V are all less than 1, and otherwise the texture and geometric consistency score M_ij is set to zero.
After the adjacency matrix M is constructed, the line-segment matching problem is converted into finding a matching cluster C such that the total consistency score is maximal. All candidate matches are represented by an indicator vector x: for the i-th candidate match, x(i) = 1 if it belongs to the matching cluster C, and x(i) = 0 otherwise. The best matching solution x* is thus

x* = argmax_x (x^T M x).
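The spectral step can be sketched with power iteration and a simple threshold discretization; the mean-threshold rule and the omission of one-to-one conflict handling between matches are simplifications of the procedure described above.

```python
import numpy as np

def spectral_match(M, n_iter=100):
    """Score candidate matches by the principal eigenvector of the
    non-negative affinity matrix M (power iteration), then discretize
    it into the indicator vector x by a mean threshold."""
    n = M.shape[0]
    v = np.ones(n) / np.sqrt(n)
    for _ in range(n_iter):
        v = M @ v
        nrm = np.linalg.norm(v)
        if nrm == 0:
            break
        v = v / nrm
    v = np.abs(v)
    x = (v >= v.mean()).astype(int)   # crude discretization of x*
    return x, v
```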
Step 5: outputting the stereo image pair corresponding to the optimal matching scale, i.e. the optimal scales of the reference image and the image to be matched and the corner-single-line-segment matching result at those scales.
It should be understood that parts of the specification not set forth in detail are of the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (7)
1. A method for image matching based on feature description operators of corner and single line segment grouping is characterized by comprising the following steps:
step 1: inputting a reference image and an image to be matched, constructing a multi-level Gaussian image pyramid, performing layer-by-layer down-sampling on the image, performing step 2-4 on the reference image and the image to be matched of each layer, and calculating an optimal matching scale;
step 2: respectively extracting a straight line segment and a Harris angular point from a reference image and an image to be matched;
searching Harris corners in a specified range for the end points of each extracted straight line segment, grouping the single straight line segments and the nearest corners in a correlated manner, and combining feature vectors of the corners and spectral information descriptors of the straight line segments to form corner-line segment half-width texture feature descriptors;
Step 3: matching point-line features of the reference image and the image to be matched according to the corner-line-segment half-width texture feature descriptor to obtain a candidate matching set;
Step 4: calculating the similarity of candidate matches according to the geometric relation between the images, including the distance between image corner points and the similarity of the line-segment descriptors; screening candidate matches for the groupings of the reference image and the image to be matched, and establishing the candidate matches of each line segment and the candidate matching matrix M; solving the established matching matrix M by spectral analysis and deciding whether each candidate match is accepted or rejected;
Step 5: outputting the stereo image pair corresponding to the optimal matching scale, i.e. the optimal scales of the reference image and the image to be matched and the corner-single-line-segment matching result at those scales.
2. The method for image matching based on the feature description operators of the corner point and single line segment grouping as claimed in claim 1, wherein: in the step 1, a multi-level Gaussian image pyramid is constructed by adopting Gaussian down-sampling or wavelet decomposition.
3. The method for image matching based on the feature description operators of the corner point and single line segment grouping as claimed in claim 1, wherein: in the step 2, regarding the straight line segment, taking rectangular areas with the same length in the vertical direction on the two sides of the straight line segment as texture description areas, and dividing the texture description areas into m sections of sub-areas; obtaining a line segment spectral feature description vector L with the dimensionality of 2 m:
L = (M_1, S_1, M_2, S_2, ..., M_m, S_m)^T;

where M_i and S_i are, respectively, the independently normalized mean and standard deviation of the pixel gradients of each sub-region;
for each line segment, after texture region description is established, variance calculation is carried out on pixel values on two sides of the line segment, and a unilateral matching strategy is adopted for the line segment with the unilateral variance larger than a certain threshold value, namely, only the stable half of the description vector is used for constructing texture constraint so as to reduce missing matching; at this time, the spectral feature description vector L of the line segment becomes:
L = (M_1, S_1, M_2, S_2, ..., M_{m/2}, S_{m/2})^T
when extracting the Harris corner, gradient values X and Y in the x and y directions are first calculated in the local region of the corner, and the corner response function E_{x,y} is constructed:

E_{x,y} = Ax^2 + 2Cxy + By^2

where A = X^2 ⊗ w, B = Y^2 ⊗ w, C = (XY) ⊗ w; w is the Gaussian weight of points in the local region of the corner, I is the gray value in the local region of the corner, and ⊗ denotes convolution;

E_{x,y} is written in matrix form as E(x, y) = (x, y) H (x, y)^T with H = [[A, C], [C, B]]; the two eigenvalues [α β] of H are calculated, and the eigenvector [α_1 α_2] of the ellipse major semi-axis determined by the Harris corner, corresponding to the minimum eigenvalue α, is taken as one of the description vector factors of the corner; the eigenvalues [α β] represent the magnitude of the gradient change in the local region of the corner, and the eigenvector [α_1 α_2] represents the direction of the overall gradient change in the local region, i.e. the direction of the ellipse semi-axis.
4. The image matching method for the feature descriptors based on the corner point and single line segment grouping as claimed in claim 3, wherein the step 2 of combining the feature vectors of the corner points and the spectral information descriptors of the line segments to form the corner point-line segment half-width texture feature descriptors is to perform the following steps for each extracted straight line segment:
step 2.1: searching angular points in a distance threshold t for the reference image and the image to be matched respectively by taking two end points of each extracted line segment as centers according to the proximity between the angular points and the end points of the line segment; if one or more corner points exist, executing the step 2.2;
step 2.2: sorting the corner points by their distance to the end points, taking the closest corner to establish a grouping with the line segment and discarding the others; one corner may correspond to a plurality of line segments, and a single line segment corresponds to at most two corners;
let the coordinates of the start and end points of the straight line segment be [s_x, s_y] and [e_x, e_y]; the counterclockwise angle θ between the straight line segment and the gradient direction of the Harris major semi-axis is then calculated as θ = arctan2(e_y − s_y, e_x − s_x) − arctan2(α_2, α_1), taken modulo 2π;
finally, the description vector [α β] of the corner, the angle θ between the straight line segment and the corner gradient direction, and the texture description vector L of the straight line segment are combined into the corner-straight-line-segment texture feature descriptor of the candidate grouping;
wherein: when only one corner is matched, the corner-line segment half-width texture feature descriptor is as follows:
L_M = (M_1, S_1, M_2, S_2, ..., M_{m/2}, S_{m/2}, α, β, θ)^T;
when there are two matched corners, the corner-line half-width texture feature descriptor is:
L_M = (M_1, S_1, M_2, S_2, ..., M_{m/2}, S_{m/2}, α_1, β_1, θ_1, α_2, β_2, θ_2)^T;

where α_1, β_1, θ_1 and α_2, β_2, θ_2 denote, for each of the two Harris corners, its eigenvalues and its counterclockwise angle to the line segment.
5. The method of claim 4, wherein: in step 3, for a half-width line-segment spectral feature description vector L of dimension m, each value of the corner-straight-line-segment texture feature descriptor is weighted, the weights of the straight-line-segment descriptor and of the Harris corner feature being assigned as follows:

LHD denotes the weighted corner-line-segment texture feature descriptor, whose weights should satisfy normalization; corresponding weights are assigned by the following rules:

(1) if the line segment matches two corners, the weights of the line-segment descriptor and the corner descriptors are W_L^i, the weight of the statistics of the i-th line-segment neighborhood sub-region, and W_H, the weight of the corner feature description vector; since the farther a sub-region is from the line segment the lower its weight, the Gaussian weight is calculated as f_i = exp(−d_i^2 / (2σ^2)), where d_i is the distance from the i-th neighborhood sub-region to the line segment; the weight matrix W then collects the sub-region weights and the corner weights;

(2) if the line segment matches only one corner, the weight of the line-segment descriptor is increased accordingly, and the weight matrix W is formed in the same way;

the final weighted corner-line-segment texture feature descriptor taking the Gaussian weight into account is LHD = L_M · W^T.
6. The method of claim 5, wherein the method comprises: step 4, calculating the similarity of candidate matching according to the geometric relationship between the images to be matched, when matching, describing the sub-weighted value of the corner-straight line segment textural feature of each straight line segment in the reference image, then calculating the Euclidean distances between the straight line segment and the sub-weighted values of the descriptors of all the line segments in the images to be matched, and sorting the Euclidean distances, wherein the shortest distance is assumed to be s 1 The next shortest distance is s 2 When s is 1 And s 2 When the following conditions are satisfied, s is taken 1 And combining the corresponding two line segments into a candidate matching pair:
where t denotes the distance-limit threshold; only when the shortest distance is below the threshold is the corresponding line segment considered a candidate matching line segment;
finally, all candidate matches in the two images are computed and a line segment candidate matching set is established:
where the elements of the i-th group are the candidate-matched line segments in the reference image and in the image to be matched, respectively, and n is the number of candidate matching pairs;
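The nearest/second-nearest distance test of claim 6 can be sketched as follows. Note that the patent's exact condition relating s_1, s_2, and t is given in an equation not reproduced here; this sketch assumes the common ratio-test form s_1 < t * s_2, and the function name and default threshold are illustrative assumptions.

```python
import numpy as np

def candidate_matches(ref_desc, tgt_desc, t=0.8):
    """For each reference-image descriptor, sort Euclidean distances to all
    descriptors in the image to be matched; accept the nearest line segment
    as a candidate only when s1 < t * s2 (assumed form of the condition)."""
    pairs = []
    for i, d in enumerate(np.asarray(ref_desc, dtype=float)):
        dists = np.linalg.norm(np.asarray(tgt_desc, dtype=float) - d, axis=1)
        order = np.argsort(dists)
        s1, s2 = dists[order[0]], dists[order[1]]
        if s1 < t * s2:
            pairs.append((i, int(order[0])))  # candidate matching pair
    return pairs
```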
the texture and geometric consistency scores between every two candidate matches are calculated to construct an adjacency matrix M; M is treated as an undirected graph and solved by spectral analysis, and the candidate matches are screened to obtain the final line segment matching result;
for two candidate match pairs a and b, their similarity M(a, b) is constructed using the following constraints:
(1) The intersection ratio I_i, representing the degree of overlap of the two line segments in the horizontal direction;
(2) The projection ratio P_i, representing the distance between the two line segments in the vertical direction;
(3) The line segment included angle θ, i.e. the angle between the two line segments, calculated from the line segment angle formula;
(4) The texture similarity V; the texture descriptors of the line segments in the reference image and in the image to be matched are expressed respectively as:
after the texture and geometric constraints are constructed for the candidate match pair (a, b), the corresponding intersection ratio, projection ratio, included angle, and texture similarity values are obtained respectively;
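Two of the geometric constraints above (the included angle, and an overlap measure standing in for the intersection ratio) can be sketched as follows. The patent's exact formulas for the intersection ratio and projection ratio are not reproduced on this page, so these are common stand-in definitions, and the function names are assumptions.

```python
import numpy as np

def segment_angle(p1, p2, q1, q2):
    """Acute angle (radians) between two line segments given by endpoints."""
    u = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    v = np.asarray(q2, dtype=float) - np.asarray(q1, dtype=float)
    cosang = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def projection_overlap(p1, p2, q1, q2):
    """Overlap of the second segment with the first, measured along the
    direction of the first segment (a stand-in for the intersection ratio)."""
    p1 = np.asarray(p1, dtype=float)
    u = np.asarray(p2, dtype=float) - p1
    length = np.linalg.norm(u)
    u = u / length
    a = sorted([0.0, length])
    b = sorted([np.dot(np.asarray(q1, dtype=float) - p1, u),
                np.dot(np.asarray(q2, dtype=float) - p1, u)])
    inter = min(a[1], b[1]) - max(a[0], b[0])
    return max(inter, 0.0) / length
```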
after the texture and geometric constraints are obtained, the texture and geometric consistency score M_ij between candidate matching pairs is calculated; if the geometric change or texture change between two groups of candidate matches exceeds a preset value, M_ij is set to 0; M_ij is the element in the i-th row and j-th column of the adjacency matrix M; once the adjacency matrix M is constructed, the line segment matching problem is converted into finding a matching cluster C such that the total consistency score within C is maximized;
all candidate matches are represented by an indicator vector x: for the i-th group of candidate matches, x(i) = 1 if it belongs to the matching cluster C, and x(i) = 0 otherwise; the best matching solution x* is then:
x* = argmax(x^T M x).
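The spectral-analysis solution of x* = argmax(x^T M x) can be sketched in the usual way: relax x to a real-valued vector, whose optimum is the principal eigenvector of M, then discretize greedily while enforcing one-to-one matching. This is an illustrative sketch in the style of Leordeanu-Hebert spectral matching, not the patent's exact procedure; the function name and the greedy discretization details are assumptions.

```python
import numpy as np

def spectral_matching(M, ref_ids, tgt_ids):
    """Solve the relaxed matching problem via the principal eigenvector of the
    symmetric consistency matrix M, then greedily accept candidates in order of
    eigenvector weight, skipping any candidate that reuses an already-matched
    reference or target line segment (one-to-one constraint)."""
    vals, vecs = np.linalg.eigh(M)            # eigenvalues in ascending order
    x = np.abs(vecs[:, np.argmax(vals)])      # principal eigenvector
    used_ref, used_tgt, cluster = set(), set(), []
    for i in np.argsort(-x):                  # strongest candidates first
        if x[i] <= 0:
            break
        if ref_ids[i] in used_ref or tgt_ids[i] in used_tgt:
            continue                          # conflicts with accepted matches
        cluster.append(int(i))
        used_ref.add(ref_ids[i])
        used_tgt.add(tgt_ids[i])
    return sorted(cluster)
```

In this sketch, candidates whose consistency scores couple strongly in M receive large eigenvector components and survive the greedy screening, which plays the role of selecting the matching cluster C.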
7. The method of claim 6, wherein the texture and geometric consistency score M_ij is calculated by:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910660833.3A CN110490913B (en) | 2019-07-22 | 2019-07-22 | Image matching method based on feature description operator of corner and single line segment grouping |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110490913A CN110490913A (en) | 2019-11-22 |
CN110490913B true CN110490913B (en) | 2022-11-22 |
Family
ID=68547833
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910660833.3A Active CN110490913B (en) | 2019-07-22 | 2019-07-22 | Image matching method based on feature description operator of corner and single line segment grouping |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110490913B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111898646B (en) * | 2020-07-06 | 2022-05-13 | 武汉大学 | Cross-view image straight line feature matching method based on point-line graph optimization solution |
CN112163622B (en) * | 2020-09-30 | 2022-07-05 | 山东建筑大学 | Global and local fusion constrained aviation wide-baseline stereopair line segment matching method |
CN113095384B (en) * | 2021-03-31 | 2023-04-28 | 安徽工业大学 | Remote sensing image matching method based on linear segment context characteristics |
CN113378507B (en) * | 2021-06-01 | 2023-12-05 | 中科晶源微电子技术(北京)有限公司 | Mask data cutting method and device, equipment and storage medium |
CN113538503B (en) * | 2021-08-21 | 2023-09-01 | 西北工业大学 | Solar panel defect detection method based on infrared image |
CN114012736A (en) * | 2021-12-08 | 2022-02-08 | 北京云迹科技有限公司 | Positioning object for assisting environment positioning and robot system |
CN116309837B (en) * | 2023-03-16 | 2024-04-26 | 南京理工大学 | Method for identifying and positioning damaged element by combining characteristic points and contour points |
CN117114971B (en) * | 2023-08-01 | 2024-03-08 | 北京城建设计发展集团股份有限公司 | Pixel map-to-vector map conversion method and system |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017049994A1 (en) * | 2015-09-25 | 2017-03-30 | 深圳大学 | Hyperspectral image corner detection method and system |
CN105184801A (en) * | 2015-09-28 | 2015-12-23 | 武汉大学 | Optical and SAR image high-precision registration method based on multilevel strategy |
CN106709870A (en) * | 2017-01-11 | 2017-05-24 | 辽宁工程技术大学 | Close-range image straight-line segment matching method |
Also Published As
Publication number | Publication date |
---|---|
CN110490913A (en) | 2019-11-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110490913B (en) | Image matching method based on feature description operator of corner and single line segment grouping | |
CN110363122B (en) | Cross-domain target detection method based on multi-layer feature alignment | |
CN113012212B (en) | Depth information fusion-based indoor scene three-dimensional point cloud reconstruction method and system | |
Tian et al. | HyNet: Learning local descriptor with hybrid similarity measure and triplet loss | |
US9141871B2 (en) | Systems, methods, and software implementing affine-invariant feature detection implementing iterative searching of an affine space | |
Zhou et al. | Neurvps: Neural vanishing point scanning via conic convolution | |
CN112633382B (en) | Method and system for classifying few sample images based on mutual neighbor | |
Liu et al. | A review of keypoints’ detection and feature description in image registration | |
CN111709313B (en) | Pedestrian re-identification method based on local and channel combination characteristics | |
CN109472770B (en) | Method for quickly matching image characteristic points in printed circuit board detection | |
CN112883850A (en) | Multi-view aerospace remote sensing image matching method based on convolutional neural network | |
CN108182705A (en) | A kind of three-dimensional coordinate localization method based on machine vision | |
Li et al. | Image Matching Algorithm based on Feature-point and DAISY Descriptor. | |
CN111199245A (en) | Rape pest identification method | |
Chen et al. | Method on water level ruler reading recognition based on image processing | |
CN114358166B (en) | Multi-target positioning method based on self-adaptive k-means clustering | |
CN117557804A (en) | Multi-label classification method combining target structure embedding and multi-level feature fusion | |
US20190318196A1 (en) | Guided sparse feature matching via coarsely defined dense matches | |
Ma et al. | Visual homing via guided locality preserving matching | |
Shen et al. | Gestalt rule feature points | |
CN112329662B (en) | Multi-view saliency estimation method based on unsupervised learning | |
CN111144469B (en) | End-to-end multi-sequence text recognition method based on multi-dimensional associated time sequence classification neural network | |
CN111311657B (en) | Infrared image homologous registration method based on improved corner principal direction distribution | |
CN113344110A (en) | Fuzzy image classification method based on super-resolution reconstruction | |
CN111160433B (en) | High-speed matching method and system for high-resolution image feature points |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||