CN114926659B - Deformation target positioning algorithm based on SIFT and CM - Google Patents


Info

Publication number
CN114926659B
CN114926659B (application CN202210529546.0A)
Authority
CN
China
Prior art keywords
template
image
matching
distance
sift
Prior art date
Legal status
Active
Application number
CN202210529546.0A
Other languages
Chinese (zh)
Other versions
CN114926659A (en)
Inventor
吴昊
陈红光
卢兴中
Current Assignee
Shanghai Betterway Automation Technology Co ltd
Original Assignee
Shanghai Betterway Automation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Betterway Automation Technology Co ltd filed Critical Shanghai Betterway Automation Technology Co ltd
Priority to CN202210529546.0A priority Critical patent/CN114926659B/en
Publication of CN114926659A publication Critical patent/CN114926659A/en
Application granted granted Critical
Publication of CN114926659B publication Critical patent/CN114926659B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of template matching and discloses a deformation target positioning algorithm based on SIFT and CM, comprising the following steps: S1, establishing multi-layer pyramids for an image and a template; S2, obtaining a plurality of pairs of matching points on the top-layer template and the top-layer image with the SIFT algorithm, and screening out a part of the matching point pairs according to the ratio of the nearest distance to the next-nearest distance; S3, setting an outlier distance threshold, selecting with the RANSAC algorithm the three pairs of matching points yielding the fewest outliers from the screened pairs, and storing the corresponding affine transformation matrix; S4, recovering the rotation angle θ, the x-direction scaling factor scalex and the y-direction scaling factor scaley from the stored affine transformation matrix, and obtaining the positioning result of the top-layer template on the top-layer image. The method reduces the amount of calculation, positions the deformed target quickly, offers good real-time performance, and achieves high positioning accuracy.

Description

Deformation target positioning algorithm based on SIFT and CM
Technical Field
The invention belongs to the technical field of template matching, and particularly relates to a deformation target positioning algorithm based on SIFT and CM.
Background
SIFT and CM stand for Scale-Invariant Feature Transform and Chamfer Matching, respectively. Template matching techniques in the prior art fall mainly into pixel-gray-value-based methods and feature-based methods.
The pixel-gray-value-based method proceeds as follows: first scale and rotate the template to obtain a series of templates; then slide each template over the image and compute an index at each position, such as the sum of absolute gray-value differences between template and image (SAD), the sum of squared gray-value differences between template and image (SSD), or the normalized cross-correlation (NCC), obtaining an index image; select a proper threshold and threshold the index image; finally take the minimum-value point (or maximum-value point) of the thresholded index image as the positioning result. Gray-value-based methods are sensitive to illumination, and the positioning result is not ideal when the illumination changes.
Feature-based methods use, for example, edge features; the classical algorithm is chamfer matching. The method proceeds as follows: perform edge detection on the image and apply a distance transform to the resulting edge map to obtain a distance image D. Scale and rotate the template to obtain a series of templates, perform edge detection on each template to obtain a template edge map T, slide T over D, and at each position compute the chamfer distance according to the formula
CM = (1/n) · Σ_{i=1}^{n} D(x_topleft + x_i, y_topleft + y_i)
where (x_i, y_i), i = 1, …, n, are the n edge points of T and (x_topleft, y_topleft) is the position of the upper-left corner of T in D. The template and position corresponding to the minimum chamfer distance are selected as the positioning result. However, when the target is partially occluded, the chamfer distance becomes relatively large and the positioning may fail.
SIFT-based positioning methods are also widely applied. They proceed as follows: extract feature points from the template and the image respectively and compute a descriptor for each feature point; for each template feature point, find the two most similar feature points in the image; screen out a set of good matching point pairs according to the ratio of the nearest distance to the next-nearest distance; and substitute the matching point pairs into a system of equations to solve for the scaling factors, rotation angle and position, giving the positioning result. However, it is difficult to guarantee that the screened matching point pairs are all correct.
Disclosure of Invention
The invention aims to provide a deformation target positioning algorithm based on SIFT and CM so as to solve the problems described in the background art.
In order to achieve the above object, the present invention provides the following technical solution: a deformation target positioning algorithm based on SIFT and CM, comprising the following steps:
S1, establishing a plurality of layers of pyramids for an image and a template;
S2, obtaining a plurality of pairs of matching points on the top-layer template and the top-layer image with the SIFT algorithm, and screening out a part of the matching point pairs according to the ratio of the nearest distance to the next-nearest distance;
S3, setting an outlier distance threshold, selecting the three pairs of matching points yielding the fewest outliers from the matching point pairs screened in the previous step with the RANSAC algorithm, and storing the corresponding affine transformation matrix;
s4, resolving the rotation angle theta, the x-direction scaling factor scalex and the y-direction scaling factor scaley from the affine transformation matrix stored in the previous step, and obtaining the positioning result of the top-layer template on the top-layer image by the position of the coordinate system origin of the template on the image;
s5, respectively taking θ, scalex and scaley as centers to obtain U (θ), U (scalex) and U (scaley), taking Cartesian products of the three neighbors, and then carrying out corresponding transformation on the bottom template to obtain a series of templates;
s6, the coordinates (x 0 ,y 0 ) Multiplying the image by a magnification to obtain the approximate coordinate (x 1 ,y 1 ) In (x) 1 ,y 1 ) And (3) making a neighborhood for the center, sliding the coordinate system origin of the series of templates obtained in the step (S5) in the neighborhood, calculating the distance of the chamfer, finding out the template corresponding to the minimum distance of the chamfer and the position of the template, and obtaining the coordinate of the coordinate system origin on the image, namely the final positioning result, by the rotation angle, the x-direction scaling coefficient, the y-direction scaling coefficient and the y-direction scaling coefficient corresponding to the template.
Preferably, in step S2, feature points are detected on the top-layer template and the top-layer image with the SIFT algorithm and feature-point descriptors are computed; for each descriptor of the template, the two best-matching descriptors among all descriptors of the image are found and the ratio of the nearest distance to the next-nearest distance is computed; if the ratio is smaller than a set threshold, the template feature point and its best-matching image feature point are taken as a matching pair.
Preferably, in step S3, several iterations are performed on the matching point pairs obtained in the previous step; each iteration randomly extracts three pairs of matching points from them and substitutes their coordinates into the system of equations shown below,
u_i = a11·x_i + a12·y_i + t_x
v_i = a21·x_i + a22·y_i + t_y    (i = 1, 2, 3)
where (x_i, y_i), i = 1, 2, 3, are the coordinates of the template feature points in the template coordinate system and (u_i, v_i), i = 1, 2, 3, are the coordinates of the corresponding matching points on the image; the system is solved and the solution vector is recombined into the affine transformation matrix shown below,
M = [ a11  a12  t_x ]
    [ a21  a22  t_y ]
For each pair of matching points, the template feature point is affine-transformed as shown below,
[ x' ]       [ x ]
[ y' ] = M · [ y ]
             [ 1 ]
and the Euclidean distance between the point (x', y') and the matching point of the template feature point is calculated; if the distance exceeds the set threshold, the matching point is an outlier. In this way the number of outliers corresponding to each affine transformation is calculated, and the affine transformation matrix corresponding to the fewest outliers is stored.
Preferably, in step S5, the Cartesian product A = U(scalex) × U(scaley) × U(θ) is formed from U(θ), U(scalex) and U(scaley), and for each element in A the corresponding affine transformation is applied to the bottom-layer template, obtaining a series of templates.
Preferably, in step S6, edge detection is performed on the bottom-layer image and a distance transform is then applied, yielding a distance image D;
with nlevel pyramid layers, the coordinates (x0, y0) are multiplied by 2^(nlevel−1) to obtain the approximate coordinates (x1, y1), i.e. (x1, y1) = 2^(nlevel−1) × (x0, y0); a radius R is set and a square neighborhood of radius R centered on (x1, y1) is taken; the coordinate-system origin of each of the templates obtained in the previous step is slid within this neighborhood, and the chamfer distance at each position is calculated.
The beneficial effects of the invention are as follows:
1. According to the invention, multi-layer pyramids are established for the image and the template, and several pairs of matching points are obtained on the top-layer template and top-layer image with the SIFT algorithm. A part of the matching point pairs is screened by the ratio of the nearest distance to the next-nearest distance; an outlier distance threshold is set and the RANSAC algorithm picks, from the screened pairs, the three pairs of matching points yielding the fewest outliers, whose affine transformation matrix is stored. The rotation angle θ, the x-direction scaling factor scalex, the y-direction scaling factor scaley and the position (x0, y0) of the template coordinate-system origin on the image are then recovered from this matrix, giving the positioning result of the top-layer template on the top-layer image. Neighborhoods U(θ), U(scalex) and U(scaley) centered on θ, scalex and scaley are taken; their Cartesian product is formed and the corresponding transformations are applied to the bottom-layer template, producing a series of templates. The coordinates (x0, y0) are multiplied by the pyramid magnification factor to obtain the approximate coordinates (x1, y1); a neighborhood is made around (x1, y1); the coordinate-system origin of each template slides in this neighborhood, and the template and position with the minimum chamfer distance are found. The rotation angle, x-direction scaling factor and y-direction scaling factor of that template, together with the coordinates of its coordinate-system origin on the image, are the final positioning result. The method thereby reduces the amount of calculation, positions the deformed target quickly, offers good real-time performance, and achieves high positioning accuracy.
Drawings
FIG. 1 is a schematic diagram of a pyramid structure according to the present invention;
FIG. 2 is a schematic flow chart of the method of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in figs. 1 to 2, an embodiment of the present invention provides a deformation target positioning algorithm based on SIFT and CM, comprising the following steps.
Step 1: establish multi-layer pyramids for the image and the template.
Step 2: obtain several pairs of matching points on the top-layer template and the top-layer image with the SIFT algorithm, and screen out a part of the matching point pairs according to the ratio of the nearest distance to the next-nearest distance.
Step 3: set an outlier distance threshold, use the RANSAC algorithm to pick from the screened pairs the three pairs of matching points yielding the fewest outliers, and store the corresponding affine transformation matrix.
Step 4: recover the rotation angle θ, the x-direction scaling factor scalex, the y-direction scaling factor scaley and the position of the template coordinate-system origin on the image from the stored affine transformation matrix, giving the positioning result of the top-layer template on the top-layer image.
Step 5: take neighborhoods U(θ), U(scalex) and U(scaley) centered on θ, scalex and scaley respectively; form the Cartesian product of the three neighborhoods and apply the corresponding transformations to the bottom-layer template to obtain a series of templates.
Step 6: multiply the coordinates (x0, y0) by the pyramid magnification factor to obtain the approximate coordinates (x1, y1); make a neighborhood centered on (x1, y1); slide the coordinate-system origin of each template obtained in the previous step within this neighborhood and compute the chamfer distance; find the template and position with the minimum chamfer distance: the rotation angle, x-direction scaling factor and y-direction scaling factor of that template, together with the coordinates of its coordinate-system origin on the image, are the final positioning result.
The steps are specifically described below.
1. Establishing multiple layers of pyramids for images and templates
In the field of machine vision the image resolution is generally large; if the SIFT algorithm were used directly on the original image, the amount of calculation would be large, the processing slow, and the real-time requirement could not be met. To reduce the amount of calculation, multi-layer pyramids are built for the template and the image. The number of layers depends on the actual situation, with one constraint: the target must remain clearly visible in the top-layer template and top-layer image. The pyramid is created as shown in fig. 1: taking the pixel at position (0, 0) of the second layer as an example, it is the average of the pixels at (0, 0), (0, 1), (1, 0) and (1, 1) of the first layer, and so on.
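The 2×2 averaging described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the function name and the use of NumPy are choices of this sketch.

```python
import numpy as np

def build_pyramid(img, nlevel):
    """Build an nlevel image pyramid; each higher layer averages the 2x2
    blocks of the layer below, as in fig. 1 (layer-1 pixels (0,0), (0,1),
    (1,0), (1,1) average into layer-2 pixel (0,0))."""
    pyramid = [np.asarray(img, dtype=np.float64)]
    for _ in range(nlevel - 1):
        prev = pyramid[-1]
        # crop to an even size so the image splits into whole 2x2 blocks
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        p = prev[:h, :w]
        # average each non-overlapping 2x2 block
        nxt = (p[0::2, 0::2] + p[0::2, 1::2] + p[1::2, 0::2] + p[1::2, 1::2]) / 4.0
        pyramid.append(nxt)
    return pyramid
```

Each layer halves the resolution, so work done on the top layer is reduced by a factor of roughly 4^(nlevel−1) relative to the original image.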
2. Obtain a plurality of pairs of matching points on the top-layer template and the top-layer image with the SIFT algorithm, and screen out a part of the matching point pairs according to the ratio of the nearest distance to the next-nearest distance
Feature points are detected on the top-layer template and the top-layer image with the SIFT algorithm, and feature-point descriptors are computed. For each descriptor of the template, the two best-matching descriptors among all descriptors of the image are found and the ratio of the nearest distance to the next-nearest distance is computed; if the ratio is smaller than a set threshold, the template feature point and its best-matching image feature point are taken as a matching pair.
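The nearest/next-nearest screening step can be sketched as below. This covers only the ratio test; detecting the SIFT feature points and computing their descriptors would typically be delegated to a library such as OpenCV (cv2.SIFT_create). All names and the threshold value are illustrative.

```python
import numpy as np

def ratio_test_matches(tmpl_desc, img_desc, ratio_thresh=0.8):
    """For each template descriptor, find its two nearest image descriptors
    (Euclidean distance) and keep the pair only when
    nearest / next-nearest < ratio_thresh.
    Assumes img_desc holds at least two descriptors."""
    pairs = []
    for i, d in enumerate(tmpl_desc):
        dists = np.linalg.norm(img_desc - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]          # nearest and next-nearest
        if dists[j1] / dists[j2] < ratio_thresh:
            pairs.append((i, j1))               # (template index, image index)
    return pairs
```

A small ratio means the best match is much closer than the runner-up, i.e. the match is unambiguous; ambiguous matches (ratio near 1) are discarded.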
3. Set an outlier distance threshold, use the RANSAC algorithm to pick from the screened matching point pairs the three pairs yielding the fewest outliers, and store the corresponding affine transformation matrix
Several iterations are performed on the matching point pairs obtained in the previous step. Each iteration randomly extracts three pairs of matching points and substitutes their coordinates into the system of equations shown below,
u_i = a11·x_i + a12·y_i + t_x
v_i = a21·x_i + a22·y_i + t_y    (i = 1, 2, 3)    (1)
where (x_i, y_i), i = 1, 2, 3, are the coordinates of the template feature points in the template coordinate system and (u_i, v_i), i = 1, 2, 3, are the coordinates of the corresponding matching points on the image. The system is solved and the solution vector is recombined into the affine transformation matrix shown below,
M = [ a11  a12  t_x ]
    [ a21  a22  t_y ]    (2)
For each pair of matching points, the template feature point is affine-transformed as shown below,
[ x' ]       [ x ]
[ y' ] = M · [ y ]    (3)
             [ 1 ]
and the Euclidean distance between the point (x', y') and the matching point of the template feature point is calculated; if the distance exceeds the set threshold, the matching point is an outlier. In this way, the number of outliers corresponding to each affine transformation is calculated, and the affine transformation matrix corresponding to the fewest outliers is stored.
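One RANSAC iteration, i.e. solving the six-equation system from three extracted pairs and counting outliers against the distance threshold, might look like this sketch. The helper names are illustrative; the 2×3 matrix layout follows formula (2).

```python
import numpy as np

def affine_from_3_pairs(src, dst):
    """Solve u_i = a11*x_i + a12*y_i + tx, v_i = a21*x_i + a22*y_i + ty
    (i = 1, 2, 3) for the six unknowns and return the 2x3 affine matrix."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0]); b.append(u)
        A.append([0, 0, 0, x, y, 1]); b.append(v)
    a11, a12, tx, a21, a22, ty = np.linalg.solve(np.array(A, float),
                                                 np.array(b, float))
    return np.array([[a11, a12, tx], [a21, a22, ty]])

def count_outliers(M, src, dst, dist_thresh):
    """A pair is an outlier when the transformed template point lies farther
    than dist_thresh (Euclidean distance) from its matched image point."""
    src_h = np.column_stack([src, np.ones(len(src))])  # homogeneous coords
    proj = src_h @ M.T
    d = np.linalg.norm(proj - np.asarray(dst, float), axis=1)
    return int(np.sum(d > dist_thresh))
```

A full RANSAC loop would repeat this for randomly sampled triples and keep the matrix with the fewest outliers, as described above.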
4. Recover the rotation angle θ, the x-direction scaling factor scalex and the y-direction scaling factor scaley from the stored affine transformation matrix, together with the position (x0, y0) of the template coordinate-system origin on the image, obtaining the positioning result of the top-layer template on the top-layer image
Comparing the affine transformation matrix shown in formula (2) with the parameterized affine transformation matrix shown in formula (4),
M = [ scalex·cosθ   −scaley·sinθ   t_x ]
    [ scalex·sinθ    scaley·cosθ   t_y ]    (4)
it can be found that
θ = arctan2(a21, a11)    (5)
scalex = sqrt(a11² + a21²)    (6)
scaley = sqrt(a12² + a22²)    (7)
x0 = t_x    (8)
y0 = t_y    (9)
The above is the result of the positioning of the top template on the top image.
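The decomposition of formulas (5)-(9) can be sketched as follows, under the no-shear model of formula (4) in which the matrix columns have the form (scalex·cosθ, scalex·sinθ) and (−scaley·sinθ, scaley·cosθ). The function name is illustrative.

```python
import numpy as np

def decompose_affine(M):
    """Recover theta, scalex, scaley and (x0, y0) from a 2x3 affine matrix
    M = [[scalex*cos(t), -scaley*sin(t), tx],
         [scalex*sin(t),  scaley*cos(t), ty]] (no shear assumed)."""
    a11, a12, tx = M[0]
    a21, a22, ty = M[1]
    theta = np.arctan2(a21, a11)        # formula (5)
    scalex = np.hypot(a11, a21)         # formula (6): norm of first column
    scaley = np.hypot(a12, a22)         # formula (7): norm of second column
    return theta, scalex, scaley, tx, ty  # (x0, y0) = (tx, ty), (8)-(9)
```

Using arctan2 rather than a plain arctangent keeps the angle correct over the full (−π, π] range.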
5. Take neighborhoods U(scalex), U(scaley) and U(θ) centered on scalex, scaley and θ respectively; form the Cartesian product of the three neighborhoods and apply the corresponding transformations to the bottom-layer template to obtain a series of templates
Select a step scalexstep and a radius scalexr for the x-direction scaling factor, a step scaleystep and a radius scaleyr for the y-direction scaling factor, and a step sitastep and a radius sitar for the rotation angle. Then,
U(scalex) = {scalex + k·scalexstep, k = −scalexr, …, scalexr}    (10)
U(scaley) = {scaley + k·scaleystep, k = −scaleyr, …, scaleyr}    (11)
U(θ) = {θ + k·sitastep, k = −sitar, …, sitar}    (12)
they are subjected to Cartesian product to obtain
A=U(scalex)×U(scaley)×U(θ) (13)
And (3) carrying out corresponding affine transformation on the bottom template for each element in the A to obtain a series of templates.
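Sampling the three neighborhoods with their steps and radii and forming the Cartesian product A can be sketched with itertools; the parameter names mirror those above and are otherwise illustrative.

```python
import itertools

def parameter_grid(theta, scalex, scaley,
                   sitastep, sitar,
                   scalexstep, scalexr,
                   scaleystep, scaleyr):
    """Sample U(scalex), U(scaley), U(theta) around the top-layer estimate
    (radius counts of steps on each side) and return their Cartesian product
    A = U(scalex) x U(scaley) x U(theta)."""
    U_sx = [scalex + k * scalexstep for k in range(-scalexr, scalexr + 1)]
    U_sy = [scaley + k * scaleystep for k in range(-scaleyr, scaleyr + 1)]
    U_t = [theta + k * sitastep for k in range(-sitar, sitar + 1)]
    return list(itertools.product(U_sx, U_sy, U_t))
```

Each element of A is a (scalex, scaley, θ) triple; applying the corresponding affine transformation to the bottom-layer template for each triple yields the series of candidate templates.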
6. Multiply the coordinates (x0, y0) by the pyramid magnification factor to obtain the approximate coordinates (x1, y1); make a neighborhood centered on (x1, y1); slide the coordinate-system origin of each template obtained in the previous step within this neighborhood and compute the chamfer distance; find the template and position with the minimum chamfer distance: the rotation angle, x-direction scaling factor and y-direction scaling factor of that template, together with the coordinates of its coordinate-system origin on the image, are the final positioning result
Edge detection is performed on the bottom-layer image and a distance transform is then applied, yielding a distance image D. With nlevel pyramid layers, the coordinates (x0, y0) are multiplied by 2^(nlevel−1) to obtain the approximate coordinates (x1, y1), i.e. (x1, y1) = 2^(nlevel−1) × (x0, y0). A radius R is set and a square neighborhood of radius R centered on (x1, y1) is taken. The coordinate-system origin of each template obtained in the previous step is slid within this neighborhood, and the chamfer distance at each position is calculated as
CM = (1/n) · Σ_{(x_i, y_i) ∈ E} D(x_topleft + x_i, y_topleft + y_i)
where E is the set of edge points of the template, (x_topleft, y_topleft) are the coordinates of the upper-left corner of the template on the image, and n is the number of template edge points. The template and position corresponding to the minimum chamfer distance are found; the rotation angle, x-direction scaling factor and y-direction scaling factor of that template, together with the coordinates of its coordinate-system origin on the image, are the final positioning result.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising" and any variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus comprising a list of elements does not include only those elements, but may include other elements not expressly listed or inherent to such a process, method, article or apparatus.
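The distance image and the chamfer score of step 6 can be sketched as follows. The brute-force distance transform here is for illustration only; a real implementation would use a linear-time distance transform (e.g. OpenCV's distanceTransform). All names are illustrative.

```python
import numpy as np

def distance_image(edge_map):
    """Brute-force distance transform: D[y, x] = Euclidean distance from
    pixel (x, y) to the nearest edge pixel of edge_map (nonzero entries)."""
    ys, xs = np.nonzero(edge_map)
    edges = np.column_stack([xs, ys]).astype(float)
    h, w = edge_map.shape
    D = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            D[y, x] = np.min(np.hypot(edges[:, 0] - x, edges[:, 1] - y))
    return D

def chamfer_distance(D, template_edges, x_topleft, y_topleft):
    """Mean of D over the template's edge points E, with the template's
    upper-left corner placed at (x_topleft, y_topleft) on the image."""
    vals = [D[y_topleft + y, x_topleft + x] for (x, y) in template_edges]
    return float(np.mean(vals))
```

Sliding each candidate template's origin over the square neighborhood and taking the placement with the smallest chamfer score implements the final search of step 6.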
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (5)

1. A deformation target positioning algorithm based on SIFT and CM, characterized in that it comprises the following steps:
s1, establishing a plurality of layers of pyramids for an image and a template;
s2, obtaining a plurality of pairs of matching points by adopting a SIFT algorithm on the top template and the top image, and screening out a part of matching point pairs according to the ratio of the nearest distance to the next nearest distance;
s3, setting an outlier distance threshold, selecting three pairs of matching points with the least outlier from the matching point pairs screened in the previous step by adopting a RANSAC algorithm, and storing a corresponding affine transformation matrix;
s4, resolving the rotation angle theta, the x-direction scaling factor scalex and the y-direction scaling factor scaley from the affine transformation matrix stored in the previous step, and obtaining the positioning result of the top-layer template on the top-layer image by the position of the coordinate system origin of the template on the image, wherein:
s5, respectively taking θ, scalex and scaley as centers to obtain U (θ), U (scalex) and U (scaley), taking Cartesian products of the three neighbors, and then carrying out corresponding transformation on the bottom template to obtain a series of templates;
s6, the coordinates (x 0 ,y 0 ) Multiplying the image by a magnification to obtain the coordinate (x) of the origin of the coordinate system of the bottom template on the bottom image 1 ,y 1 ) In (x) 1 ,y 1 ) And (3) making a neighborhood for the center, sliding the coordinate system origin of the series of templates obtained in the step (S5) in the neighborhood, calculating the distance of the chamfer, finding out the template corresponding to the minimum distance of the chamfer and the position of the template, and obtaining the coordinate of the coordinate system origin on the image, namely the final positioning result, by the rotation angle, the x-direction scaling coefficient, the y-direction scaling coefficient and the y-direction scaling coefficient corresponding to the template.
2. A deformation target positioning algorithm based on SIFT and CM according to claim 1, wherein: in step S2, feature points are detected on the top-layer template and the top-layer image with the SIFT algorithm and feature-point descriptors are computed; for each descriptor of the template, the two best-matching descriptors among all descriptors of the image are found and the ratio of the nearest distance to the next-nearest distance is computed; if the ratio is smaller than a set threshold, the template feature point and its best-matching image feature point are taken as a matching pair.
3. A deformation target positioning algorithm based on SIFT and CM according to claim 1, wherein: in step S3, several iterations are carried out on the matching point pairs obtained in the previous step; each iteration randomly extracts three pairs of matching points from them and substitutes their coordinates into the system of equations shown below,
u_i = a11·x_i + a12·y_i + t_x
v_i = a21·x_i + a22·y_i + t_y    (i = 1, 2, 3)
where (x_i, y_i), i = 1, 2, 3, are the coordinates of the template feature points in the template coordinate system and (u_i, v_i), i = 1, 2, 3, are the coordinates of the corresponding matching points on the image; the system is solved and the solution vector is recombined into the affine transformation matrix shown below,
M = [ a11  a12  t_x ]
    [ a21  a22  t_y ]
for each pair of matching points, the template feature point is affine-transformed as shown below,
[ x' ]       [ x ]
[ y' ] = M · [ y ]
             [ 1 ]
and the Euclidean distance between the point (x', y') and the matching point of the template feature point is calculated; if the distance exceeds the set threshold, the matching point is an outlier; in this way, the number of outliers corresponding to each affine transformation is calculated, and the affine transformation matrix corresponding to the fewest outliers is stored.
4. A deformation target positioning algorithm based on SIFT and CM according to claim 1, wherein: in step S5, the Cartesian product A = U(scalex) × U(scaley) × U(θ) is formed from U(θ), U(scalex) and U(scaley), and for each element in A the corresponding affine transformation is applied to the bottom-layer template, obtaining a series of templates.
5. A deformation target positioning algorithm based on SIFT and CM according to claim 1, wherein: in step S6, edge detection is performed on the bottom-layer image and a distance transform is then applied, yielding a distance image D;
with nlevel pyramid layers, the coordinates (x0, y0) are multiplied by 2^(nlevel−1) to obtain the coordinates (x1, y1), i.e. (x1, y1) = 2^(nlevel−1) × (x0, y0); a radius R is set and a square neighborhood of radius R centered on (x1, y1) is taken; the coordinate-system origin of each of the templates obtained in the previous step is slid within this neighborhood, and the chamfer distance at each position is calculated.
CN202210529546.0A 2022-05-16 2022-05-16 Deformation target positioning algorithm based on SIFT and CM Active CN114926659B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210529546.0A CN114926659B (en) 2022-05-16 2022-05-16 Deformation target positioning algorithm based on SIFT and CM


Publications (2)

Publication Number Publication Date
CN114926659A CN114926659A (en) 2022-08-19
CN114926659B true CN114926659B (en) 2023-08-08

Family

ID=82808130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210529546.0A Active CN114926659B (en) 2022-05-16 2022-05-16 Deformation target positioning algorithm based on SIFT and CM

Country Status (1)

Country Link
CN (1) CN114926659B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140028809A (en) * 2012-08-30 Samsung Techwin Co., Ltd. Adaptive image processing apparatus and method in image pyramid
CN104134220A (en) * 2014-08-15 2014-11-05 北京东方泰坦科技股份有限公司 Low-altitude remote sensing image high-precision matching method with consistent image space
CN105719285A (en) * 2016-01-19 2016-06-29 东南大学 Pedestrian detection method based on directional chamfering distance characteristics
CN105930858A (en) * 2016-04-06 2016-09-07 吴晓军 Fast high-precision geometric template matching method enabling rotation and scaling functions
CN106651827A (en) * 2016-09-09 2017-05-10 浙江大学 Fundus image registering method based on SIFT characteristics
CN107240130A (en) * 2017-06-06 2017-10-10 苍穹数码技术股份有限公司 Remote Sensing Image Matching method, apparatus and system
WO2017193596A1 (en) * 2016-05-13 2017-11-16 广州视源电子科技股份有限公司 Image matching method and system for printed circuit board
CN110660092A (en) * 2019-08-26 2020-01-07 广东奥普特科技股份有限公司 Log-Polar transform-based image fast matching algorithm
CN112288009A (en) * 2020-10-29 2021-01-29 西安电子科技大学 R-SIFT chip hardware Trojan horse image registration method based on template matching
CN112329880A (en) * 2020-11-18 2021-02-05 德中(天津)技术发展股份有限公司 Template fast matching method based on similarity measurement and geometric features
CN112434705A (en) * 2020-11-09 2021-03-02 中国航空工业集团公司洛阳电光设备研究所 Real-time SIFT image matching method based on Gaussian pyramid grouping

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI521448B (en) * 2014-03-18 2016-02-11 Univ Yuan Ze Vehicle identification system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Mark Point Matching of PCB Boards Based on SIFT Algorithm and Harris Corner Detection; Li Dashuang et al.; Information & Computer, No. 14, pp. 53-55 *

Also Published As

Publication number Publication date
CN114926659A (en) 2022-08-19

Similar Documents

Publication Publication Date Title
CN109740665B (en) Method and system for detecting ship target with occluded image based on expert knowledge constraint
CN105427298B (en) Remote sensing image registration method based on anisotropic gradient metric space
CN109118473B (en) Angular point detection method based on neural network, storage medium and image processing system
US8509536B2 (en) Character recognition device and method and computer-readable medium controlling the same
CN116664559B (en) Machine vision-based memory bank damage rapid detection method
CN111709980A (en) Multi-scale image registration method and device based on deep learning
CN111461113B (en) Large-angle license plate detection method based on deformed plane object detection network
EP1092206A1 (en) Method of accurately locating the fractional position of a template match point
CN111009001A (en) Image registration method, device, equipment and storage medium
CN109948135A (en) A kind of method and apparatus based on table features normalized image
CN112085709A (en) Image contrast method and equipment
CN116433733A (en) Registration method and device between optical image and infrared image of circuit board
CN115471682A (en) Image matching method based on SIFT fusion ResNet50
CN114674826A (en) Visual detection method and detection system based on cloth
CN117557565A (en) Detection method and device for lithium battery pole piece
CN111311657B (en) Infrared image homologous registration method based on improved corner principal direction distribution
CN114926659B (en) Deformation target positioning algorithm based on SIFT and CM
CN117314901A (en) Scale-adaptive chip detection neural network system
CN109827504B (en) Machine vision-based steel coil end face local radial detection method
CN115588109B (en) Image template matching method, device, equipment and application
CN113643370B (en) NCC algorithm-based image positioning method and device
CN114926668B (en) Deformation target positioning algorithm based on SIFT
CN115619678A (en) Image deformation correction method and device, computer equipment and storage medium
CN114742705A (en) Image splicing method based on halcon
CN111768436B (en) Improved image feature block registration method based on fast-RCNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant