CN112633304A - Robust fuzzy image matching method - Google Patents

Robust fuzzy image matching method

Info

Publication number
CN112633304A
CN112633304A (application number CN201910898199.7A)
Authority
CN
China
Prior art keywords
descriptor
feature
tpd
nearest neighbor
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910898199.7A
Other languages
Chinese (zh)
Other versions
CN112633304B (en)
Inventor
陈月玲 (Chen Yueling)
夏仁波 (Xia Renbo)
赵吉宾 (Zhao Jibin)
刘明洋 (Liu Mingyang)
于彦凤 (Yu Yanfeng)
赵亮 (Zhao Liang)
付生鹏 (Fu Shengpeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Institute of Automation of CAS
Original Assignee
Shenyang Institute of Automation of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS
Priority to CN201910898199.7A (granted as CN112633304B)
Publication of CN112633304A
Application granted
Publication of CN112633304B
Current legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Optimization (AREA)
  • Software Systems (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computational Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Algebra (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a robust blurred image matching method. The method comprises the following steps: first, two images with different degrees of blur are input. Second, a group of scale-invariant feature transform (SIFT) points is extracted and, to further improve the distinctiveness of the SIFT descriptors, three scale-invariant concentric circular regions are applied to generate the descriptors. Third, to reduce the high dimensionality and complexity of the SIFT descriptor, a locality preserving projection (LPP) technique is employed to reduce the dimension of the descriptor. Finally, the matched feature points are obtained using a Euclidean-distance similarity measure. The method reduces the data volume, improves the matching speed and matching accuracy, and can be applied to other image matching methods.

Description

Robust fuzzy image matching method
Technical Field
The invention relates to the technical field of computer vision, in particular to a robust fuzzy image matching method.
Background
Image matching is a specialized field of image processing. Consistent feature points are extracted from different images of the same scene to determine the geometric correspondence between the images and obtain a matched image, which can describe the scene more accurately than a single image. In general, image matching can be carried out with methods based on local feature extraction and matching. These methods mainly address the scale and rotation invariance of the input images; they do not account for the large amount of computation involved or for real-time requirements, and for images of blurred scenes they cannot effectively and accurately obtain the corresponding matching point pairs.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a robust blurred image matching method. It uses three scale-invariant concentric circular regions together with the locality preserving projection (LPP) technique to reduce the dimensionality of the descriptor, which enhances the distinguishability of the feature points while greatly improving computational efficiency, substantially raises the correct matching rate, and strengthens robustness.
The technical solution adopted by the invention to achieve this purpose is as follows: a robust blurred image matching method comprising the following steps:
S1: inputting two original images with different degrees of blur;
S2: extracting feature points from the two original images using the scale-invariant feature transform (SIFT) algorithm;
S3: establishing three scale-invariant concentric circular regions around the feature points on each of the two original images and describing the feature points, forming the feature point descriptors of the two images;
S4: reducing the dimensionality of the feature point descriptors with the locality preserving projection (LPP) method, thereby improving the computational efficiency of the algorithm;
S5: matching the dimension-reduced feature point descriptors of the two original images and selecting accurate matching point pairs from the two images.
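By way of illustration, steps S1 and S2 map directly onto the OpenCV library; the following minimal Python sketch shows them. The file names are placeholders, cv2.SIFT_create requires OpenCV 4.4 or later, and steps S3 to S5 are only indicated by comments here, with sketches given in the detailed description below.

```python
import cv2

# S1: input two original images with different degrees of blur
# (file names below are placeholders)
img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("blurred.png", cv2.IMREAD_GRAYSCALE)

# S2: extract SIFT feature points on both images
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# S3: build three-scale concentric-region descriptors PD
# S4: reduce PD to TPD with locality preserving projection (LPP)
# S5: match TPD descriptors with the Euclidean ratio test
```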
In step S3, the feature points are described by specifying the orientation information of the descriptor.
The orientation information of the descriptor is specified as follows:
each feature point is described by 16 (4 × 4) seed points; the gradient histogram of the region around each seed point is divided into 8 orientation bins covering 0° to 360°, and the histograms are weighted with a Gaussian window to generate a 128-dimensional feature vector;
the feature point descriptor LSIFT described by the three scale-invariant central regions is defined as:

PD = α₁L₁ + α₂L₂ + α₃L₃

where Lᵢ (i = 1, 2, 3) are the 128-dimensional SIFT descriptors, PD is the weighted 128-dimensional descriptor, and α₁, α₂, α₃ are preset weighting coefficients.
Applying the locality preserving projection (LPP) method to reduce the dimensionality of the feature descriptors in step S4 comprises:
a. Define the feature point descriptors LSIFT described by the three scale-invariant central regions as X = (x₁, x₂, …, xₘ), where xᵢ denotes the feature point descriptor LSIFT of one of the images, and let yᵢ = wᵀxᵢ denote the one-dimensional description under the transformation vector w. Define the similarity matrix S (Sᵢⱼ = Sⱼᵢ):

Sᵢⱼ = exp(−‖xᵢ − xⱼ‖²/t) if xᵢ and xⱼ are neighboring descriptors (t being a kernel width parameter), and Sᵢⱼ = 0 otherwise.

b. Select a suitable projection as the solution that minimizes the objective function f:

f = Σᵢ,ⱼ (yᵢ − yⱼ)² Sᵢⱼ

where D is the diagonal matrix with Dᵢᵢ = Σⱼ Sᵢⱼ and L = D − S is the Laplacian matrix. The following constraint is imposed:

YᵀDY = wᵀXDXᵀw = 1

c. Since Σᵢ,ⱼ (yᵢ − yⱼ)² Sᵢⱼ = 2wᵀX(D − S)Xᵀw = 2wᵀXLXᵀw, the problem of minimizing the objective function f can be simplified to:

minimize wᵀXLXᵀw subject to wᵀXDXᵀw = 1

d. This can be converted into the generalized eigenvalue problem:

XLXᵀw = λXDXᵀw

where XLXᵀ and XDXᵀ are both symmetric positive semi-definite matrices.

e. Let wᵢ be the eigenvector corresponding to the generalized eigenvalue λᵢ, and form the projection matrix W_LPP = (w₀, w₁, …, wₗ₋₁) from the eigenvectors of the l smallest eigenvalues; each vector wᵢ (i = 0, 1, …, l−1) has 128 dimensions. The projection matrix reduces the 128-dimensional descriptor vector to l dimensions, so the 128-dimensional descriptor is transformed into:

TPD = PD · W_LPP

where TPD is a local descriptor of dimension l, l < 128.
The descriptor matching in step S5 comprises:
calculating the Euclidean distance between two descriptors TPDᵢ and TPDⱼ, and obtaining the accurate matching point pairs with the nearest/second-nearest neighbor ratio test:

D_nearest / D_second-nearest < T

where D_nearest and D_second-nearest denote the nearest-neighbor and second-nearest-neighbor distances measured from the current feature point, and T denotes the matching threshold; a feature point satisfying this inequality forms a matching point pair.
D_nearest and D_second-nearest in step S5 are calculated according to the following formula:

D(TPDᵢ, TPDⱼ) = √( Σₘ₌₁ˡ (TPDᵢ,ₘ − TPDⱼ,ₘ)² )

where TPDᵢ denotes the descriptor of any feature point i after dimensionality reduction, TPDⱼ the descriptor of feature point j after dimensionality reduction, TPDᵢ,ₘ the m-th component of the descriptor of feature point i, TPDⱼ,ₘ the m-th component of the descriptor of feature point j, and l the dimensionality after reduction.
The invention has the following beneficial effects and advantages:
1. The robust blurred image matching method of the invention describes the feature points by means of three scale-invariant concentric circular regions, which enhances the distinguishability of the feature descriptors and improves the correct matching rate.
2. The method reduces the dimensionality of the feature point descriptors with the locality preserving projection technique, which improves the matching efficiency while maintaining the correct matching rate and provides stronger real-time performance.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 shows the structural-scene blurred images used by the method of the present invention;
FIG. 3 shows the effect of different degrees of blur on matching performance for the structural-scene images;
FIG. 4 shows the texture-scene blurred images used by the method of the present invention;
FIG. 5 shows the effect of different degrees of blur on matching performance for the texture-scene images.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Referring to FIG. 1, the specific steps of the robust blurred image matching method of the present invention are as follows:
Step 1: input two original images with different degrees of blur;
Step 2: extract feature points from the two original images using the scale-invariant feature transform (SIFT) algorithm;
Step 3: establish three scale-invariant concentric circular regions around the feature points on each of the two original images and describe the feature points, forming the feature point descriptors of the two images;
Each feature point is described by 16 (4 × 4) seed points; the gradient histogram of the region around each seed point is divided into 8 orientation bins covering 0° to 360°, and the histograms are weighted with a Gaussian window to generate a 128-dimensional feature vector.
The feature point descriptor LSIFT described by the three scale-invariant central regions is defined as:

PD = α₁L₁ + α₂L₂ + α₃L₃

where Lᵢ (i = 1, 2, 3) are the 128-dimensional SIFT descriptors, PD is the weighted 128-dimensional descriptor, and α₁, α₂, α₃ are preset weighting coefficients.
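By way of illustration only, the following Python sketch forms the weighted descriptor PD = α₁L₁ + α₂L₂ + α₃L₃ for one keypoint by recomputing the SIFT descriptor over three concentric regions of increasing size. The helper name three_scale_descriptor, the scale factors, and the weight values are assumptions for the sketch; the patent only requires three scale-invariant concentric regions and preset weights.

```python
import cv2

def three_scale_descriptor(img, kp, scales=(1.0, 1.5, 2.0),
                           weights=(0.5, 0.3, 0.2)):
    """Weighted three-scale descriptor PD = a1*L1 + a2*L2 + a3*L3.
    The scale factors and weights here are illustrative assumptions."""
    sift = cv2.SIFT_create()
    descriptors = []
    for s in scales:
        # same centre and orientation, three concentric region sizes
        k = cv2.KeyPoint(kp.pt[0], kp.pt[1], kp.size * s, kp.angle)
        _, d = sift.compute(img, [k])
        descriptors.append(d[0])  # one 128-dimensional vector L_i
    return sum(w * d for w, d in zip(weights, descriptors))
```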
Step 4: to improve the computational efficiency of the algorithm, the dimensionality of the feature point descriptors is reduced by applying the locality preserving projection (LPP) method;
a. Define the feature point descriptors LSIFT described by the three scale-invariant central regions as X = (x₁, x₂, …, xₘ), where xᵢ denotes the feature point descriptor LSIFT of one of the images, and let yᵢ = wᵀxᵢ denote the one-dimensional description under the transformation vector w. Define the similarity matrix S (Sᵢⱼ = Sⱼᵢ):

Sᵢⱼ = exp(−‖xᵢ − xⱼ‖²/t) if xᵢ and xⱼ are neighboring descriptors (t being a kernel width parameter), and Sᵢⱼ = 0 otherwise.

b. Select a suitable projection as the solution that minimizes the objective function f:

f = Σᵢ,ⱼ (yᵢ − yⱼ)² Sᵢⱼ

where D is the diagonal matrix with Dᵢᵢ = Σⱼ Sᵢⱼ and L = D − S is the Laplacian matrix. The following constraint is imposed:

YᵀDY = wᵀXDXᵀw = 1

c. Since Σᵢ,ⱼ (yᵢ − yⱼ)² Sᵢⱼ = 2wᵀX(D − S)Xᵀw = 2wᵀXLXᵀw, the problem of minimizing the objective function f can be simplified to:

minimize wᵀXLXᵀw subject to wᵀXDXᵀw = 1

d. This can be converted into the generalized eigenvalue problem:

XLXᵀw = λXDXᵀw

where XLXᵀ and XDXᵀ are both symmetric positive semi-definite matrices.

e. Let wᵢ be the eigenvector corresponding to the generalized eigenvalue λᵢ, and form the projection matrix W_LPP = (w₀, w₁, …, wₗ₋₁) from the eigenvectors of the l smallest eigenvalues; each vector wᵢ (i = 0, 1, …, l−1) has 128 dimensions. The projection matrix reduces the 128-dimensional descriptor vector to l dimensions, so the 128-dimensional descriptor is transformed into:

TPD = PD · W_LPP

where TPD is a local descriptor of dimension l, l < 128.
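A minimal numerical sketch of this step is given below. It assumes the common LPP formulation with a heat-kernel similarity restricted to k nearest neighbors; the neighborhood size k, kernel width t, target dimension l, and the helper name lpp_projection are assumptions, since the patent does not fix these values. Descriptors are stacked as rows of X, so the patent's XLXᵀ and XDXᵀ appear as XᵀLX and XᵀDX in the code.

```python
import numpy as np
from scipy.linalg import eigh

def lpp_projection(X, l=32, k=5, t=1.0):
    """Learn a 128 x l LPP projection matrix from descriptors X (n x 128).
    k, t and l are illustrative parameter choices."""
    n = X.shape[0]
    # pairwise squared Euclidean distances between descriptors
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(axis=2)
    # heat-kernel similarity S_ij = exp(-||xi - xj||^2 / t) on k-NN pairs
    S = np.exp(-d2 / t)
    mask = np.zeros((n, n), dtype=bool)
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]   # skip self at column 0
    mask[np.repeat(np.arange(n), k), knn.ravel()] = True
    S *= (mask | mask.T)                       # keep a symmetric k-NN graph
    D = np.diag(S.sum(axis=1))                 # D_ii = sum_j S_ij
    L = D - S                                  # graph Laplacian L = D - S
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-8 * np.eye(X.shape[1])  # regularized for stability
    # eigenvectors of the l smallest generalized eigenvalues form W_LPP
    vals, vecs = eigh(A, B)
    return vecs[:, :l]

# usage: TPD = PD @ W_LPP maps each 128-D descriptor to l dimensions
```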
Step 5: match the dimension-reduced feature point descriptors of the two original images and select accurate matching point pairs from the two images.
Calculate the Euclidean distance between two descriptors TPDᵢ and TPDⱼ, and obtain the accurate matching point pairs with the nearest/second-nearest neighbor ratio test:

D_nearest / D_second-nearest < T

where D_nearest and D_second-nearest denote the nearest-neighbor and second-nearest-neighbor distances measured from the current feature point, and T denotes the matching threshold, where:

D(TPDᵢ, TPDⱼ) = √( Σₘ₌₁ˡ (TPDᵢ,ₘ − TPDⱼ,ₘ)² )

where TPDᵢ denotes the descriptor of any feature point i after dimensionality reduction, TPDⱼ the descriptor of feature point j (j ≠ i) after dimensionality reduction, TPDᵢ,ₘ the m-th component of the descriptor of feature point i, TPDⱼ,ₘ the m-th component of the descriptor of feature point j, and l the dimensionality after reduction.
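A sketch of this matching step over the reduced descriptors might look as follows; the helper name ratio_match and the threshold value T = 0.8 are assumptions, as the patent does not specify T.

```python
import numpy as np

def ratio_match(tpd1, tpd2, T=0.8):
    """Match rows of tpd1 (n1 x l) to rows of tpd2 (n2 x l) with the
    Euclidean nearest/second-nearest neighbor ratio test."""
    matches = []
    for i, d in enumerate(tpd1):
        dist = np.sqrt(np.square(tpd2 - d).sum(axis=1))  # distances to all of tpd2
        j1, j2 = np.argsort(dist)[:2]                    # nearest, second nearest
        if dist[j1] < T * dist[j2]:                      # D_nearest / D_second-nearest < T
            matches.append((i, int(j1)))
    return matches
```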
The effect of the present invention is further described below with reference to the simulation figures.
To verify the effectiveness and correctness of the method, matching simulation experiments were carried out on two groups of blurred images, one of a structural scene and one of a texture scene. All simulation experiments were implemented with Visual Studio 2010 under the Windows XP operating system.
Simulation example 1:
FIG. 2 shows six blurred images of a structural scene obtained under different degrees of blur; the image size is 800 × 600, (a) is the reference image, and (b) to (f) are the images to be matched. FIG. 3(a) shows the number of correct matches for the structural images, with the degree of blur on the abscissa and the number of correct matching points on the ordinate; FIG. 3(b) shows the correct matching rate for the structural images, with the degree of blur on the abscissa and the correct matching rate on the ordinate. As can be seen from FIGS. 3(a) and 3(b), the number of correct matching points obtained by the method of the present invention is significantly higher than that of the SIFT method under all blur variations.
Simulation example 2:
FIG. 4 shows six blurred images of a texture scene obtained under different degrees of blur; the image size is 800 × 600, (a) is the reference image, and (b) to (f) are the images to be matched. FIG. 5(a) shows the number of correct matches for the texture images, with the degree of blur on the abscissa and the number of correct matching points on the ordinate; FIG. 5(b) shows the correct matching rate for the texture images, with the degree of blur on the abscissa and the correct matching rate on the ordinate. As can be seen from FIGS. 5(a) and 5(b), the number of correct matching points obtained by the method of the present invention is significantly higher than that of the SIFT method under all blur variations.
The invention can accurately match images with blur variations, obtains more matching point pairs, and achieves a higher correct matching rate.

Claims (6)

1. A robust blurred image matching method is characterized by comprising the following steps:
S1: inputting two original images with different degrees of blur;
S2: extracting feature points from the two original images using the scale-invariant feature transform (SIFT) algorithm;
S3: establishing three scale-invariant concentric circular regions around the feature points on each of the two original images and describing the feature points, forming the feature point descriptors of the two images;
S4: reducing the dimensionality of the feature point descriptors with the locality preserving projection (LPP) method, thereby improving the computational efficiency of the algorithm;
S5: matching the dimension-reduced feature point descriptors of the two original images and selecting accurate matching point pairs from the two images.
2. The robust blurred image matching method as claimed in claim 1, wherein in step S3 the feature points are described by specifying the orientation information of the descriptor.
3. The robust blurred image matching method as claimed in claim 2, wherein the orientation information of the descriptor is specified as follows:
each feature point is described by 16 (4 × 4) seed points; the gradient histogram of the region around each seed point is divided into 8 orientation bins covering 0° to 360°, and the histograms are weighted with a Gaussian window to generate a 128-dimensional feature vector;
the feature point descriptor LSIFT described by the three scale-invariant central regions is defined as:

PD = α₁L₁ + α₂L₂ + α₃L₃

where Lᵢ (i = 1, 2, 3) are the 128-dimensional SIFT descriptors, PD is the weighted 128-dimensional descriptor, and α₁, α₂, α₃ are preset weighting coefficients.
4. The robust blurred image matching method as claimed in claim 1, wherein reducing the feature descriptor dimensionality with the locality preserving projection (LPP) method in step S4 comprises:
a. defining the feature point descriptors LSIFT described by the three scale-invariant central regions as X = (x₁, x₂, …, xₘ), where xᵢ denotes the feature point descriptor LSIFT of one of the images, letting yᵢ = wᵀxᵢ denote the one-dimensional description under the transformation vector w, and defining the similarity matrix S (Sᵢⱼ = Sⱼᵢ):

Sᵢⱼ = exp(−‖xᵢ − xⱼ‖²/t) if xᵢ and xⱼ are neighboring descriptors (t being a kernel width parameter), and Sᵢⱼ = 0 otherwise;

b. selecting a suitable projection as the solution that minimizes the objective function f:

f = Σᵢ,ⱼ (yᵢ − yⱼ)² Sᵢⱼ

where D is the diagonal matrix with Dᵢᵢ = Σⱼ Sᵢⱼ and L = D − S is the Laplacian matrix, subject to the constraint:

YᵀDY = wᵀXDXᵀw = 1;

c. simplifying the problem of minimizing the objective function f to:

minimize wᵀXLXᵀw subject to wᵀXDXᵀw = 1;

d. converting this into the generalized eigenvalue problem:

XLXᵀw = λXDXᵀw

where XLXᵀ and XDXᵀ are both symmetric positive semi-definite matrices;

e. letting wᵢ be the eigenvector corresponding to the generalized eigenvalue λᵢ and forming the projection matrix W_LPP = (w₀, w₁, …, wₗ₋₁), each vector wᵢ (i = 0, 1, …, l−1) having 128 dimensions, wherein the projection matrix reduces the 128-dimensional descriptor vector to l dimensions, so that the 128-dimensional descriptor is transformed into:

TPD = PD · W_LPP

where TPD is a local descriptor of dimension l, l < 128.
5. The robust blurred image matching method as claimed in claim 1, wherein the descriptor matching in step S5 comprises:
calculating the Euclidean distance between two descriptors TPDᵢ and TPDⱼ, and obtaining the accurate matching point pairs with the nearest/second-nearest neighbor ratio test:

D_nearest / D_second-nearest < T

where D_nearest and D_second-nearest denote the nearest-neighbor and second-nearest-neighbor distances measured from the current feature point, and T denotes the matching threshold; a feature point satisfying this inequality forms a matching point pair.
6. The robust blurred image matching method as claimed in claim 5, wherein D_nearest and D_second-nearest in step S5 are calculated according to the following formula:

D(TPDᵢ, TPDⱼ) = √( Σₘ₌₁ˡ (TPDᵢ,ₘ − TPDⱼ,ₘ)² )

where TPDᵢ denotes the descriptor of any feature point i after dimensionality reduction, TPDⱼ the descriptor of feature point j (j ≠ i) after dimensionality reduction, TPDᵢ,ₘ the m-th component of the descriptor of feature point i, TPDⱼ,ₘ the m-th component of the descriptor of feature point j, and l the dimensionality after reduction.
CN201910898199.7A 2019-09-23 2019-09-23 Robust fuzzy image matching method Active CN112633304B (en)

Priority Applications (1)

Application Number: CN201910898199.7A
Priority Date: 2019-09-23
Filing Date: 2019-09-23
Title: Robust fuzzy image matching method

Applications Claiming Priority (1)

Application Number: CN201910898199.7A
Priority Date: 2019-09-23
Filing Date: 2019-09-23
Title: Robust fuzzy image matching method

Publications (2)

Publication Number Publication Date
CN112633304A true CN112633304A (en) 2021-04-09
CN112633304B CN112633304B (en) 2023-07-25

Family

ID=75282554

Family Applications (1)

Application Number: CN201910898199.7A
Title: Robust fuzzy image matching method
Status: Active (granted as CN112633304B)

Country Status (1)

Country Link
CN (1) CN112633304B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400384A (en) * 2013-07-22 2013-11-20 西安电子科技大学 Large viewing angle image matching method capable of combining region matching and point matching
WO2015035462A1 (en) * 2013-09-12 2015-03-19 Reservoir Rock Technologies Pvt Ltd Point feature based 2d-3d registration
CN105654421A (en) * 2015-12-21 2016-06-08 西安电子科技大学 Projection transform image matching method based on transform invariant low-rank texture
WO2019042232A1 (en) * 2017-08-31 2019-03-07 西南交通大学 Fast and robust multimodal remote sensing image matching method and system
CN110097093A (en) * 2019-04-15 2019-08-06 河海大学 A kind of heterologous accurate matching of image method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
赵小强 (Zhao Xiaoqiang); 岳宗达 (Yue Zongda): "Fast matching algorithm based on local binary patterns and graph transformation", Acta Electronica Sinica (电子学报), no. 09 *
陈丽芳 (Chen Lifang); 刘一鸣 (Liu Yiming); 刘渊 (Liu Yuan): "An image matching algorithm combining SIFT and corresponding-scale LTP features", Computer Engineering & Science (计算机工程与科学), no. 03 *

Also Published As

Publication number Publication date
CN112633304B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN110807473B (en) Target detection method, device and computer storage medium
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
CN111340077B (en) Attention mechanism-based disparity map acquisition method and device
WO2011069023A2 (en) Fast subspace projection of descriptor patches for image recognition
CN113822246B (en) Vehicle weight identification method based on global reference attention mechanism
CN111612024B (en) Feature extraction method, device, electronic equipment and computer readable storage medium
CN110738222B (en) Image matching method and device, computer equipment and storage medium
CN110942471A (en) Long-term target tracking method based on space-time constraint
Badr et al. A robust copy-move forgery detection in digital image forensics using SURF
CN111369605A (en) Infrared and visible light image registration method and system based on edge features
CN104537381B (en) A kind of fuzzy image recognition method based on fuzzy invariant features
CN113313002A (en) Multi-mode remote sensing image feature extraction method based on neural network
CN110516731B (en) Visual odometer feature point detection method and system based on deep learning
CN116664892A (en) Multi-temporal remote sensing image registration method based on cross attention and deformable convolution
Lee et al. Learning rotation-equivariant features for visual correspondence
CN111126296A (en) Fruit positioning method and device
CN114049491A (en) Fingerprint segmentation model training method, fingerprint segmentation device, fingerprint segmentation equipment and fingerprint segmentation medium
CN109766924A (en) Image detecting method based on image information entropy Yu adaptive threshold DAISY characteristic point
CN111582142B (en) Image matching method and device
CN111127407B (en) Fourier transform-based style migration forged image detection device and method
CN111931757A (en) Finger vein quick sorting method and device based on MDLBP block histogram and PCA dimension reduction
CN111612063A (en) Image matching method, device and equipment and computer readable storage medium
CN111488811A (en) Face recognition method and device, terminal equipment and computer readable medium
Zou et al. DiffCR: A Fast Conditional Diffusion Framework for Cloud Removal From Optical Satellite Images
CN110969128A (en) Method for detecting infrared ship under sea surface background based on multi-feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant