CN115620030B - Image matching method, device, equipment and medium - Google Patents

Image matching method, device, equipment and medium

Info

Publication number
CN115620030B
CN115620030B
Authority
CN
China
Prior art keywords
image
matching
light image
visible light
parameters
Prior art date
Legal status
Active
Application number
CN202211553079.1A
Other languages
Chinese (zh)
Other versions
CN115620030A (en)
Inventor
张天文
向巧罗
历小润
郭浩
陈璐
芦清
杨淼
Current Assignee
Chint Group R & D Center Shanghai Co ltd
Zhejiang Zhengtai Zhiwei Energy Service Co ltd
Zhejiang University ZJU
Original Assignee
Chint Group R & D Center Shanghai Co ltd
Zhejiang Zhengtai Zhiwei Energy Service Co ltd
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Chint Group R & D Center Shanghai Co ltd, Zhejiang Zhengtai Zhiwei Energy Service Co ltd, Zhejiang University ZJU filed Critical Chint Group R & D Center Shanghai Co ltd
Priority to CN202211553079.1A priority Critical patent/CN115620030B/en
Publication of CN115620030A publication Critical patent/CN115620030A/en
Application granted granted Critical
Publication of CN115620030B publication Critical patent/CN115620030B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761: Proximity, similarity or dissimilarity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image matching method, device, equipment and medium, relating to the technical field of image matching and comprising the following steps: determining initial parameters based on the image imaging parameters of a visible light image and an infrared light image, and constructing matching parameters from the initial parameters; constructing an image optimization function from the visual fidelity and mutual information of the visible light image and the infrared light image; performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters; and performing an affine transformation on the infrared light image with the target matching parameters to output a target infrared light matching image. Because the initial parameters are obtained from the image imaging parameters and imaging characteristics, the difficulty of coarse matching caused by insufficient matching feature points between infrared and visible light images is overcome; and measuring the similarity of the matching regions with the mutual information of the image matching regions combined with the visual fidelity of the fused image makes the subjective and objective evaluations of the matching result more consistent and the matching accuracy higher.

Description

Image matching method, device, equipment and medium
Technical Field
The present invention relates to the field of image matching technologies, and in particular, to an image matching method, apparatus, device, and medium.
Background
In recent years, with the development of unmanned aerial vehicle (UAV) technology, many industries such as power grids, railways and wind power use UAV imaging for inspection operations. Under the influence of factors such as season, terrain and weather conditions, a single sensor in a complex environment can provide only partial or inaccurate information, so inspection UAVs carry multiple sensors. A UAV can carry visible light, multispectral, hyperspectral, thermal infrared and lidar sensors, each with different imaging characteristics; cooperative processing of the resulting heterogeneous images can produce more accurate, more complete and more reliable descriptions and judgments, improving the application effect. For example, a visible light image has higher spatial resolution and rich background information but is easily affected by illumination or weather conditions, while an infrared sensor is less affected by illumination or weather and its images are relatively stable but often lack sufficient scene background detail; fusing infrared and low-light visible images can generate a composite image better suited to human observation or computer vision tasks. Accurate matching of the heterogeneous images is the basis of their cooperative processing.
Currently, the common methods for infrared and visible image matching are mainly feature-based matching and coarse-to-fine matching that combines features and regions. Because the gray-level difference between infrared and visible light images is large, it is difficult to find enough high-precision feature matching pairs, so directly using a feature-based matching method yields insufficient matching accuracy. The coarse-to-fine method first achieves coarse matching with feature matching and then refines the registration parameters with a gray-level-based method. Because optimization algorithms are generally sensitive to initial values, the matching result of such a method is strongly affected by the initial feature matching result; in addition, the similarity measure of the image matching regions is one of the key points of such methods. The similarity measures commonly used in existing image matching consider only a single kind of information and may disagree with subjective human evaluation. Although visual fidelity is an image quality evaluation index that agrees with subjective human evaluation, it has so far been used only for fused-image quality evaluation, not for measuring image matching results.
In summary, how to make the subjective and objective evaluations of the matching result of an infrared light image and a visible light image more consistent, with higher matching accuracy, is a technical problem to be solved in the field.
Disclosure of Invention
In view of the above, an object of the present invention is to provide an image matching method, apparatus, device and medium that can make the subjective and objective evaluations of the matching result of an infrared light image and a visible light image more consistent and achieve higher matching accuracy. The specific scheme is as follows:
in a first aspect, the present application discloses an image matching method, comprising:
determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters;
constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image;
performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters;
and carrying out affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image.
Optionally, the constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image includes:
and constructing an image optimization function by using the visual fidelity of the visible light image and the infrared light image and any mutual information of normalized mutual information, region mutual information or rotation-invariant region mutual information contained in the mutual information.
Optionally, before constructing the image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image, the method further includes:
and determining a rectangular overlapping area between the visible light image and the infrared light image through an image scaling mode and an offset to obtain a first overlapping area of the visible light image and a second overlapping area of the infrared light image.
Optionally, the determining a rectangular overlapping area between the visible light image and the infrared light image through an image scaling manner and an offset includes:
center position information of a rectangular overlapping area is determined based on image information of the visible light image and the infrared light image, and the rectangular overlapping area between the visible light image and the infrared light image is determined based on the center position information and an offset.
Optionally, the constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image includes:
determining visual fidelity of the first overlapping region and the second overlapping region based on sub-graph information corresponding to the first overlapping region and the second overlapping region;
and constructing an image optimization function by using the mutual information of the first overlapping area and the second overlapping area and the visual fidelity.
Optionally, the determining the visual fidelity of the first overlapping area and the second overlapping area based on the sub-image information corresponding to the first overlapping area and the second overlapping area includes:
respectively performing wavelet transformation on a first sub-image corresponding to the first overlapping area and a second sub-image corresponding to the second overlapping area to obtain a preset number of coefficient blocks and extract wavelet coefficient vectors;
calculating a covariance matrix and a likelihood estimation of the wavelet coefficient vector;
constructing a respective sample vector based on window samples in the middle of the coefficient block to determine a respective variance;
calculating a visual fidelity of the first overlap region and the second overlap region based on the likelihood estimate, the variance, and a visual noise variance.
Optionally, the performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters includes:
and performing iterative computation on the image optimization function by utilizing any one of a particle swarm optimization algorithm, a quantum particle swarm optimization algorithm or an ant colony optimization algorithm based on the matching parameters to determine target matching parameters.
In a second aspect, the present application discloses an image matching apparatus, comprising:
the parameter determining module is used for determining initial parameters based on image imaging parameters of the visible light image and the infrared light image and constructing multiple groups of matching parameters by using the initial parameters;
the function construction module is used for constructing an image optimization function by utilizing the visual fidelity and mutual information of the visible light image and the infrared light image;
the target parameter determining module is used for performing iterative computation on the image optimization function based on the multiple groups of matching parameters to determine target matching parameters;
and the image matching module is used for carrying out affine transformation on the infrared light image by using the target matching parameters and outputting a target infrared light matching image.
In a third aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the image matching method disclosed in the foregoing.
In a fourth aspect, the present application discloses a computer readable storage medium for storing a computer program; wherein the computer program realizes the steps of the image matching method disclosed in the foregoing when being executed by a processor.
Thus, the present application discloses an image matching method comprising: determining initial parameters based on the image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters from the initial parameters; constructing an image optimization function from the visual fidelity and mutual information of the visible light image and the infrared light image; performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters; and performing an affine transformation on the infrared light image with the target matching parameters to output a target infrared light matching image. In this way, the initial parameters are obtained from the image imaging parameters and imaging characteristics, overcoming the difficulty of coarse matching caused by insufficient matching feature points between infrared and visible light images; and the similarity of the matching regions is measured with the mutual information of the image matching regions combined with the visual fidelity of the fused image, making the subjective and objective evaluations of the matching result more consistent and the matching accuracy higher.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a flow chart of an image matching method disclosed in the present application;
FIG. 2 is a fused image under initial parameters as disclosed herein;
FIG. 3 is a fused image under optimal parameters according to the present disclosure;
FIG. 4 is a flow chart of a specific image matching method disclosed in the present application;
FIG. 5 is a schematic diagram of an image matching apparatus according to the present disclosure;
fig. 6 is a block diagram of an electronic device disclosed in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In recent years, with the development of unmanned aerial vehicle (UAV) technology, many industries such as power grids, railways and wind power use UAV imaging for inspection operations. Under the influence of factors such as season, terrain and weather conditions, a single sensor in a complex environment can provide only partial or inaccurate information, so inspection UAVs carry multiple sensors. A UAV can carry visible light, multispectral, hyperspectral, thermal infrared and lidar sensors, each with different imaging characteristics; cooperative processing of the resulting heterogeneous images can produce more accurate, more complete and more reliable descriptions and judgments, improving the application effect. For example, a visible light image has higher spatial resolution and rich background information but is easily affected by illumination or weather conditions, while an infrared sensor is less affected by illumination or weather and its images are relatively stable but often lack sufficient scene background detail; fusing infrared and low-light visible images can generate a composite image better suited to human observation or computer vision tasks. Accurate matching of the heterogeneous images is the basis of their cooperative processing.
Currently, the common methods for infrared and visible image matching are mainly feature-based matching and coarse-to-fine matching that combines features and regions. Because the gray-level difference between infrared and visible light images is large, it is difficult to find enough high-precision feature matching pairs, so directly using a feature-based matching method yields insufficient matching accuracy. The coarse-to-fine method first achieves coarse matching with feature matching and then refines the registration parameters with a gray-level-based method. Because optimization algorithms are generally sensitive to initial values, the matching result of such a method is strongly affected by the initial feature matching result; in addition, the similarity measure of the image matching regions is one of the key points of such methods. The similarity measures commonly used in existing image matching consider only a single kind of information and may disagree with subjective human evaluation. Although visual fidelity is an image quality evaluation index that agrees with subjective human evaluation, it has so far been used only for fused-image quality evaluation, not for measuring image matching results.
Therefore, according to the image matching scheme disclosed by the application, subjective and objective evaluation of the matching result of the infrared light image and the visible light image can be more consistent, and the matching precision is higher.
Referring to fig. 1, an embodiment of the present invention discloses an image matching method, including:
step S11: determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters.
In this embodiment, the visible light image $I_r$ is taken as the reference image and the infrared image $I_s$ as the image to be matched, and the initial matching parameters $a_1^{(0)}, b_1^{(0)}, c_1^{(0)}, a_2^{(0)}, b_2^{(0)}, c_2^{(0)}$ are obtained from the image imaging parameters and imaging characteristics. Specifically, the scale ratio $s$ of the visible light image to the infrared image is first calculated from the camera focal lengths and the unit-pixel physical lengths of the two images:

$$s = \frac{f_r / d_r}{f_s / d_s}$$

where $f_r$ and $f_s$ denote the camera focal lengths of the visible light image and the infrared image, respectively, and $d_r$ and $d_s$ denote the unit-pixel physical lengths of the visible light image and the infrared image, respectively, both calculated from the camera parameters.
After the scale ratio of the visible light image and the infrared light image is obtained, the initial parameters are calculated from the scale ratio and the lengths and widths of the two images:

$$a_1^{(0)} = b_2^{(0)} = s,\qquad b_1^{(0)} = a_2^{(0)} = 0,\qquad c_1^{(0)} = \frac{l_r - s\,l_s}{2},\qquad c_2^{(0)} = \frac{w_r - s\,w_s}{2}$$

where $l_r$ is the length of the visible light image, $w_r$ is the width of the visible light image, $l_s$ is the length of the infrared light image, and $w_s$ is the width of the infrared light image. In this way, determining the initial parameters from the image imaging parameters and image imaging characteristics overcomes the difficulty of coarse matching caused by insufficient matching feature points between the infrared light image and the visible light image.
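As a minimal illustration, this initialization can be written in a few lines of Python (the function and variable names are ours, not the patent's; focal length and pixel pitch only need consistent units):

```python
import numpy as np

def initial_affine_params(f_vis, pitch_vis, f_ir, pitch_ir,
                          l_vis, w_vis, l_ir, w_ir):
    """Initial affine parameters (a1, b1, c1, a2, b2, c2) from the
    imaging parameters: scale by s and center the infrared image
    inside the visible image (coaxial-imaging assumption)."""
    s = (f_vis / pitch_vis) / (f_ir / pitch_ir)  # scale ratio
    c1 = (l_vis - s * l_ir) / 2.0                # horizontal centering
    c2 = (w_vis - s * w_ir) / 2.0                # vertical centering
    return np.array([s, 0.0, c1, 0.0, s, c2])

# With the acquisition parameters of Table 1 below this reproduces the
# initial parameters of Table 2: s = 3.7673, c1 = 794.46, c2 = 535.57.
params0 = initial_affine_params(8.0, 1.9e-3, 19.0, 17e-3,
                                4000, 3000, 640, 512)
```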
In this embodiment, the initial parameters are taken as the initial particle, and n sets of matching parameters $a_{1i}^{(0)}, b_{1i}^{(0)}, c_{1i}^{(0)}, a_{2i}^{(0)}, b_{2i}^{(0)}, c_{2i}^{(0)}$, $i = 1, \cdots, n$, are constructed within a certain range of the initial particle by a random perturbation method; each set of matching parameters is taken as one member of the population, and the iteration count is initialized to t = 0. The random perturbation method expands the matching parameter data by letting each parameter float up and down within a range, increasing the data volume to improve the robustness of the algorithm; the specific range is set by the user according to the actual situation.
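A sketch of this population construction; the relative perturbation range `rel_range` and the random seed are illustrative choices, since the patent leaves the range to the user:

```python
rng = np.random.default_rng(0)

def init_population(params0, n=50, rel_range=0.05):
    """n candidate parameter sets, each a random perturbation of the
    initial parameters within a user-chosen range."""
    spread = rel_range * (np.abs(params0) + 1.0)  # +1 keeps a nonzero spread for b1, a2
    return params0 + rng.uniform(-spread, spread, size=(n, params0.size))

population = init_population(params0, n=50)
```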
Step S12: and constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image.
In this embodiment, an image optimization function is constructed from the visual fidelity of the visible light image and the infrared light image together with any one of the normalized mutual information, regional mutual information or rotation-invariant regional mutual information contained in the mutual information. It can be understood that visual fidelity is an image quality evaluation parameter used in methods for evaluating image quality, while the similarity measures commonly used in image matching include various kinds of mutual information and the structural similarity of the matched images; the mutual information specifically includes normalized mutual information, regional mutual information and rotation-invariant regional mutual information, any one of which can be selected as the mutual information term in this embodiment. The similarity measures commonly used in image matching may disagree with subjective human evaluation, whereas visual fidelity is an image quality evaluation index that agrees with subjective human evaluation.
Step S13: and performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters.
In this embodiment, based on the matching parameters, the image optimization function is iteratively calculated using any one of the particle swarm optimization (PSO) algorithm, the quantum-behaved particle swarm optimization (QPSO) algorithm or the ant colony optimization algorithm, so as to determine the target matching parameters. It can be understood that the image optimization function constructed from the mutual information combined with the visual fidelity of the fused image is iteratively solved with any one of the PSO algorithm, the QPSO algorithm or the ant colony algorithm to obtain the optimal matching parameters, i.e., the target matching parameters. When the target matching parameters are sought with the QPSO algorithm, the maximum iteration number is set to MaxIte = 100 with a given error of 0.0001. Let $x_i(t)$ be the current position of the $i$-th particle, $p_i(t)$ the current optimal position of the $i$-th particle, and $g(t)$ the global optimal position of the particle swarm; initialize $p_i(0) = x_i(0)$. The steps of determining the initial global optimal position, updating the position of each particle, updating the current optimal position of the $i$-th particle and updating the optimal position of the population are executed in turn, wherein determining the initial global optimal position may specifically include: directly taking the initial particles as target matching parameters, i.e., using the initial parameters as matching parameters to correct the infrared light image and obtain the corrected infrared light image. The mean best position of the population is then calculated from the optimal positions of the particles as

$$mbest(t) = \frac{1}{n}\sum_{i=1}^{n} p_i(t),$$

a random attractor position is calculated as

$$p_{id} = \varphi\, p_{id}(t) + (1 - \varphi)\, g_d(t), \qquad \varphi \sim U(0, 1),$$

and the particle position is updated as

$$x_{id}(t+1) = p_{id} \pm \alpha\, \bigl|mbest_d(t) - x_{id}(t)\bigr|\, \ln(1/u), \qquad u \sim U(0, 1),$$

where $\alpha$ is the contraction-expansion coefficient and $d$ indexes the parameter dimensions. The iteration count is set to $t = t + 1$, and the steps of updating the position of each particle, updating the current optimal position of the $i$-th particle and updating the optimal position of the population are repeated until the iteration count exceeds the maximum, or the difference between the matching parameter values of two consecutive optimal matches is smaller than the given error; the global optimal position of the output population is then the optimal matching parameters, i.e., the target matching parameters.
Step S14: and carrying out affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image.
In this embodiment, the target matching parameters are used to apply an affine transformation to $I_s$, outputting the corrected image to be matched $\hat{I}_s$ and the matched fusion result $I_f$; the target infrared light matching image, i.e., the matched fused image result, is thereby obtained.
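In code, this output step is a single affine warp; a sketch with OpenCV, assuming single-channel images of the same dtype (the 50/50 weighting is an illustrative fusion rule, not the patent's prescribed one):

```python
import cv2  # opencv-python

def correct_and_fuse(ir, vis, p):
    """Warp the infrared image with the matching parameters and fuse it
    with the visible image by simple averaging."""
    a1, b1, c1, a2, b2, c2 = p
    M = np.float32([[a1, b1, c1], [a2, b2, c2]])
    ir_warped = cv2.warpAffine(ir, M, (vis.shape[1], vis.shape[0]))
    fused = cv2.addWeighted(vis, 0.5, ir_warped, 0.5, 0)
    return ir_warped, fused
```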
A specific embodiment is described below using real images as an example. Aerial photographs of photovoltaic modules in a rooftop scene were acquired with a DJI Zenmuse XT2 dual visible/thermal camera and subjected to geometric correction and barrel-distortion correction; the relevant image acquisition parameters are shown in Table 1:
TABLE 1

Item                   Visible light image   Infrared image
Image resolution WxH   4000x3000             640x512
Focal length f         8 mm                  19 mm
Pixel pitch            1.9 μm                17 μm
The initial parameters obtained from the image imaging parameters and image imaging characteristics, and the matching parameters obtained by QPSO iterative optimization, are shown in Table 2.

TABLE 2

Parameter type         a1       b1        c1         a2        b2       c2         RMSE
Real parameters        3.6259   0.0110    839.2111   -0.0234   3.6326   563.4898   -
Initial parameters     3.7673   0         794.4598   0         3.7673   535.5678   21.5342
Optimized parameters   3.6324   -0.0058   837.6085   0.0273    3.6350   563.5631   0.6553
The result of the fusion experiment using the calculated initial parameters and the visible light image is shown in FIG. 2; it can be seen that the modules have a large overlap offset and fail to correspond accurately. Following the step of constructing multiple sets of matching parameters, the initial particle position is set to (3.7673, 0, 794.4598, 0, 3.7673, 535.5678), the iteration number to 100, the given error to 0.0001 and the number of particles in the swarm to 50; the initial particle swarm is constructed by the random perturbation algorithm, and the matching parameters obtained by QPSO iterative optimization are shown in Table 2. The real registration parameters were obtained by manually selecting 20 pairs of feature points with errors smaller than 0.5 pixel in ENVI and computing the transform; the root mean square error (RMSE) of the optimized parameters with respect to the real parameters is significantly reduced compared with the initial parameters, which verifies the effectiveness of the proposed fine matching method in optimizing the matching point positions. The fusion result obtained with the optimized registration parameters is shown in FIG. 3; compared with the initial-parameter fusion of FIG. 2, the overlapping parts of the image matched by this method join smoothly, visually verifying the high precision of the method.
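The RMSE column of Table 2 can be reproduced in spirit by comparing the two affine mappings on the selected control points; the patent does not spell out the exact formula, so the root-mean-square point distance below is an assumption:

```python
def affine_rmse(p_est, p_true, pts):
    """RMSE between two affine mappings on control points (pts: N x 2)."""
    def apply(p, xy):
        a1, b1, c1, a2, b2, c2 = p
        x, y = xy[:, 0], xy[:, 1]
        return np.stack([a1 * x + b1 * y + c1,
                         a2 * x + b2 * y + c2], axis=1)
    d = apply(p_est, pts) - apply(p_true, pts)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))
```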
Thus, the application discloses an image matching method comprising: determining initial parameters based on the image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters from the initial parameters; constructing an image optimization function from the visual fidelity and mutual information of the visible light image and the infrared light image; performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters; and performing an affine transformation on the infrared light image with the target matching parameters to output a target infrared light matching image. In this way, the initial parameters are obtained from the image imaging parameters and imaging characteristics, overcoming the difficulty of coarse matching caused by insufficient matching feature points between infrared and visible light images; and the similarity of the matching regions is measured with the mutual information of the image matching regions combined with the visual fidelity of the fused image, making the subjective and objective evaluations of the matching result more consistent and the matching accuracy higher.
Referring to fig. 4, the embodiment of the present invention discloses a specific image matching method, and compared with the previous embodiment, the present embodiment further describes and optimizes the technical solution. Specifically, the method comprises the following steps:
step S21: determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters.
Step S22: and determining a rectangular overlapping area between the visible light image and the infrared light image through an image scaling mode and an offset to obtain a first overlapping area of the visible light image and a second overlapping area of the infrared light image.
In this embodiment, the center position information of the rectangular overlapping area is determined based on the image information of the visible light image and the infrared light image, and the rectangular overlapping area between the visible light image and the infrared light image is determined based on the center position information and the offset. It can be understood that the upper-left corner coordinates $(x_L, y_L)$ and lower-right corner coordinates $(x_R, y_R)$ of the rectangular overlapping area are first determined from the length and width of the visible light image, the length and width of the infrared light image, and the offset:

$$x_L = \frac{l_r - s\,l_s}{2} - p, \qquad y_L = \frac{w_r - s\,w_s}{2} - p,$$
$$x_R = \frac{l_r + s\,l_s}{2} + p, \qquad y_R = \frac{w_r + s\,w_s}{2} + p,$$

where $l_r$ is the length of the visible light image, $l_s$ is the length of the infrared light image, $w_r$ is the width of the visible light image, $w_s$ is the width of the infrared light image, and $p$ is the offset. Then the center position information of the overlapping area is determined from the upper-left and lower-right corner coordinates, the range of the overlapping area is determined based on the center position information and the offset, and the first overlapping area and the second overlapping area are determined from this range in the visible light image and the infrared light image, respectively.
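A sketch of the overlap computation, generalized (our assumption) from the centered initial estimate to arbitrary matching parameters by clipping the bounding box of the warped infrared corners to the visible frame and expanding it by the offset:

```python
def overlap_rect(p, l_vis, w_vis, l_ir, w_ir, offset=0):
    """Top-left and bottom-right corners of the rectangular overlap,
    expanded by `offset` pixels and clipped to the visible image."""
    a1, b1, c1, a2, b2, c2 = p
    corners = np.array([[0, 0], [l_ir, 0], [0, w_ir], [l_ir, w_ir]], float)
    x = a1 * corners[:, 0] + b1 * corners[:, 1] + c1
    y = a2 * corners[:, 0] + b2 * corners[:, 1] + c2
    xL = max(0, int(np.floor(x.min())) - offset)
    yL = max(0, int(np.floor(y.min())) - offset)
    xR = min(l_vis, int(np.ceil(x.max())) + offset)
    yR = min(w_vis, int(np.ceil(y.max())) + offset)
    return xL, yL, xR, yR
```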
Step S23: determining visual fidelity of the first overlapping region and the second overlapping region based on sub-image information corresponding to the first overlapping region and the second overlapping region; and constructing an image optimization function by using mutual information of the first overlapping area and the second overlapping area and the visual fidelity.
In this embodiment, wavelet transformation is performed on the first sub-image corresponding to the first overlapping region and the second sub-image corresponding to the second overlapping region, respectively, to obtain a preset number of coefficient blocks and extract wavelet coefficient vectors; the covariance matrix and the likelihood estimate of the wavelet coefficient vectors are calculated; sample vectors are constructed based on window samples in the middle of the coefficient blocks to determine the corresponding variances; and the visual fidelity of the first overlapping region and the second overlapping region is calculated based on the likelihood estimate, the variances and the visual noise variance. It can be understood that s-level wavelet transformation is performed on the two sub-images corresponding to the overlapping regions of the visible light image and the matching fused image, each wavelet sub-band is divided into N non-overlapping coefficient blocks, and the wavelet coefficient vector sets $c_l = \{c_{l1}, c_{l2}, \cdots, c_{lN}\}$ and $d_l = \{d_{l1}, d_{l2}, \cdots, d_{lN}\}$, $l = 1, 2, \cdots, s$, are extracted. The covariance matrix

$$C_U = \frac{1}{N}\sum_{j=1}^{N} c_{lj}\, c_{lj}^{T}$$

is computed. Assuming $c_{lj}$ is a random vector in a Gaussian mixture model, its likelihood is estimated as

$$\hat{s}_{lj}^{2} = \frac{c_{lj}^{T}\, C_U^{-1}\, c_{lj}}{M_l},$$

where $M_l$ is the dimension of the wavelet coefficient vector $c_{lj}$. Let $v$ denote independent, stationary zero-mean white Gaussian noise with variance $\sigma_v^2$, based on the distortion model

$$d_{lj} = g_{lj}\, c_{lj} + v.$$

The B x B window samples in the middle of the $j$-th coefficient block of the two sub-images are recorded as the vectors C and D, respectively, and the fusion gain scalar $g_{lj}$ and the variance $\sigma_{v,lj}^2$ are estimated as

$$g_{lj} = \frac{\operatorname{Cov}(C, D)}{\operatorname{Var}(C)}, \qquad \sigma_{v,lj}^{2} = \operatorname{Var}(D)\,\bigl(1 - \rho_{CD}^{2}\bigr),$$

where $\rho_{CD}$ denotes the correlation coefficient of C and D.

The visual fidelity of the overlapping region of the visible light image and the matching fused image is then calculated as

$$VIF = \frac{\displaystyle\sum_{l=1}^{s}\sum_{j=1}^{N}\sum_{k=1}^{M_l} \log_2\!\left(1 + \frac{g_{lj}^{2}\,\hat{s}_{lj}^{2}\,\lambda_k}{\sigma_{v,lj}^{2} + \sigma_n^{2}}\right)}{\displaystyle\sum_{l=1}^{s}\sum_{j=1}^{N}\sum_{k=1}^{M_l} \log_2\!\left(1 + \frac{\hat{s}_{lj}^{2}\,\lambda_k}{\sigma_n^{2}}\right)},$$

where $\lambda_k$ are the eigenvalues of the covariance matrix $C_U$ and $\sigma_n^2$ denotes the visual noise variance, which may take the value 0.1; its value has little influence on the result.
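A runnable sketch of this computation using PyWavelets, with a scalar per-block simplification of the vector-GSM formulas above (the block variance stands in for the eigenvalue sum, so the code tracks the structure rather than the exact form):

```python
import pywt  # PyWavelets, assumed available

def vif(ref, dist, levels=3, block=3, sigma_n2=0.1):
    """Simplified visual fidelity between two (overlap) sub-images."""
    cr_all = pywt.wavedec2(np.asarray(ref, float), 'db2', level=levels)
    cd_all = pywt.wavedec2(np.asarray(dist, float), 'db2', level=levels)
    num = den = 0.0
    # skip the approximation band cr_all[0]; iterate over detail bands
    for bands_r, bands_d in zip(cr_all[1:], cd_all[1:]):
        for cr, cd in zip(bands_r, bands_d):
            h, w = cr.shape
            for i in range(0, h - block + 1, block):
                for j in range(0, w - block + 1, block):
                    C = cr[i:i + block, j:j + block].ravel()
                    D = cd[i:i + block, j:j + block].ravel()
                    var_c = C.var(ddof=1)
                    cov = float(np.cov(C, D)[0, 1])
                    g = cov / (var_c + 1e-10)                # distortion gain
                    sv2 = max(D.var(ddof=1) - g * cov, 1e-10)  # noise variance
                    num += np.log2(1.0 + g * g * var_c / (sv2 + sigma_n2))
                    den += np.log2(1.0 + var_c / sigma_n2)
    return num / (den + 1e-10)
```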
The image optimization function is constructed from the mutual information of the first overlapping area and the second overlapping area together with the visual fidelity. It can be understood that after the visual fidelity metric is obtained, the image optimization function is constructed from the mutual information and the visual fidelity, defined as

$$F_i^{(t)} = MI\!\left(I_r,\ \hat{I}_s^{(i,t)}\right) + VIF\!\left(I_r,\ I_f^{(i,t)}\right),$$

where the visible light image is $I_r$, the infrared image is $I_s$, a set of matching parameters is taken as a population member, $\hat{I}_s^{(i,t)}$ is the corrected image to be matched obtained by the affine transformation of the $i$-th population member after the $t$-th iteration, $I_f^{(i,t)}$ is the corresponding fusion result image, $F_i^{(t)}$, $i = 1, \cdots, n$, denotes the similarity function value obtained for the image matching of the $i$-th population member after the $t$-th iteration, $MI(I_r, \hat{I}_s^{(i,t)})$ is the mutual information of the overlapping rectangular areas of the visible light image $I_r$ and the corrected image to be matched $\hat{I}_s^{(i,t)}$, and $VIF(I_r, I_f^{(i,t)})$ is the visual fidelity of the overlapping rectangular areas of the visible light image $I_r$ and the matching fused image $I_f^{(i,t)}$.

When normalized mutual information is adopted for the mutual information, its specific formula is

$$NMI(A, B) = \frac{H(A) + H(B)}{H(A, B)},$$

where $H(\cdot)$ denotes the entropy of an image and $H(A, B)$ is the joint entropy of the two images.
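Putting the pieces together, the per-candidate similarity can be sketched as follows, reusing `correct_and_fuse`, `overlap_rect` and `vif` from the sketches above; the additive NMI + VIF combination is our reading of the formula above:

```python
def nmi(a, b, bins=64):
    """Normalized mutual information from a joint histogram of the
    two overlap sub-images: (H(A) + H(B)) / H(A, B)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def H(prob):
        prob = prob[prob > 0]
        return -(prob * np.log2(prob)).sum()
    return (H(px) + H(py)) / H(pxy.ravel())

def match_fitness(params, vis, ir):
    """Similarity of one candidate: NMI of the overlap between the
    visible and corrected infrared image, plus VIF of the fused overlap."""
    ir_w, fused = correct_and_fuse(ir, vis, params)
    xL, yL, xR, yR = overlap_rect(params, vis.shape[1], vis.shape[0],
                                  ir.shape[1], ir.shape[0])
    v = vis[yL:yR, xL:xR]
    return nmi(v, ir_w[yL:yR, xL:xR]) + vif(v, fused[yL:yR, xL:xR])
```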
Step S24: and performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters.
In this embodiment, the initial global optimal position is determined first: the initial particle is used as the matching parameters, i.e., the initial parameters are taken as the target matching parameters, the corresponding coordinate position information is determined from the calculation formulas for the upper-left and lower-right corner coordinates of the rectangular overlapping area, the gray-level statistical similarity is computed with the normalized mutual information, and the similarity function value $F_i^{(0)}$ is calculated; the initial global optimal position $g(0)$ is the position of the particle with the best similarity function value. Then the position of each particle is updated and the current optimal position of the $i$-th particle is updated: the optimization function value of the matched image is calculated and compared with the previous one; if the current optimization function value is larger, the current position of the particle is taken as its current optimal position, and if it is smaller, the previous position is kept as the current optimal position, thereby determining the current optimal position of the $i$-th particle. Specifically, the current optimal position $p_i(t)$ of each particle is used as the matching parameters to correct the infrared light image $I_s$ and obtain the corrected infrared light image $\hat{I}_s^{(i,t)}$; the upper-left and lower-right corner coordinates of the overlapping area are determined, and the visual fidelity of the visible light image and the fused image is calculated for the image area of the rectangular overlapping area. The specific process is that s-level wavelet transformation is performed on the sub-images corresponding to the overlapping area of the two images, and the corresponding visual fidelity is calculated using the wavelet transform coefficients, the likelihood estimate of the Gaussian mixture model, the variances and other parameters; the optimization function value of the matching image is then determined from the visual fidelity and the normalized mutual information of the overlapping area of the visible light image and the corrected infrared light image. If $F_i^{(t)} \ge F_i^{(t-1)}$, then $p_i(t) = x_i(t)$; otherwise $p_i(t) = p_i(t-1)$. To update the optimal position of the population, the population optimal position $g(t)$ is likewise used as matching parameters to correct the infrared image and obtain the corrected infrared image; the upper-left and lower-right corner coordinates of the corresponding overlapping area are determined and the optimization function value is calculated, and if the similarity function value of some particle's optimal position exceeds it, that position becomes the new global optimal position. The iteration count is set to $t = t + 1$ and the above steps of updating the particle positions and the population optimal position are repeated until the iteration count exceeds the preset iteration count or the difference between the similarity function values of two consecutive optimal matches is smaller than the given error; the global optimal position $g$ of the output population gives the target matching parameters.
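Tying the sketches together, the whole fine-matching loop reduces to a few lines (assuming `vis` and `ir` are loaded as single-channel arrays; everything here reuses the illustrative functions defined above):

```python
# End-to-end usage of the sketches above (illustrative):
fitness = lambda p: match_fitness(p, vis, ir)
p_target = qpso(fitness, init_population(params0, n=50))
ir_corrected, fused = correct_and_fuse(ir, vis, p_target)
```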
Step S25: and performing affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image.
Thus, the initial matching parameters are obtained from the image imaging parameters and imaging characteristics, overcoming the difficulty of coarse matching caused by insufficient matching feature points between infrared and visible light images; and by exploiting the coaxial imaging characteristic of the UAV and estimating the overlapping area of the matched images through scaling and translation, the computational complexity of determining the overlapping area can be reduced.
Referring to fig. 5, an embodiment of the present invention further discloses an image matching apparatus, which includes:
a parameter determining module 11, configured to determine initial parameters based on image imaging parameters of the visible light image and the infrared light image, and construct matching parameters using the initial parameters;
a function constructing module 12, configured to construct an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image;
a target parameter determining module 13, configured to perform iterative computation on the image optimization function based on the matching parameters, and determine target matching parameters;
and the image matching module 14 is configured to perform affine transformation on the infrared light image by using the target matching parameters, and output a target infrared light matching image.
Thus, the present application discloses an image matching method comprising: determining initial parameters based on the image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters from the initial parameters; constructing an image optimization function from the visual fidelity and mutual information of the visible light image and the infrared light image; performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters; and performing an affine transformation on the infrared light image with the target matching parameters to output a target infrared light matching image. In this way, the initial parameters are obtained from the image imaging parameters and imaging characteristics, overcoming the difficulty of coarse matching caused by insufficient matching feature points between infrared and visible light images; and the similarity of the matching regions is measured with the mutual information of the image matching regions combined with the visual fidelity of the fused image, making the subjective and objective evaluations of the matching result more consistent and the matching accuracy higher.
Further, an electronic device is disclosed in the embodiments of the present application, and fig. 6 is a block diagram of an electronic device 20 according to an exemplary embodiment, which should not be construed as limiting the scope of the application.
Fig. 6 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present disclosure. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input output interface 25, and a communication bus 26. Wherein the memory 22 is used for storing a computer program, which is loaded and executed by the processor 21 to implement the relevant steps in the image matching method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in the present embodiment may be specifically an electronic computer.
In this embodiment, the power supply 23 is configured to provide an operating voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and a communication protocol followed by the communication interface is any communication protocol applicable to the technical solution of the present application, and is not specifically limited herein; the input/output interface 25 is configured to obtain external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
The processor 21 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 21 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 21 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 21 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 21 may further include an AI (Artificial Intelligence) processor for processing a calculation operation related to machine learning.
In addition, the storage 22 is used as a carrier for storing resources, and may be a read-only memory, a random access memory, a magnetic disk or an optical disk, etc., and the resources stored thereon may include an operating system 221, a computer program 222, etc., and the storage manner may be a transient storage manner or a permanent storage manner.
The operating system 221 is used for managing and controlling each hardware device and the computer program 222 on the electronic device 20, so as to realize the operation and processing of the mass data 223 in the memory 22 by the processor 21, and may be Windows Server, netware, unix, linux, and the like. The computer program 222 may further include a computer program that can be used to perform other specific tasks in addition to the computer program that can be used to perform the image matching method disclosed in any of the foregoing embodiments and executed by the electronic device 20. The data 223 may include data received by the electronic device and transmitted from an external device, or may include data collected by the input/output interface 25 itself.
Further, the present application also discloses a computer-readable storage medium for storing a computer program; wherein the computer program when executed by a processor implements the image matching method disclosed in the foregoing. For the specific steps of the method, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
In the present specification, the embodiments are described in a progressive manner, and each embodiment focuses on differences from other embodiments, and the same or similar parts between the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or apparatus that comprises the element.
The image matching method, apparatus, device and medium provided by the present invention are described in detail above. Specific examples are used herein to explain the principles and embodiments of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea; meanwhile, for those skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation of the present invention.

Claims (9)

1. An image matching method, comprising:
determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters;
constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image;
performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters;
performing affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image;
the constructing of the image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image comprises:
determining a rectangular overlapping area between the visible light image and the infrared light image by means of image scaling and an offset, to obtain a first overlapping area of the visible light image and a second overlapping area of the infrared light image;
determining visual fidelity of the first overlapping region and the second overlapping region based on sub-image information corresponding to the first overlapping region and the second overlapping region;
performing s-level wavelet transformation respectively on the two sub-images corresponding to the overlapping regions of the visible light image and the matching fusion image, dividing each wavelet sub-band into N non-overlapping coefficient blocks, and extracting the wavelet coefficient vector sets $\mathbf{C}_l$ and $\mathbf{D}_l$, $l = 1, 2, \cdots, s$; calculating the covariance matrix $\hat{\mathbf{C}}_U$, whose maximum-likelihood estimate is $\hat{\mathbf{C}}_U = \frac{1}{N}\sum_{i=1}^{N}\mathbf{C}_i\mathbf{C}_i^{\mathrm{T}}$, wherein $M$ is the dimension of the wavelet coefficient vectors $\mathbf{C}_i$; letting $\mathbf{n}$ denote independent, stationary, zero-mean Gaussian white noise with variance $\sigma_n^2$;
computing, from $\mathbf{C}_l$ and $\mathbf{D}_l$, the fusion scalar $g_l$ and the variance $\sigma_{v,l}^2$;
calculating the visual fidelity by
$$\mathrm{VIF} = \frac{\sum_{l=1}^{s}\sum_{k=1}^{M}\log_2\!\left(1 + \dfrac{g_l^2\,\lambda_k}{\sigma_{v,l}^2 + \sigma_n^2}\right)}{\sum_{l=1}^{s}\sum_{k=1}^{M}\log_2\!\left(1 + \dfrac{\lambda_k}{\sigma_n^2}\right)},$$
wherein $\lambda_k$ are the eigenvalues of the covariance matrix $\hat{\mathbf{C}}_U$ and $\sigma_n^2$ represents the visual noise variance;
defining the image optimization function by
$$f(x_i^t) = \mathrm{MI}(V, I_i^t) + \mathrm{VIF}(V, F_i^t),$$
wherein the visible light image is $V$ and the infrared light image is $I$; a group of matching parameters is taken as the population $x_i^t$; the to-be-corrected image obtained by affine transformation with the $i$-th population after the $t$-th iteration is $I_i^t$, and the corresponding fusion result map is $F_i^t$; $f(x_i^t)$ represents the similarity function value found by image matching with the $i$-th population, $i = 1, \cdots, n$, after the $t$-th iteration; $\mathrm{MI}(V, I_i^t)$ is the mutual information between the visible light image $V$ and the to-be-corrected matching image $I_i^t$ over their overlapping rectangular region; and $\mathrm{VIF}(V, F_i^t)$ is the visual fidelity of the visible light image $V$ and the matching fusion image $F_i^t$ over their overlapping rectangular region.
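For illustration only, the following Python sketch shows one way the visual-fidelity term of claim 1 could be computed. It collapses the blockwise vector-GSM model into a per-sub-band scalar variant; the wavelet choice (`db2`), the default visual-noise variance, and the function name `visual_fidelity` are assumptions of this sketch, not details taken from the patent.

```python
import numpy as np
import pywt

def visual_fidelity(ref, dist, levels=3, sigma_n2=2.0):
    """Simplified VIF between a reference (visible) sub-image and a fused
    sub-image under the wavelet Gaussian-noise model of claim 1; a scalar
    per-sub-band variant stands in for the full block/vector model."""
    num = den = 0.0
    ref_c = pywt.wavedec2(np.asarray(ref, float), 'db2', level=levels)
    dist_c = pywt.wavedec2(np.asarray(dist, float), 'db2', level=levels)
    for r_lvl, d_lvl in zip(ref_c[1:], dist_c[1:]):   # detail sub-bands only
        for C, D in zip(r_lvl, d_lvl):                # LH, HL, HH per level
            C, D = C.ravel(), D.ravel()
            s2 = np.var(C)                            # signal variance
            cov = np.cov(C, D)[0, 1]
            g = cov / (s2 + 1e-10)                    # fusion scalar g
            sv2 = max(np.var(D) - g * cov, 1e-10)     # distortion noise variance
            num += np.log2(1.0 + g * g * s2 / (sv2 + sigma_n2))
            den += np.log2(1.0 + s2 / sigma_n2)
    return num / (den + 1e-10)
```

A value near 1 indicates the fused overlap region preserves nearly all the visual information of the visible-light overlap region, which is what the optimization function rewards.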
2. The image matching method of claim 1, wherein constructing the image optimization function using the visual fidelity and mutual information of the visible light image and the infrared light image comprises:
constructing the image optimization function by using the visual fidelity of the visible light image and the infrared light image together with any one of the following forms of mutual information: normalized mutual information, regional mutual information, or rotation-invariant regional mutual information.
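As a concrete instance of one of the variants named in claim 2, here is a minimal histogram-based normalized mutual information, assuming the Studholme normalization $\mathrm{NMI} = (H(A)+H(B))/H(A,B)$ and an illustrative bin count; the patent fixes neither choice.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B), estimated from a joint
    histogram of the two overlap regions; bin count is illustrative."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]                      # drop empty bins
        return -np.sum(p * np.log2(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```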
3. The image matching method of claim 1, wherein before constructing the image optimization function using the visual fidelity and mutual information of the visible light image and the infrared light image, the method further comprises:
determining a rectangular overlapping area between the visible light image and the infrared light image by means of image scaling and an offset, to obtain a first overlapping area of the visible light image and a second overlapping area of the infrared light image.
4. The image matching method according to claim 3, wherein determining the rectangular overlapping area between the visible light image and the infrared light image by means of image scaling and an offset comprises:
determining center position information of the rectangular overlapping area based on image information of the visible light image and the infrared light image, and determining the rectangular overlapping area between the visible light image and the infrared light image based on the center position information and the offset.
5. The image matching method of claim 3, wherein the constructing an image optimization function using the visual fidelity of the visible light image and the infrared light image and mutual information comprises:
determining visual fidelity of the first overlapping region and the second overlapping region based on sub-image information corresponding to the first overlapping region and the second overlapping region;
and constructing an image optimization function by using the mutual information of the first overlapping area and the second overlapping area and the visual fidelity.
6. The image matching method of claim 1, wherein performing iterative computation on the image optimization function based on the matching parameters to determine the target matching parameters comprises:
performing iterative computation on the image optimization function based on the matching parameters by using any one of a particle swarm optimization algorithm, a quantum-behaved particle swarm optimization algorithm, or an ant colony optimization algorithm, to determine the target matching parameters.
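Of the three optimizers permitted by claim 6, a plain particle swarm is the simplest to sketch. The hyper-parameters below (inertia `w`, acceleration coefficients `c1` and `c2`, swarm size, iteration count) are generic defaults rather than values from the patent, and `fitness` stands for the similarity function $f(x_i^t)$ evaluated after affine-transforming the infrared image with the candidate parameters.

```python
import numpy as np

def pso_optimize(fitness, lo, hi, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic PSO over a box [lo, hi] of matching parameters
    (e.g. scale, rotation, translation), maximizing `fitness`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))   # positions
    v = np.zeros_like(x)                                   # velocities
    pbest = x.copy()                                       # personal bests
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[pbest_f.argmax()].copy()                     # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmax()].copy()
    return g, pbest_f.max()
```

A quantum-behaved PSO or an ant colony variant would replace only the position update; the fitness evaluation, and hence the MI + VIF objective, stays the same.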
7. An image matching apparatus, characterized by comprising:
the parameter determining module is used for determining initial parameters based on image imaging parameters of the visible light image and the infrared light image and constructing matching parameters by using the initial parameters;
the function construction module is used for constructing an image optimization function by utilizing the visual fidelity and mutual information of the visible light image and the infrared light image;
the target parameter determining module is used for performing iterative calculation on the image optimization function based on the matching parameters to determine target matching parameters;
the image matching module is used for performing affine transformation on the infrared light image by using the target matching parameters and outputting a target infrared light matching image;
the function building module is specifically configured to: determine a rectangular overlapping area between the visible light image and the infrared light image by means of image scaling and an offset, to obtain a first overlapping area of the visible light image and a second overlapping area of the infrared light image;
determine visual fidelity of the first overlapping region and the second overlapping region based on sub-image information corresponding to the first overlapping region and the second overlapping region;
perform s-level wavelet transformation respectively on the two sub-images corresponding to the overlapping regions of the visible light image and the matching fusion image, divide each wavelet sub-band into N non-overlapping coefficient blocks, and extract the wavelet coefficient vector sets $\mathbf{C}_l$ and $\mathbf{D}_l$, $l = 1, 2, \cdots, s$; calculate the covariance matrix $\hat{\mathbf{C}}_U$, whose maximum-likelihood estimate is $\hat{\mathbf{C}}_U = \frac{1}{N}\sum_{i=1}^{N}\mathbf{C}_i\mathbf{C}_i^{\mathrm{T}}$, wherein $M$ is the dimension of the wavelet coefficient vectors $\mathbf{C}_i$; let $\mathbf{n}$ denote independent, stationary, zero-mean Gaussian white noise with variance $\sigma_n^2$;
compute, from $\mathbf{C}_l$ and $\mathbf{D}_l$, the fusion scalar $g_l$ and the variance $\sigma_{v,l}^2$;
calculate the visual fidelity by
$$\mathrm{VIF} = \frac{\sum_{l=1}^{s}\sum_{k=1}^{M}\log_2\!\left(1 + \dfrac{g_l^2\,\lambda_k}{\sigma_{v,l}^2 + \sigma_n^2}\right)}{\sum_{l=1}^{s}\sum_{k=1}^{M}\log_2\!\left(1 + \dfrac{\lambda_k}{\sigma_n^2}\right)},$$
wherein $\lambda_k$ are the eigenvalues of the covariance matrix $\hat{\mathbf{C}}_U$ and $\sigma_n^2$ represents the visual noise variance;
define the image optimization function by
$$f(x_i^t) = \mathrm{MI}(V, I_i^t) + \mathrm{VIF}(V, F_i^t),$$
wherein the visible light image is $V$ and the infrared light image is $I$; a group of matching parameters is taken as the population $x_i^t$; the to-be-corrected image obtained by affine transformation with the $i$-th population after the $t$-th iteration is $I_i^t$, and the corresponding fusion result map is $F_i^t$; $f(x_i^t)$ represents the similarity function value found by image matching with the $i$-th population, $i = 1, \cdots, n$, after the $t$-th iteration; $\mathrm{MI}(V, I_i^t)$ is the mutual information between the visible light image $V$ and the to-be-corrected matching image $I_i^t$ over their overlapping rectangular region; and $\mathrm{VIF}(V, F_i^t)$ is the visual fidelity of the visible light image $V$ and the matching fusion image $F_i^t$ over their overlapping rectangular region.
8. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program for carrying out the steps of the image matching method according to any of claims 1 to 6.
9. A computer-readable storage medium for storing a computer program; wherein the computer program, when executed by a processor, implements the steps of the image matching method according to any one of claims 1 to 6.
CN202211553079.1A 2022-12-06 2022-12-06 Image matching method, device, equipment and medium Active CN115620030B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211553079.1A CN115620030B (en) 2022-12-06 2022-12-06 Image matching method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN115620030A CN115620030A (en) 2023-01-17
CN115620030B (en) 2023-04-18

Family

ID=84880942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211553079.1A Active CN115620030B (en) 2022-12-06 2022-12-06 Image matching method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115620030B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915945A (en) * 2015-02-04 2015-09-16 中国人民解放军海军装备研究院信息工程技术研究所 Quality evaluation method without reference image based on regional mutual information
CN114072818A (en) * 2019-06-28 2022-02-18 谷歌有限责任公司 Bayesian quantum circuit fidelity estimation
CN114298950A (en) * 2021-12-20 2022-04-08 扬州大学 Infrared and visible light image fusion method based on improved GoDec algorithm
WO2022116104A1 (en) * 2020-12-03 2022-06-09 华为技术有限公司 Image processing method and apparatus, and device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210049086A (en) * 2018-06-29 2021-05-04 델타레이 비브이 Article inspection by dynamic selection of projection angle
CN110084774B (en) * 2019-04-11 2023-05-05 江南大学 Method for minimizing fusion image by enhanced gradient transfer and total variation
CN110148104B (en) * 2019-05-14 2023-04-25 西安电子科技大学 Infrared and visible light image fusion method based on significance analysis and low-rank representation
CN110505472B (en) * 2019-07-15 2021-01-15 武汉大学 Quality evaluation method for H.265 ultra-high-definition video
CN110555843B (en) * 2019-09-11 2023-05-09 浙江师范大学 High-precision reference-free fusion remote sensing image quality analysis method and system
CN113706406B (en) * 2021-08-11 2023-08-04 武汉大学 Infrared visible light image fusion method based on feature space multi-classification countermeasure mechanism
CN115409879A (en) * 2022-08-24 2022-11-29 苏州国科康成医疗科技有限公司 Data processing method and device for image registration, storage medium and electronic equipment


Similar Documents

Publication Publication Date Title
CN108871353B (en) Road network map generation method, system, equipment and storage medium
CN115439424B (en) Intelligent detection method for aerial video images of unmanned aerial vehicle
CN110728658A (en) High-resolution remote sensing image weak target detection method based on deep learning
CN104077760A (en) Rapid splicing system for aerial photogrammetry and implementing method thereof
CN104574347A (en) On-orbit satellite image geometric positioning accuracy evaluation method on basis of multi-source remote sensing data
CN108229274B (en) Method and device for training multilayer neural network model and recognizing road characteristics
CN112419374A (en) Unmanned aerial vehicle positioning method based on image registration
CN113724379B (en) Three-dimensional reconstruction method and device for fusing image and laser point cloud
CN113177592B (en) Image segmentation method and device, computer equipment and storage medium
CN115248439A (en) Laser radar slam method and system based on geometric information and intensity information
CN114332633B (en) Radar image target detection and identification method and equipment and storage medium
CN117451012A (en) Unmanned aerial vehicle aerial photography measurement method and system
CN115620030B (en) Image matching method, device, equipment and medium
CN114332215A (en) Multi-sensing calibration method and device, computer equipment and storage medium
CN109726679B (en) Remote sensing classification error spatial distribution mapping method
CN116630610A (en) ROI region extraction method based on semantic segmentation model and conditional random field
CN113781375B (en) Vehicle-mounted vision enhancement method based on multi-exposure fusion
CN115272462A (en) Camera pose estimation method and device and electronic equipment
CN115236643A (en) Sensor calibration method, system, device, electronic equipment and medium
CN114663751A (en) Power transmission line defect identification method and system based on incremental learning technology
Subash Automatic road extraction from satellite images using extended Kalman filtering and efficient particle filtering
CN116007637B (en) Positioning device, method, in-vehicle apparatus, vehicle, and computer program product
CN117523428B (en) Ground target detection method and device based on aircraft platform
CN117329928B (en) Unmanned aerial vehicle comprehensive detection method and system based on multivariate information fusion
CN114693988B (en) Satellite autonomous pose judging method, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Zhang Tianwen

Inventor after: Xiang Luoqiao

Inventor after: Li Xiaorun

Inventor after: Guo Hao

Inventor after: Chen Lu

Inventor after: Lu Qing

Inventor after: Yang Miao

Inventor before: Zhang Tianwen

Inventor before: Xiang Qiaoluo

Inventor before: Li Xiaorun

Inventor before: Guo Hao

Inventor before: Chen Lu

Inventor before: Lu Qing

Inventor before: Yang Miao