Disclosure of Invention
In view of the above, an object of the present invention is to provide an image matching method, an image matching apparatus, an electronic device, and a computer-readable storage medium, which make the subjective and objective evaluations of the matching result of an infrared light image and a visible light image more consistent and achieve higher matching accuracy. The specific scheme is as follows:
in a first aspect, the present application discloses an image matching method, comprising:
determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters;
constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image;
performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters;
and carrying out affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image.
Optionally, the constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image includes:
constructing an image optimization function by using the visual fidelity of the visible light image and the infrared light image together with any one of normalized mutual information, region mutual information, or rotation-invariant region mutual information as the mutual information.
Optionally, before constructing the image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image, the method further includes:
determining a rectangular overlapping area between the visible light image and the infrared light image through an image scaling mode and an offset, to obtain a first overlapping area of the visible light image and a second overlapping area of the infrared light image.
Optionally, the determining a rectangular overlapping area between the visible light image and the infrared light image through an image scaling manner and an offset includes:
center position information of a rectangular overlapping area is determined based on image information of the visible light image and the infrared light image, and the rectangular overlapping area between the visible light image and the infrared light image is determined based on the center position information and an offset.
Optionally, the constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image includes:
determining visual fidelity of the first overlapping region and the second overlapping region based on sub-graph information corresponding to the first overlapping region and the second overlapping region;
and constructing an image optimization function by using the mutual information of the first overlapping area and the second overlapping area and the visual fidelity.
Optionally, the determining the visual fidelity of the first overlapping area and the second overlapping area based on the sub-image information corresponding to the first overlapping area and the second overlapping area includes:
respectively performing wavelet transformation on a first sub-image corresponding to the first overlapping area and a second sub-image corresponding to the second overlapping area to obtain a preset number of coefficient blocks and extract wavelet coefficient vectors;
calculating a covariance matrix and a likelihood estimation of the wavelet coefficient vector;
constructing a respective sample vector based on window samples in the middle of the coefficient block to determine a respective variance;
calculating a visual fidelity of the first overlap region and the second overlap region based on the likelihood estimate, the variance, and a visual noise variance.
Optionally, the performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters includes:
performing iterative computation on the image optimization function by using any one of a particle swarm optimization algorithm, a quantum-behaved particle swarm optimization algorithm, or an ant colony optimization algorithm, based on the matching parameters, to determine target matching parameters.
In a second aspect, the present application discloses an image matching apparatus, comprising:
the parameter determining module is used for determining initial parameters based on image imaging parameters of the visible light image and the infrared light image and constructing multiple groups of matching parameters by using the initial parameters;
the function construction module is used for constructing an image optimization function by utilizing the visual fidelity and mutual information of the visible light image and the infrared light image;
the target parameter determining module is used for performing iterative computation on the image optimization function based on the multiple groups of matching parameters to determine target matching parameters;
and the image matching module is used for carrying out affine transformation on the infrared light image by using the target matching parameters and outputting a target infrared light matching image.
In a third aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the image matching method disclosed in the foregoing.
In a fourth aspect, the present application discloses a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the image matching method disclosed in the foregoing.
Thus, the present application discloses an image matching method, comprising: determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters; constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image; performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters; and performing affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image. Therefore, the initial parameters are obtained through the imaging parameters and the imaging characteristics of the images, and the problem that rough matching is difficult to realize due to insufficient matching characteristic points of the infrared and visible light images is solved; the similarity of the matching regions is measured by utilizing the mutual information of the image matching regions and combining the visual fidelity of the fused images, so that the subjective and objective evaluation of the matching results is more consistent, and the matching precision is higher.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In recent years, with the development of unmanned aerial vehicle (UAV) technology, many industries such as power grids, railways, and wind power use UAV imaging for inspection operations. Owing to factors such as season, terrain, and weather conditions, a single sensor in a complex environment can provide only partial or inaccurate information, so inspection UAVs carry multiple sensors. A UAV can carry visible light, multispectral, hyperspectral, thermal infrared, lidar, and other sensors; different sensors have different imaging characteristics, and cooperative processing of the heterogeneous images can produce more accurate, more complete, and more reliable descriptions and judgments, improving the application effect. For example, the visible light image has higher spatial resolution and rich background information but is easily affected by illumination or weather conditions, whereas the infrared sensor is less affected by illumination or weather and its image is relatively stable but often lacks sufficient scene background detail; fusing the infrared image with a low-light visible image can generate a composite image better suited to human observation or computer vision tasks. Accurate matching of the heterogeneous images is the basis of their cooperative processing.
Currently, common methods for infrared and visible image matching are mainly feature-based matching and coarse-to-fine matching combining features and regions. Because the gray-level difference between infrared and visible light images is significant, it is difficult to find enough high-precision feature matching pairs, and directly using a feature-based matching method yields insufficient accuracy. The coarse-to-fine method combining features and regions first achieves coarse matching with feature matching and then optimizes the registration parameters with a gray-level-based method. Because optimization algorithms are generally sensitive to initial values, the matching result of this method is strongly affected by the initial feature matching result; in addition, the similarity measure of the image matching regions is one of the key points of such methods. The similarity measures commonly used in existing image matching consider only limited information and may fail to conform to subjective human evaluation; although visual fidelity is an image quality evaluation index that conforms to subjective human evaluation, it has so far been used only for fused-image quality evaluation, not for measuring image matching results.
Therefore, according to the image matching scheme disclosed by the application, subjective and objective evaluation of the matching result of the infrared light image and the visible light image can be more consistent, and the matching precision is higher.
Referring to fig. 1, an embodiment of the present invention discloses an image matching method, including:
step S11: determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters.
In this embodiment, the visible light image I_r is taken as the reference image and the infrared image I_s as the image to be matched, and the initial matching parameters a1(0), b1(0), c1(0), a2(0), b2(0), c2(0) are obtained from the imaging parameters and imaging characteristics of the images. Specifically, the scale ratio s between the visible light image and the infrared image is first calculated from the camera focal lengths and the unit-pixel physical lengths of the two images:

s = (f_r / μ_r) / (f_s / μ_s)

where f_r and f_s denote the camera focal lengths of the visible light image and the infrared image, respectively, and μ_r and μ_s denote the unit-pixel physical lengths of the visible light image and the infrared image, respectively, both calculated from the camera parameters.
After the scale ratio of the visible light image to the infrared light image is obtained, the initial parameters are calculated from the scale ratio and the lengths and widths of the two images:

a1(0) = b2(0) = s,  b1(0) = a2(0) = 0,
c1(0) = (l_r − s·l_s) / 2,  c2(0) = (w_r − s·w_s) / 2

where l_r is the length of the visible light image, w_r is the width of the visible light image, l_s is the length of the infrared light image, and w_s is the width of the infrared light image. Determining the initial parameters from the image imaging parameters and imaging characteristics in this way solves the problem that coarse matching is difficult to achieve because the infrared and visible light images lack sufficient matching feature points.
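As a concrete sketch, the computation above can be written in Python. The closed-form expressions for s, c1(0), and c2(0) are reconstructed so as to reproduce the initial parameters reported in Table 2 from the Table 1 acquisition parameters; treat the exact formulas as an assumption rather than a quotation of the method:

```python
def initial_parameters(f_vis_mm, pitch_vis_um, l_vis, w_vis,
                       f_ir_mm, pitch_ir_um, l_ir, w_ir):
    """Initial affine parameters from camera parameters and image sizes.
    Assumption: the scale ratio is the ratio of pixels-per-unit-length
    (focal length / pixel pitch) of the two cameras, and the offsets
    center the scaled infrared footprint in the visible frame."""
    # Scale ratio s between the visible and infrared images (units cancel).
    s = (f_vis_mm * 1000.0 / pitch_vis_um) / (f_ir_mm * 1000.0 / pitch_ir_um)
    a1, b1 = s, 0.0            # pure scaling, no rotation/shear initially
    a2, b2 = 0.0, s
    c1 = (l_vis - s * l_ir) / 2.0   # horizontal centering offset
    c2 = (w_vis - s * w_ir) / 2.0   # vertical centering offset
    return a1, b1, c1, a2, b2, c2

# Table 1 values: visible 4000x3000, f=8mm, pitch 1.9um; IR 640x512, f=19mm, pitch 17um.
params = initial_parameters(8, 1.9, 4000, 3000, 19, 17, 640, 512)
print([round(p, 4) for p in params])  # close to (3.7673, 0, 794.4598, 0, 3.7673, 535.5678)
```

The printed values agree with the "Initial parameters" row of Table 2, which is why the centering formulas are a plausible reconstruction.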
In this embodiment, the initial parameters are treated as the initial particle, and n sets of matching parameters a1i(0), b1i(0), c1i(0), a2i(0), b2i(0), c2i(0), i = 1, ⋯, n, are constructed within a certain range around the initial particle by a random perturbation method; each set of matching parameters is taken as one particle of the population, and the iteration count is initialized to t = 0. The random perturbation method expands the matching-parameter data by letting each parameter float up and down within a range, increasing the data volume and thereby improving the robustness of the algorithm; the specific range is set according to the user's actual situation.
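The random-perturbation construction of the particle swarm might look like the following minimal sketch; the perturbation ranges `rel_range` and `abs_range` are illustrative assumptions, since the method leaves the range to the user:

```python
import random

def perturb_particles(init_params, n, rel_range=0.05, abs_range=20.0):
    """Build n particles by floating each parameter up and down around the
    initial particle. rel_range is a relative range used for the scale terms
    (a1, b2) and as an absolute range for the small shear terms (b1, a2);
    abs_range (in pixels) is used for the offsets (c1, c2). Both ranges are
    illustrative choices, not values fixed by the method."""
    a1, b1, c1, a2, b2, c2 = init_params
    particles = []
    for _ in range(n):
        particles.append((
            a1 * (1 + random.uniform(-rel_range, rel_range)),
            b1 + random.uniform(-rel_range, rel_range),
            c1 + random.uniform(-abs_range, abs_range),
            a2 + random.uniform(-rel_range, rel_range),
            b2 * (1 + random.uniform(-rel_range, rel_range)),
            c2 + random.uniform(-abs_range, abs_range),
        ))
    return particles

# 50 particles around the initial parameters, as in the experiment below.
swarm = perturb_particles((3.7673, 0.0, 794.4598, 0.0, 3.7673, 535.5678), n=50)
print(len(swarm))
```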
Step S12: Constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image.
In this embodiment, the image optimization function is constructed by using the visual fidelity of the visible light image and the infrared light image together with any one of normalized mutual information, region mutual information, or rotation-invariant region mutual information as the mutual information term. It can be understood that visual fidelity is an image quality evaluation parameter originally applied to evaluating image quality, while the similarity measures commonly used in image matching include the structural similarity of the matched images and various forms of mutual information, specifically normalized mutual information, region mutual information, and rotation-invariant region mutual information, any one of which can be selected as the mutual information term in this embodiment. The similarity measures commonly used in image matching may fail to conform to subjective human evaluation, whereas visual fidelity is an image quality evaluation index that conforms to subjective human evaluation.
Step S13: Performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters.
In this embodiment, based on the matching parameters, the image optimization function is iteratively calculated by using any one of a particle swarm optimization (PSO) algorithm, a quantum-behaved particle swarm optimization (QPSO) algorithm, or an ant colony optimization algorithm to determine the target matching parameters. That is, the image optimization function constructed from the mutual information combined with the visual fidelity of the fused image is iteratively solved with any one of these algorithms to obtain the optimal matching parameters, i.e., the target matching parameters. When the QPSO algorithm is used to find the target matching parameters, the maximum iteration number is set to MaxIter = 100 and the given error to 0.0001; X_i(t) denotes the current position of the i-th particle, P_i(t) denotes the current optimal (personal best) position of the i-th particle, and G(t) denotes the global optimal position of the particle swarm, initialized as P_i(0) = X_i(0). The steps of determining the initial global optimal position, updating the position of each particle, updating the current optimal position of the i-th particle, and updating the optimal position of the population are executed in turn. Determining the initial global optimal position may specifically include: taking the initial particle directly as the target matching parameters, i.e., taking the initial parameters as the target matching parameters, and correcting the infrared light image to obtain a corrected infrared light image; then calculating the mean best position of the population from the personal best positions of the particles; then calculating a random attractor position between the personal best and the global best; and updating each particle position according to the QPSO position-update formula. The iteration count is set to t = t + 1, and the steps of updating the particle positions, the current optimal position of the i-th particle, and the optimal position of the population are repeated until the iteration count exceeds the maximum or the difference between the matching-parameter values of two successive optimal matches is smaller than the given error; the global optimal position of the population is then output as the optimal matching parameters, i.e., the target matching parameters.
Step S14: Performing affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image.
In this embodiment, the target matching parameters are used to perform an affine transformation on I_s, and the corrected image to be matched and the matched fusion result are output; the target infrared light matching image, i.e., the matched fusion image result, is thereby obtained.
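Applying the target matching parameters as an affine resampling might be sketched as follows (inverse mapping with nearest-neighbor sampling; a minimal illustration, not the implementation used in the method):

```python
import numpy as np

def affine_warp(ir, params, out_shape):
    """Map the infrared image into the visible-image frame using
    x' = a1*x + b1*y + c1, y' = a2*x + b2*y + c2 (pixel coordinates).
    Inverse mapping with nearest-neighbor sampling; output pixels whose
    source falls outside the infrared image are left at 0."""
    a1, b1, c1, a2, b2, c2 = params
    H, W = out_shape
    Ainv = np.linalg.inv(np.array([[a1, b1], [a2, b2]]))
    ys, xs = np.mgrid[0:H, 0:W]
    # Invert the transform: source coordinates in the IR image per output pixel.
    sx = Ainv[0, 0] * (xs - c1) + Ainv[0, 1] * (ys - c2)
    sy = Ainv[1, 0] * (xs - c1) + Ainv[1, 1] * (ys - c2)
    sxi, syi = np.rint(sx).astype(int), np.rint(sy).astype(int)
    out = np.zeros(out_shape, dtype=ir.dtype)
    valid = (sxi >= 0) & (sxi < ir.shape[1]) & (syi >= 0) & (syi < ir.shape[0])
    out[valid] = ir[syi[valid], sxi[valid]]
    return out

# Tiny example: scale a 3x4 "infrared" image by 2 with offset (1, 1).
ir = np.arange(12, dtype=np.uint8).reshape(3, 4)
warped = affine_warp(ir, (2.0, 0.0, 1.0, 0.0, 2.0, 1.0), (8, 10))
```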
The following describes a specific embodiment using real aerial images as an example. A set of aerial photographs of photovoltaic modules in a rooftop scene was acquired with a DJI Zenmuse XT2 dual visible/thermal camera, and geometric correction and barrel-distortion correction were applied. The relevant image acquisition parameters are shown in Table 1:
TABLE 1

| Item                 | Visible light image | Infrared image |
|----------------------|---------------------|----------------|
| Image resolution W×H | 4000×3000           | 640×512        |
| Focal length f       | 8 mm                | 19 mm          |
| Pixel pitch          | 1.9 μm              | 17 μm          |
The initial parameters obtained from the image imaging parameters and imaging characteristics, and the matching parameters obtained by QPSO iterative optimization, are shown in Table 2.
TABLE 2

| Parameter type       | a1     | b1      | c1       | a2      | b2     | c2       | RMSE    |
|----------------------|--------|---------|----------|---------|--------|----------|---------|
| Real parameters      | 3.6259 | 0.0110  | 839.2111 | -0.0234 | 3.6326 | 563.4898 | -       |
| Initial parameters   | 3.7673 | 0       | 794.4598 | 0       | 3.7673 | 535.5678 | 21.5342 |
| Optimized parameters | 3.6324 | -0.0058 | 837.6085 | 0.0273  | 3.6350 | 563.5631 | 0.6553  |
The result of the fusion experiment performed with the calculated initial parameters and the visible light image is shown in fig. 2; it can be seen that the photovoltaic modules have a large overlap offset and fail to correspond accurately. Following the step of constructing multiple sets of matching parameters, the initial particle position is set to (3.7673, 0, 794.4598, 0, 3.7673, 535.5678), the iteration number to 100, the given error to 0.0001, and the number of particles in the swarm to 50; the initial swarm is constructed by the random perturbation algorithm, and the matching parameters obtained by QPSO iterative optimization are shown in Table 2. The real registration parameters are obtained by manually selecting, with ENVI, 20 pairs of feature points with errors smaller than 0.5 pixel and computing the transform; the root mean square error (RMSE) between the optimized parameters and the real parameters is markedly reduced compared with the initial parameters, which verifies the effectiveness of the proposed fine matching method in optimizing the matching-point positions. The fusion result obtained with the optimized registration parameters is shown in fig. 3; compared with the initial-parameter fusion in fig. 2, the overlapping parts of the images matched by the method join smoothly, visually verifying the high precision of the method.
Thus, the application discloses an image matching method, which comprises the following steps: determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters; constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image; performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters; and performing affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image. Therefore, the initial parameters are obtained through the image imaging parameters and the imaging characteristics, and the problem that the rough matching is difficult to realize due to insufficient matching characteristic points of the infrared and visible light images is solved; the similarity of the matching regions is measured by utilizing the mutual information of the image matching regions and combining the visual fidelity of the fused images, so that the subjective and objective evaluation of the matching results is more consistent, and the matching precision is higher.
Referring to fig. 4, the embodiment of the present invention discloses a specific image matching method, and compared with the previous embodiment, the present embodiment further describes and optimizes the technical solution. Specifically, the method comprises the following steps:
step S21: determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters.
Step S22: Determining a rectangular overlapping area between the visible light image and the infrared light image through an image scaling mode and an offset, to obtain a first overlapping area of the visible light image and a second overlapping area of the infrared light image.
In this embodiment, the center position information of the rectangular overlapping area is determined based on the image information of the visible light image and the infrared light image, and the rectangular overlapping area between the two images is determined based on the center position information and the offset. It can be understood that the upper-left corner coordinates (x_L, y_L) and the lower-right corner coordinates (x_R, y_R) of the rectangular overlapping area are first determined from the length and width of the visible light image, the length and width of the infrared light image, and the offset, where l_r is the length of the visible light image, l_s is the length of the infrared light image, w_r is the width of the visible light image, w_s is the width of the infrared light image, and p is the offset. Then the center position information of the overlapping area is determined from the upper-left and lower-right corner coordinates, the range of the overlapping area is determined based on the center position information and the offset, and the first overlapping area and the second overlapping area are determined from the determined range in the visible light image and the infrared light image, respectively.
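Since the corner formulas themselves are not reproduced above, the following sketch encodes one plausible reading: the overlap is the intersection of the visible frame with the scaled infrared footprint, shrunk inward by the offset p. The function and its formulas are assumptions for illustration, not the patent's exact expressions:

```python
def overlap_rectangle(l_r, w_r, l_s, w_s, s, c1, c2, p):
    """Assumed overlap computation: map the infrared footprint (l_s x w_s)
    into the visible frame with scale s and offsets (c1, c2), intersect it
    with the visible frame (l_r x w_r), and shrink the result by the offset
    p on every side. Returns the two corners and the centre position."""
    x_l = max(0.0, c1) + p
    y_l = max(0.0, c2) + p
    x_r = min(l_r, c1 + s * l_s) - p
    y_r = min(w_r, c2 + s * w_s) - p
    cx, cy = (x_l + x_r) / 2.0, (y_l + y_r) / 2.0   # centre of the overlap
    return (x_l, y_l), (x_r, y_r), (cx, cy)

# With the Table 1/Table 2 values the overlap centre lands near the middle
# of the 4000x3000 visible frame, as expected for a centered IR footprint.
tl, br, centre = overlap_rectangle(4000, 3000, 640, 512, 3.7673,
                                   794.4598, 535.5678, p=10)
```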
Step S23: Determining the visual fidelity of the first overlapping region and the second overlapping region based on the sub-image information corresponding to the two regions, and constructing an image optimization function by using the mutual information of the first overlapping area and the second overlapping area together with the visual fidelity.
In this embodiment, wavelet transformation is performed on the first sub-image corresponding to the first overlapping region and on the second sub-image corresponding to the second overlapping region, respectively, to obtain a preset number of coefficient blocks and extract wavelet coefficient vectors; the covariance matrix and the likelihood estimate of the wavelet coefficient vectors are calculated; corresponding sample vectors are constructed from window samples in the middle of each coefficient block to determine the corresponding variances; and the visual fidelity of the first overlapping region and the second overlapping region is calculated from the likelihood estimate, the variances, and the visual noise variance. Specifically, s-level wavelet transforms are performed on the two sub-images corresponding to the overlapping regions of the visible light image and the matched fusion image, each wavelet sub-band is divided into N non-overlapping coefficient blocks, and the wavelet coefficient vector sets c_l = {c_l1, c_l2, ⋯, c_lN} and d_l = {d_l1, d_l2, ⋯, d_lN}, l = 1, 2, ⋯, s, are extracted. The covariance matrix of the wavelet coefficient vectors is calculated; assuming that c_lj is a random vector in a Gaussian mixture model, its likelihood estimate is obtained, where M_l is the dimension of the wavelet coefficient vector c_lj. The distortion model introduces independent, stationary, zero-mean Gaussian white noise with a given variance. The B×B window samples in the middle of the j-th coefficient block of each of the two sub-images are recorded to form vectors C and D, and the fusion gain scalar and the distortion variance are estimated from them, the estimate involving the correlation coefficient of C and D.
The visual fidelity of the overlapping region of the visible light image and the matched fusion image is then calculated from the eigenvalues of the covariance matrix, the estimated gains and variances, and the visual noise variance; the visual noise variance may take the value 0.1, and its exact value has little influence on the result.
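A heavily simplified, pixel-domain version of the visual-fidelity computation (in the spirit of Sheikh and Bovik's VIF, but without the wavelet decomposition and Gaussian-mixture likelihood; per-block gains and variances only, with the visual noise variance 0.1 suggested above) might look like:

```python
import numpy as np

def vif_blocks(ref, fused, block=8, sigma_n2=0.1):
    """Simplified pixel-domain visual fidelity: for each BxB block, estimate
    the gain g and distortion variance of the fused block given the reference
    block, then accumulate the VIF ratio. This collapses the wavelet sub-band
    machinery of the full method into a single-band sketch."""
    ref = ref.astype(float)
    fused = fused.astype(float)
    num = den = 0.0
    H, W = ref.shape
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            C = ref[i:i + block, j:j + block].ravel()
            D = fused[i:i + block, j:j + block].ravel()
            var_c = C.var()
            cov = ((C - C.mean()) * (D - D.mean())).mean()
            g = cov / (var_c + 1e-10)             # fusion gain scalar
            sigma_v2 = max(D.var() - g * cov, 0)  # distortion variance
            num += np.log2(1 + g * g * var_c / (sigma_v2 + sigma_n2))
            den += np.log2(1 + var_c / sigma_n2)
    return num / (den + 1e-10)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(round(vif_blocks(img, img), 3))   # identical images: fidelity ≈ 1
```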
An image optimization function is constructed by using the mutual information of the first overlapping area and the second overlapping area together with the visual fidelity: after the visual-fidelity metric is obtained, the image optimization function is constructed from the mutual information and the visual fidelity. Denote the visible light image by I_r and the infrared image by I_s, and take one set of matching parameters as one particle of the population; the affine transformation of the i-th particle after the t-th iteration yields the corrected image to be matched and the corresponding fusion result map. The similarity function value f_i(t) (i = 1, ⋯, n) found for the image matching after the t-th iteration combines the mutual information of the overlapping rectangular areas of the visible light image and the corrected image to be matched with the visual fidelity of the overlapping rectangular areas of the visible light image and the matched fusion image.

When normalized mutual information is adopted as the mutual information, its specific formula is expressed as

NMI(A, B) = (H(A) + H(B)) / H(A, B)

where H(∙) denotes the entropy of an image and H(A, B) is the joint entropy of the two images.
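The normalized mutual information above can be estimated from a joint gray-level histogram, for example (the bin count is an illustrative choice):

```python
import numpy as np

def nmi(a, b, bins=32):
    """Normalized mutual information NMI(A, B) = (H(A) + H(B)) / H(A, B),
    estimated from a joint gray-level histogram of two equal-sized regions."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability table
    px = pxy.sum(axis=1)               # marginal of A
    py = pxy.sum(axis=0)               # marginal of B

    def entropy(p):
        p = p[p > 0]                   # ignore empty histogram cells
        return -(p * np.log2(p)).sum()

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (100, 100))
print(round(nmi(img, img), 3))  # a region compared with itself gives the maximum, 2.0
```

NMI is bounded below by 1 (independent regions) and above by 2 (identical regions), which is why it works as the gray-scale statistical similarity in the optimization function.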
Step S24: Performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters.

In this embodiment, the initial global optimal position is first determined: the initial particle is taken as the matching parameters, i.e., the initial parameters are taken as the target matching parameters; the corresponding coordinate position information is determined by the calculation formula for the upper-left and lower-right corner coordinates of the rectangular overlapping area; the gray-scale statistical similarity adopts normalized mutual information to calculate the similarity function value; and the initial global optimal position is the position of the particle with the best similarity function value. The position of each particle is then updated, and the current optimal position of the i-th particle is updated: the optimization function value of the matched image is calculated and compared with the stored value; if the current optimization function value is larger, the current position of the particle is taken as its current optimal position, and if it is smaller, the previous optimal position is kept. Specifically, for each particle, its position is taken as the matching parameters and the infrared light image is corrected to obtain a corrected infrared light image; the upper-left and lower-right corner coordinates of the overlapping area are determined; the visual fidelity of the visible light image and the fusion image is calculated for the image area of the rectangular overlapping area, the specific process being to perform s-level wavelet transforms on the sub-images corresponding to the overlapping areas of the two images and to calculate the corresponding visual fidelity using the wavelet-transform parameters, the likelihood estimate of the Gaussian mixture model, the variances, and the other parameters; the optimization function value of the matching image is then determined from the visual fidelity and the normalized mutual information of the overlapping area of the visible light image and the corrected infrared light image, and the current optimal position of the particle is updated accordingly. The optimal position of the population is updated in the same manner by comparing the optimization function values of the current optimal positions of the particles. With the iteration count set to t = t + 1, the steps of determining the particle positions and the population optimal position are repeated until the iteration count exceeds the preset number or the difference between the similarity function values of two successive optimal matches is smaller than the given error; the global optimal position of the population is then output as the target matching parameters.
Step S25: Performing affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image.
Therefore, the initial matching parameters are obtained from the image imaging parameters and the imaging characteristics, which solves the problem that coarse matching is difficult to achieve because the infrared and visible light images lack sufficient matching feature points; by exploiting the coaxial-imaging characteristic of the unmanned aerial vehicle and estimating the overlapping area of the matched images through scaling and translation, the computational complexity of determining the overlapping area can be reduced.
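As a concrete illustration of the mutual-information term used in the optimization function above, a minimal sketch of normalized mutual information over two equally sized image regions is given below; the bin count is a hypothetical choice, not one fixed by the invention:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """Normalized mutual information NMI = (H(A) + H(B)) / H(A, B),
    computed from the joint grey-level histogram of two image regions.
    NMI lies in [1, 2]; identical regions reach 2."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()          # joint distribution
    px = pxy.sum(axis=1)             # marginal of region a
    py = pxy.sum(axis=0)             # marginal of region b

    def entropy(p):
        p = p[p > 0]                 # 0 * log 0 is taken as 0
        return -np.sum(p * np.log2(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

In the disclosed scheme this similarity would be evaluated on the rectangular overlapping regions of the visible light image and the corrected infrared light image, then combined with the visual fidelity of the fused image.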
Referring to fig. 5, an embodiment of the present invention further discloses an image matching apparatus, which includes:
a parameter determining module 11, configured to determine initial parameters based on image imaging parameters of the visible light image and the infrared light image, and construct matching parameters using the initial parameters;
a function constructing module 12, configured to construct an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image;
a target parameter determining module 13, configured to perform iterative computation on the image optimization function based on the matching parameters, and determine target matching parameters;
and the image matching module 14 is configured to perform affine transformation on the infrared light image by using the target matching parameters, and output a target infrared light matching image.
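Purely as an illustrative sketch, the four modules above could be organised along the following lines; all four callables are assumptions standing in for the routines of the disclosed method (parameter construction, optimization function construction, iterative search, and affine warping):

```python
class ImageMatcher:
    """Sketch of the disclosed apparatus: the four modules become callables."""

    def __init__(self, build_params, build_objective, optimise, warp):
        self.build_params = build_params        # parameter determining module 11
        self.build_objective = build_objective  # function constructing module 12
        self.optimise = optimise                # target parameter determining module 13
        self.warp = warp                        # image matching module 14

    def match(self, visible, infrared):
        params = self.build_params(visible, infrared)        # initial matching parameters
        objective = self.build_objective(visible, infrared)  # VIF + mutual information
        target = self.optimise(objective, params)            # iterative computation
        return self.warp(infrared, target)                   # affine transformation
```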
Thus, the present application discloses an image matching method, comprising: determining initial parameters based on image imaging parameters of the visible light image and the infrared light image, and constructing matching parameters by using the initial parameters; constructing an image optimization function by using the visual fidelity and mutual information of the visible light image and the infrared light image; performing iterative computation on the image optimization function based on the matching parameters to determine target matching parameters; and carrying out affine transformation on the infrared light image by using the target matching parameters, and outputting a target infrared light matching image. Therefore, the initial parameters are obtained through the imaging parameters and the imaging characteristics of the images, and the problem that rough matching is difficult to realize due to insufficient matching characteristic points of the infrared and visible light images is solved; the similarity of the matching regions is measured by utilizing the mutual information of the image matching regions and combining the visual fidelity of the fused images, so that the subjective and objective evaluation of the matching results is more consistent, and the matching precision is higher.
Further, an electronic device is disclosed in the embodiments of the present application, and fig. 6 is a block diagram of an electronic device 20 according to an exemplary embodiment, which should not be construed as limiting the scope of the application.
Fig. 6 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present disclosure. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input/output interface 25, and a communication bus 26. Wherein the memory 22 is used for storing a computer program, which is loaded and executed by the processor 21 to implement the relevant steps in the image matching method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in the present embodiment may specifically be an electronic computer.
In this embodiment, the power supply 23 is configured to provide an operating voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and a communication protocol followed by the communication interface is any communication protocol applicable to the technical solution of the present application, and is not specifically limited herein; the input/output interface 25 is configured to obtain external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
The processor 21 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 21 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 21 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a Central Processing Unit (CPU), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 21 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 21 may further include an AI (Artificial Intelligence) processor for handling calculation operations related to machine learning.
In addition, the memory 22, as a carrier for storing resources, may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like; the resources stored thereon may include an operating system 221, a computer program 222, and the like, and the storage manner may be transient storage or permanent storage.
The operating system 221 is used for managing and controlling each hardware device and the computer program 222 on the electronic device 20, so as to enable the processor 21 to operate on and process the mass data 223 in the memory 22, and may be Windows Server, NetWare, Unix, Linux, or the like. The computer program 222 may further include, in addition to the computer program that can be used to perform the image matching method disclosed in any of the foregoing embodiments and executed by the electronic device 20, a computer program that can be used to perform other specific tasks. The data 223 may include data received by the electronic device and transmitted from an external device, or may include data collected by the input/output interface 25 itself.
Further, the present application also discloses a computer-readable storage medium for storing a computer program; wherein the computer program when executed by a processor implements the image matching method disclosed in the foregoing. For the specific steps of the method, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
In the present specification, the embodiments are described in a progressive manner, and each embodiment focuses on differences from other embodiments, and the same or similar parts between the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrases "comprising a," "...," or "comprising" does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
The image matching method, apparatus, device and medium provided by the present invention are described in detail above, and specific examples are applied herein to explain the principles and embodiments of the present invention, and the description of the above embodiments is only used to help understanding the method and its core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.