CN110992409B - Multispectral stereo camera dynamic registration method based on Fourier transform registration - Google Patents

Multispectral stereo camera dynamic registration method based on Fourier transform registration

Info

Publication number
CN110992409B
CN110992409B (application CN201911153769.6A)
Authority
CN
China
Prior art keywords
image
point
infrared
visible light
camera
Prior art date
Legal status
Active
Application number
CN201911153769.6A
Other languages
Chinese (zh)
Other versions
CN110992409A (en)
Inventor
仲维 (Zhong Wei)
李豪杰 (Li Haojie)
柳博谦 (Liu Boqian)
王智慧 (Wang Zhihui)
刘日升 (Liu Risheng)
罗钟铉 (Luo Zhongxuan)
樊鑫 (Fan Xin)
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology
Priority to CN201911153769.6A
Publication of CN110992409A
Application granted
Publication of CN110992409B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Image registration using feature-based methods
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/10024: Color image
    • G06T2207/10048: Infrared image
    • G06T2207/20: Special algorithmic details
    • G06T2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20048: Transform domain processing
    • G06T2207/20056: Discrete and fast Fourier transform [DFT, FFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the fields of image processing and computer vision, and relates to a dynamic registration method for a multispectral stereo camera based on Fourier-transform registration. The method comprises: first, undistorting and stereo-rectifying the original images according to the respective intrinsic parameters and original extrinsic parameters of the infrared camera and the visible-light camera; second, Fourier-transform registration; third, extracting feature points on the registered infrared and visible-light images respectively; fourth, matching the extracted feature points; fifth, calculating, from the results of the second and third steps, the feature points of the original infrared image corresponding to the feature points of the registered infrared image; sixth, judging the coverage of the feature points; and seventh, correcting the calibration result. Because the invention matches on the registered images, it can effectively exploit the positional relationship between the visible-light and infrared images, so the joint self-calibration of the infrared camera and the visible-light camera is more effective, simple and convenient to operate, and accurate.

Description

Multispectral stereo camera dynamic registration method based on Fourier transform registration
Technical Field
The invention belongs to the field of image processing and computer vision, and relates to a dynamic registration method of a multispectral stereo camera based on Fourier transform registration.
Background
Infrared light is electromagnetic radiation with a wavelength between that of microwaves and visible light, longer than that of red light. Any substance above absolute zero (-273.15 °C) emits infrared radiation. Because infrared can be observed through fog, rain, and the like, infrared images are widely used in fields such as military defense, resource exploration, weather forecasting, environmental monitoring, medical diagnosis and treatment, and marine research. Objects can be photographed in infrared through mist and smoke, and infrared photography works even at night. Infrared cameras can image in extreme scenes (low light, rain and snow, dense fog, and the like), but their resolution is low and image details are blurred. Visible-light cameras, in contrast, offer high resolution and clear image detail but cannot image in extreme scenes. Combining an infrared camera with a visible-light camera therefore has great practical significance.
Stereo vision is an important topic in computer vision whose purpose is to reconstruct the 3D geometric information of a scene. Binocular stereo vision is an important branch of stereo vision: left and right cameras simulate two eyes, and a depth image is computed from the disparity between the binocular images. Binocular stereo vision has the advantages of high efficiency, high accuracy, simple system structure, and low cost. Because binocular stereo vision must match the same scene point in the left and right images, the focal lengths and imaging centers of the two camera lenses and the positional relationship between the left and right lenses must be known. To obtain these data, the cameras must be calibrated. Acquiring the positional relationship between the visible-light camera and the infrared camera is called joint calibration.
Calibration yields the two lens parameter sets and the relative position parameters of the cameras, but these parameters are unstable. When temperature, humidity, and the like change, the internal parameters of the camera lenses also change; in addition, the positional relationship between the two lenses may change through an accidental camera collision. Thus the internal and external parameters have to be corrected each time the camera is used, which is self-calibration. With the camera intrinsics known, the positional relationship between the infrared lens and the visible-light lens is corrected by extracting infrared image features and visible-light image features separately; that is, the infrared camera and the visible-light camera undergo joint self-calibration.
To narrow the matching range of the feature points, the infrared and visible-light images are registered before feature point detection: both images are Fourier transformed so that they are represented in the frequency domain, and their frequency domains are then registered. This has the advantages of computational efficiency and robustness to frequency-dependent noise.
Disclosure of Invention
The invention aims to solve the problem that the positional relationship between an infrared camera and a visible-light camera changes under factors such as temperature, humidity, and vibration. First, the visible-light and infrared images are Fourier transformed and their frequency domains are registered; the original calibration result is then corrected using matched feature points extracted from the infrared and visible-light images.
The multispectral stereo camera dynamic registration method based on Fourier transform registration comprises the following steps:
Firstly, correct the original images: undistort and stereo-rectify the original images according to the respective intrinsic parameters and original extrinsic parameters of the infrared camera and the visible-light camera.
Secondly, Fourier transform registration.
Thirdly, extract feature points on the registered infrared and visible-light images respectively.
Fourthly, match the feature points extracted in the previous step.
Fifthly, calculate the feature points of the original infrared image corresponding to the feature points of the registered infrared image, according to the results of the second and third steps.
Sixthly, judge the coverage of the feature points: divide the image into m × n grids; if the feature points cover all the grids, proceed to the next step; otherwise continue capturing images and repeat the first to fifth steps.
Seventhly, correct the calibration result: use the image coordinates of all the feature points to calculate the corrected positional relationship between the two cameras, then superimpose it on the original extrinsics.
The first step specifically comprises the following steps:
1-1) Calculate, for each original image point P_i, the corresponding coordinate in the normal coordinate system.
The pixel coordinate system takes the upper-left corner of the image as its origin, with its x and y axes parallel to the x and y axes of the image coordinate system, respectively. The unit of the pixel coordinate system is the pixel, the basic, indivisible unit of image display. The normal coordinate system takes the camera's optical center as the origin of the image coordinate system, with the distance from the optical center to the image plane scaled to 1. Pixel coordinates and normal coordinates are related as follows:
u = KX
where u = (u, v, 1)^T is the homogeneous pixel coordinate of an image point,
K = [f_x 0 c_x; 0 f_y c_y; 0 0 1]
is the camera's intrinsic matrix, f_x and f_y are the focal lengths (in pixels) along the x and y directions of the image, (c_x, c_y) is the position of the principal point, and X = (x, y, 1)^T is the coordinate in the normal coordinate system. Knowing the pixel coordinates of an image point and the camera intrinsics, the corresponding normal coordinates can be calculated, namely
X = K^{-1} u
For each original image point P_i, its normal coordinate is therefore:
X_i = K_i^{-1} u_i
where u_i is the pixel coordinate of P_i, X_i is the normal coordinate of P_i, and K_i is the intrinsic matrix of the camera corresponding to P_i: if P_i is a point on the infrared image, K_i is the intrinsic matrix of the infrared camera; if P_i is a point on the visible-light image, K_i is the intrinsic matrix of the visible-light camera.
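As an illustration, a minimal NumPy sketch of this pixel/normal conversion (the intrinsic values below are hypothetical, not taken from the patent):

```python
import numpy as np

# Hypothetical intrinsics: f_x, f_y are focal lengths in pixels, (c_x, c_y) the principal point.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def pixel_to_normal(u, K):
    """X = K^-1 u: homogeneous pixel coordinates to normal coordinates."""
    return np.linalg.inv(K) @ u

def normal_to_pixel(X, K):
    """u = K X: normal coordinates back to pixel coordinates."""
    return K @ X

u_i = np.array([400.0, 300.0, 1.0])   # pixel coordinate of a point P_i
X_i = pixel_to_normal(u_i, K)         # its normal coordinate
assert np.allclose(normal_to_pixel(X_i, K), u_i)
```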
1-2) Remove image distortion: compute the normal coordinates of the original image points after distortion removal.
Because of limitations in the lens manufacturing process, real lenses exhibit nonlinear distortion, which can be roughly divided into radial distortion and tangential distortion.
Radial distortion is a positional offset of image pixels along radial lines from the distortion center, which deforms the image. It is approximately expressed as:
x_d = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
y_d = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
where r^2 = x^2 + y^2 and k_1, k_2, k_3 are the radial distortion parameters.
Tangential distortion arises because manufacturing imperfections leave the lens not exactly parallel to the image plane; it can be described quantitatively as:
x_d = x + (2 p_1 x y + p_2 (r^2 + 2 x^2))
y_d = y + (p_1 (r^2 + 2 y^2) + 2 p_2 x y)
where p_1, p_2 are the tangential distortion coefficients.
In summary, the coordinates before and after distortion are related by:
x_d = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + (2 p_1 x y + p_2 (r^2 + 2 x^2))
y_d = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + (p_1 (r^2 + 2 y^2) + 2 p_2 x y)
where (x, y) is the ideal, undistorted normal coordinate and (x_d, y_d) is the actual, distorted normal coordinate. Taking (x_d, y_d) as the initial value of (x, y), the actual (x, y) is obtained by iterating the relation a few times.
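A sketch of this fixed-point undistortion, assuming the distortion model above (the iteration count of 10 is an assumption; a handful of iterations usually suffices):

```python
def undistort_normal(xd, yd, k1, k2, k3, p1, p2, iters=10):
    """Recover the ideal normal coordinate (x, y) from the distorted (xd, yd)
    by fixed-point iteration, starting from (x, y) = (xd, yd)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        # subtract the tangential term, then divide out the radial factor
        x = (xd - (2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x))) / radial
        y = (yd - (p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y)) / radial
    return x, y
```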
1-3) Rotate the two images according to the cameras' original rotation relationship. The previously calibrated rotation matrix R and translation vector t between the two cameras satisfy
X_r = RX_l + t
where X_l is a normal coordinate of the infrared camera and X_r is a normal coordinate of the visible-light camera. The infrared image is rotated by half the angle of R in the positive direction, and the visible-light image by half the angle of R in the negative direction. Concretely, for each undistorted point P_i with normal coordinate X_i from the previous step: if P_i is an infrared image point, R^{1/2} X_i → X_i; if P_i is a visible-light image point, R^{-1/2} X_i → X_i.
1-4) Map the undistorted, rotated image back to the pixel coordinate system via the formula u = KX. From the normal coordinate X_i of image point P_i updated in the previous step, the rectified image coordinate is
K_i X_i → u_i
Altogether, given the coordinate u_i of a point before rectification, the rectified coordinate computed by steps 1-1) to 1-4) is denoted F(u_i).
1-5) For each image point v_i of the rectified image I, compute the corresponding pixel position F^{-1}(v_i) in the original image I_0, and fill I with the color value selected from that position of I_0:
I(v_i) = I_0(F^{-1}(v_i))
Because F^{-1}(v_i) generally has fractional coordinates, the color value at that position is computed by bilinear interpolation.
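In practice a library routine such as cv2.remap performs this resampling; a from-scratch sketch of the inverse-mapping fill, where F_inv stands for any callable implementing F^{-1}:

```python
import numpy as np

def bilinear(img, x, y):
    """Sample a grayscale image at the fractional position (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    ax, ay = x - x0, y - y0
    top = (1 - ax) * img[y0, x0] + ax * img[y0, x0 + 1]
    bot = (1 - ax) * img[y0 + 1, x0] + ax * img[y0 + 1, x0 + 1]
    return (1 - ay) * top + ay * bot

def rectify(img0, F_inv, out_h, out_w):
    """Fill the rectified image I by I(v_i) = I0(F^-1(v_i))."""
    out = np.zeros((out_h, out_w), dtype=np.float32)
    for v in range(out_h):
        for u in range(out_w):
            x, y = F_inv(u, v)  # position in the original image
            if 0 <= x < img0.shape[1] - 1 and 0 <= y < img0.shape[0] - 1:
                out[v, u] = bilinear(img0, x, y)
    return out
```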
The second step specifically comprises the following steps:
2-1) Apply the Fourier transform to the rectified infrared and visible-light images from the first step, converting them to the frequency domain.
2-2) High-pass filter the frequency-domain images.
2-3) Convert the magnitude of each filtered image to log-polar form and obtain the scale factor and rotation angle with a phase-correlation-based method.
Denote the visible-light image f_1(x, y) and the infrared image f_2(x, y). If the scale factor, rotation angle, and translation from f_2 to f_1 are a, θ_0, and (Δx, Δy) respectively, then:
f_2(x, y) = f_1(a(x cos θ_0 + y sin θ_0) - Δx, a(-x sin θ_0 + y cos θ_0) - Δy)
Fourier transforming this relation gives
F_2(ξ, η) = e^{-2πj(ξΔx + ηΔy)} a^{-2} F_1[a^{-1}(ξ cos θ_0 + η sin θ_0), a^{-1}(-ξ sin θ_0 + η cos θ_0)]
where F_1 and F_2 are the frequency domains of the visible-light image and the infrared image, respectively.
Neglecting translation (taking magnitudes) gives
|F_2(ξ, η)| = a^{-2} |F_1[a^{-1}(ξ cos θ_0 + η sin θ_0), a^{-1}(-ξ sin θ_0 + η cos θ_0)]|
Convert the visible-light and infrared magnitudes to polar coordinates in the frequency domain, with polar angle θ and radius ρ:
r_p(θ, ρ) = |F_1(ρ cos θ, ρ sin θ)|
s_p(θ, ρ) = |F_2(ρ cos θ, ρ sin θ)|
Substituting into the magnitude relation above gives
s_p(θ, ρ) = a^{-2} r_p(θ - θ_0, ρ/a)
Let λ = ln ρ and b = ln a, and define r_pl(θ, λ) = r_p(θ, ρ) and s_pl(θ, λ) = s_p(θ, ρ); the relation can then be written as:
s_pl(θ, λ) = a^{-2} r_pl(θ - θ_0, λ - b)
Fourier transforming this gives
S_pl(ξ, η) = a^{-2} e^{-2πj(ξθ_0 + ηb)} R_pl(ξ, η)
Substituting into the cross-power (cross energy) spectrum formula gives
(R_pl(ξ, η) S_pl*(ξ, η)) / |R_pl(ξ, η) S_pl*(ξ, η)| = e^{2πj(ξθ_0 + ηb)}
The inverse Fourier transform of the left-hand side is an impulse; the position (θ_0, b) of its maximum gives the rotation angle θ_0 and the logarithm b of the scale factor, with a = e^b.
2-4) Rotate the infrared image by the angle θ_0 and enlarge it by the scale a, compute the cross-power spectrum with the visible-light image, and obtain the translation with a phase-correlation-based method.
Denote by f_3 the infrared image after rotation by θ_0 and enlargement by a. Fourier transforming f_3 together with the visible-light image f_1 yields F_3 and F_1, whose cross-power spectrum is
(F_1(ξ, η) F_3*(ξ, η)) / |F_1(ξ, η) F_3*(ξ, η)| = e^{2πj(ξΔx + ηΔy)}
The position of the maximum of its inverse Fourier transform gives the corresponding translation (Δx, Δy).
2-5) Register the infrared image with the scale factor a, rotation angle θ_0, and translation (Δx, Δy) obtained above, so that the infrared image is aligned with the visible-light image.
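A compact OpenCV sketch of steps 2-1) to 2-4). It is only a sketch: the high-pass of 2-2) is approximated by taking the log magnitude, and the sign/axis conventions of cv2.logPolar and cv2.phaseCorrelate may need flipping for a particular build:

```python
import numpy as np
import cv2

def fourier_registration(vis, ir):
    """Estimate scale a, rotation theta0 (degrees) and shift (dx, dy) that
    take the infrared image ir onto the visible image vis (both grayscale)."""
    vis = vis.astype(np.float32)
    ir = ir.astype(np.float32)
    h, w = vis.shape
    center = (w / 2.0, h / 2.0)
    # 2-1) spectra; 2-2) approximated here by log-magnitude weighting
    m1 = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(vis))))
    m2 = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(ir))))
    # 2-3) log-polar map of the magnitudes; rows span 360 deg, columns log-radius
    M = w / np.log(w / 2.0)
    lp1 = cv2.logPolar(m1.astype(np.float32), center, M, cv2.INTER_LINEAR)
    lp2 = cv2.logPolar(m2.astype(np.float32), center, M, cv2.INTER_LINEAR)
    (sx, sy), _ = cv2.phaseCorrelate(lp1, lp2)   # peak of the cross-power spectrum
    theta0 = 360.0 * sy / h                      # rotation angle
    a = np.exp(sx / M)                           # scale: sx / M is b = ln(a)
    # 2-4) undo rotation/scale on ir, then phase-correlate for the translation
    T = cv2.getRotationMatrix2D(center, theta0, 1.0 / a)
    ir_rs = cv2.warpAffine(ir, T, (w, h))
    (dx, dy), _ = cv2.phaseCorrelate(vis, ir_rs)
    return a, theta0, (dx, dy)
```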
The third step specifically comprises the following steps:
3-1) Construct a corresponding single-layer difference-of-Gaussian pyramid (DoG) from the infrared grayscale image and from the visible-light grayscale image, respectively.
The difference-of-Gaussian pyramid is obtained by differencing adjacent scale spaces and is commonly used in the scale-invariant feature transform (SIFT). The scale space of an image is defined as the convolution of a Gaussian kernel with the image, regarded as a function of the kernel parameter σ. Specifically, the scale space of the scene image I(x, y) is:
L(x, y, σ) = G(x, y, σ) * I(x, y)
where
G(x, y, σ) = (1 / (2πσ^2)) e^{-(x^2 + y^2) / (2σ^2)}
is the Gaussian kernel function and σ is the scale factor, whose size determines the degree of smoothing of the image: a large σ corresponds to a coarse scale (low resolution), a small σ to a fine scale (high resolution); * denotes the convolution operation. L(x, y, σ) is called the scale space of the image I(x, y). Differencing the scale spaces at two different scales yields one layer of the difference-of-Gaussian pyramid (as shown in Fig. 3); the result is additionally multiplied by a normalizing scale factor λ so that the maximum value of the DoG image is 255:
D(x, y, σ) = λ(L(x, y, kσ) - L(x, y, σ))
Unlike SIFT, only one layer of differential scale features is computed, for two reasons: first, computing multi-layer differential scale features is too expensive to run in real time; second, the accuracy of SIFT features obtained from multi-layer differential scale features is too low.
3-2) Take the local extreme points of the single-layer difference-of-Gaussian pyramid D obtained in the previous step as the feature point set {P}.
3-2-1) Dilate D and denote the result D_1. Compare each pixel of D_1 with the points in its 8-neighborhood; if the pixel is a local maximum, add it to the candidate point set P_1.
3-2-2) Negate D, then dilate, and denote the result D_2. Compare each pixel of D_2 with the points in its 8-neighborhood; if the pixel is a local minimum, add it to the candidate point set P_2.
3-2-3) Intersect P_1 and P_2 to obtain P_3 = P_1 ∩ P_2. Take the points of P_3 whose DoG gray value is greater than 15 as the feature point set {P}. The feature point set of the infrared image is {P^ir} and the feature point set of the visible-light image is {P^vis}.
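A sketch of the single-layer DoG detector (σ = 1.6 and k = 2 are assumptions, and the candidate sets P_1 and P_2 are combined here as a union of maxima and minima):

```python
import numpy as np
import cv2

def dog_feature_points(gray, sigma=1.6, k=2.0, thresh=15.0):
    """Steps 3-1)/3-2): normalized single-layer DoG, extrema via dilation."""
    g = gray.astype(np.float32)
    dog = cv2.GaussianBlur(g, (0, 0), k * sigma) - cv2.GaussianBlur(g, (0, 0), sigma)
    dog *= 255.0 / max(float(np.abs(dog).max()), 1e-6)  # lambda-normalization to 255
    kernel = np.ones((3, 3), np.uint8)
    # a pixel equal to the dilation over its 8-neighborhood is a local maximum
    maxima = dog >= cv2.dilate(dog, kernel)
    # local minima are maxima of the negated image (the patent's dilation of -D)
    minima = -dog >= cv2.dilate(-dog, kernel)
    keep = (maxima | minima) & (np.abs(dog) > thresh)
    ys, xs = np.nonzero(keep)
    return list(zip(xs.tolist(), ys.tolist()))
```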
The fourth step specifically includes the following steps:
4-1) Divide both the infrared image and the visible-light image into m × n blocks. For each feature point P^ir_i of the infrared image, perform steps 4-2) to 4-6).
4-2) Find the block B^ir containing P^ir_i in the infrared image (as shown in Fig. 4(a)). Take the block at the same position in the visible-light image together with its surrounding blocks as the block set {B^vis} (as shown in Fig. 4(b)), and let {P^vis_j} be the set of feature points falling in {B^vis}. Evaluate the degree of similarity between P^ir_i and each of these points; the points whose similarity exceeds a threshold t_1 are taken as coarse matching points, and their set is denoted {P^vis_c}. If there is no such point, discard P^ir_i and select the next feature point, performing step 4-2) again.
4-3) If the maximum similarity s_first and the second-largest similarity s_second between P^ir_i and {P^vis_c} satisfy:
F(s_first, s_second) ≥ t_2
the match is retained, and the point P^vis_m of {P^vis_c} with maximal similarity is taken as the matching point, where t_2 is a threshold and F(s_first, s_second) describes the relationship between s_first and s_second. If the condition is not satisfied, discard the point and select the next feature point, performing step 4-2) again.
After screening by this rule, match P^vis_m back to the infrared image following steps 4-2) to 4-3) to obtain its corresponding infrared feature point. If that point coincides with P^ir_i, the match (P^ir_i, P^vis_m) is retained; if not, discard the point and select the next feature point, performing step 4-2) again.
4-4) With the infrared feature point P^ir_i as reference, parabolic fitting refines the matched integer-pixel feature point P^vis_m of the visible-light image, giving the sub-pixel feature point of the visible-light image
P^vis_m + (δx^vis, δy^vis)
where δx^vis is the sub-pixel offset in the x direction and δy^vis is the sub-pixel offset in the y direction.
4-5) With the integer-pixel feature point P^vis_m of the visible-light image as reference, compute the corresponding sub-pixel feature point of the infrared image by the method of 4-4):
P^ir_i + (δx^ir, δy^ir)
where δx^ir is the sub-pixel offset in the x direction and δy^ir is the sub-pixel offset in the y direction.
4-6) The final matching point pair is
(P^ir_i + (δx^ir, δy^ir), P^vis_m + (δx^vis, δy^vis))
Select the next infrared image feature point and perform steps 4-2) to 4-6) again.
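Two parts of this step lend themselves to short sketches: the acceptance rule of 4-3), assuming F(s_first, s_second) = s_first / s_second as one plausible form of F, and the parabolic sub-pixel refinement of 4-4)/4-5):

```python
import numpy as np

def accept_match(similarities, t2=1.2):
    """Step 4-3) analogue: keep a match only if the best similarity clearly
    dominates the second best (the form of F and the value of t2 are assumptions)."""
    s = sorted(similarities, reverse=True)
    if len(s) == 1:
        return True
    return s[1] > 0 and s[0] / s[1] >= t2

def parabolic_offset(s_minus, s_center, s_plus):
    """Vertex of the parabola through three neighboring similarity scores;
    returns the sub-pixel offset along one axis."""
    denom = s_minus - 2.0 * s_center + s_plus
    if abs(denom) < 1e-12:
        return 0.0
    return 0.5 * (s_minus - s_plus) / denom

def refine_subpixel(score, x, y):
    """Steps 4-4)/4-5): refine an integer-pixel match (x, y) on a similarity
    map `score` indexed [y, x], giving (x + dx, y + dy)."""
    dx = parabolic_offset(score[y, x - 1], score[y, x], score[y, x + 1])
    dy = parabolic_offset(score[y - 1, x], score[y, x], score[y + 1, x])
    return x + dx, y + dy
```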
The seventh step specifically includes the following steps:
7-1) Solve the fundamental matrix F and the essential matrix E from the coordinates of the matched feature point pairs of the infrared and visible-light images and the intrinsic matrices of the infrared and visible-light cameras. A corresponding infrared/visible pixel point pair u_l, u_r and the fundamental matrix F satisfy:
u_r^T F u_l = 0
Screen the point pairs further with random sample consensus (RANSAC), then substitute the corresponding point coordinates into the formula above to construct a homogeneous linear system and solve for F.
The fundamental matrix and the essential matrix are related by:
E = K_r^T F K_l
where K_l and K_r are the intrinsic matrices of the infrared camera and the visible-light camera, respectively.
7-2) Decompose the corrected rotation and translation between the infrared and visible-light cameras from the essential matrix. The essential matrix E is related to the rotation R and translation t as follows:
E = [t]_× R
where [t]_× is the cross-product (skew-symmetric) matrix of t.
Singular value decomposition of E gives
E = U Σ V^T
Define the two matrices
W = [0 -1 0; 1 0 0; 0 0 1] and Z = [0 1 0; -1 0 0; 0 0 0]
which satisfy ZW = Σ (with Σ = diag(1, 1, 0) for an ideal essential matrix). E can then be written in the following two forms:
(1) E = (U Z U^T)(U W V^T), giving [t]_× = U Z U^T, R = U W V^T;
(2) E = (-U Z U^T)(U W^T V^T), giving [t]_× = -U Z U^T, R = U W^T V^T.
Four pairs of R and t are obtained, and the solution that is meaningful in three dimensions (reconstructed points in front of both cameras) is selected.
7-3) Superimpose the decomposed rotation and translation on the original extrinsics.
The rotation matrix before rectification is denoted R_0 and the translation vector t_0 = (t_x, t_y, t_z)^T; the rotation matrix calculated in the previous step is R and the translation vector is t' = (t'_x, t'_y, t'_z)^T. Composing the residual pose with the half-rotations of step 1-3), the new R_new and t_new are
R_new = R_0^{1/2} R R_0^{1/2}
t_new = R_0^{1/2} t'
Because the essential matrix determines t' only up to scale, t_new must also be multiplied by a coefficient such that its component in the x direction satisfies t_new,x = t_x.
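A sketch of the SVD decomposition of 7-2) and the scale fix of 7-3) (choosing among the four candidates by cheirality, i.e. points lying in front of both cameras, is left out):

```python
import numpy as np

def decompose_essential(E):
    """Return the four candidate (R, t) pairs with E = [t]x R."""
    U, S, Vt = np.linalg.svd(E)
    # force proper rotations
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]              # null direction of [t]x = U Z U^T
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

def rescale_translation(t_new, t0):
    """Step 7-3): E fixes t only up to scale, so rescale t_new so that its
    x component matches the original baseline t0[0]."""
    return t_new * (t0[0] / t_new[0])
```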
The invention has the beneficial effects that:
the invention solves the problem that the position relation between the infrared camera and the visible light camera is changed due to factors such as temperature, humidity, vibration and the like. Has the advantages of high speed, accurate result, simple operation and the like. Furthermore, we register the infrared image and the visible image by fourier transform. Compared with the common method, the method further reduces the matching range of the feature points. Compared with the common registration method, the method is higher in calculation efficiency and robust to frequency-dependent noise.
Drawings
Fig. 1 is an overall flowchart.
Fig. 2 is a correction flowchart.
Fig. 3 shows the difference-of-Gaussian pyramid.
Fig. 4 is a schematic diagram of block matching.
Detailed Description
The following detailed description is made in conjunction with the accompanying drawings and examples:
Firstly, correct the original images:
1-1) For each original image point P_i, compute the corresponding coordinate in the normal coordinate system:
X_i = K_i^{-1} u_i
where u_i is the pixel coordinate of P_i, X_i is the normal coordinate of P_i, and K_i is the intrinsic matrix of the camera corresponding to P_i: the infrared camera's if P_i is a point on the infrared image, the visible-light camera's if P_i is a point on the visible-light image.
1-2) Remove image distortion: compute the undistorted normal coordinates of the original image points. Taking (x_d, y_d) as the initial value of (x, y), the actual (x, y) is obtained by iterating a few times.
1-3) Rotate the two images according to the cameras' original rotation relationship: for each undistorted point P_i with normal coordinate X_i from the previous step, if P_i is an infrared image point, R^{1/2} X_i → X_i; if P_i is a visible-light image point, R^{-1/2} X_i → X_i.
1-4) From the normal coordinate X_i of image point P_i updated in the previous step, compute the rectified image coordinate
K_i X_i → u_i
Given the coordinate u_i of a point before rectification, the rectified coordinate computed by steps 1-1) to 1-4) is denoted F(u_i).
1-5) For each image point v_i of the rectified image I, compute the corresponding pixel position F^{-1}(v_i) in the original image I_0 and fill I with the color value selected from that position of I_0:
I(v_i) = I_0(F^{-1}(v_i))
Because F^{-1}(v_i) generally has fractional coordinates, the color value is computed by bilinear interpolation.
2) Fourier transform registration.
2-1) Apply the Fourier transform to the rectified infrared and visible-light images from step 1), converting them to the frequency domain.
2-2) High-pass filter the frequency-domain images.
2-3) Convert the magnitude of each filtered image to log-polar form and obtain the scale factor and rotation angle with a phase-correlation-based method.
Denote the visible-light image f_1(x, y) and the infrared image f_2(x, y). If the scale factor, rotation angle, and translation from f_2 to f_1 are a, θ_0, and (Δx, Δy) respectively, then:
f_2(x, y) = f_1(a(x cos θ_0 + y sin θ_0) - Δx, a(-x sin θ_0 + y cos θ_0) - Δy)
Fourier transforming this relation gives
F_2(ξ, η) = e^{-2πj(ξΔx + ηΔy)} a^{-2} F_1[a^{-1}(ξ cos θ_0 + η sin θ_0), a^{-1}(-ξ sin θ_0 + η cos θ_0)]
where F_1 and F_2 are the frequency domains of the visible-light image and the infrared image, respectively. Neglecting translation gives
|F_2(ξ, η)| = a^{-2} |F_1[a^{-1}(ξ cos θ_0 + η sin θ_0), a^{-1}(-ξ sin θ_0 + η cos θ_0)]|
Convert both magnitudes to polar coordinates in the frequency domain, with polar angle θ and radius ρ:
r_p(θ, ρ) = |F_1(ρ cos θ, ρ sin θ)|
s_p(θ, ρ) = |F_2(ρ cos θ, ρ sin θ)|
Substituting into the relation above gives
s_p(θ, ρ) = a^{-2} r_p(θ - θ_0, ρ/a)
Let λ = ln ρ and b = ln a, and define r_pl(θ, λ) = r_p(θ, ρ) and s_pl(θ, λ) = s_p(θ, ρ); then:
s_pl(θ, λ) = a^{-2} r_pl(θ - θ_0, λ - b)
Fourier transforming this gives
S_pl(ξ, η) = a^{-2} e^{-2πj(ξθ_0 + ηb)} R_pl(ξ, η)
Substituting into the cross-power spectrum formula gives
(R_pl(ξ, η) S_pl*(ξ, η)) / |R_pl(ξ, η) S_pl*(ξ, η)| = e^{2πj(ξθ_0 + ηb)}
The position (θ_0, b) of the maximum of the inverse Fourier transform of the left-hand side gives the rotation angle θ_0 and the logarithm b of the scale factor, with a = e^b.
2-4) Rotate the infrared image by the angle θ_0 and enlarge it by the scale a, compute the cross-power spectrum with the visible-light image, and obtain the translation with a phase-correlation-based method. Denote by f_3 the infrared image after rotation by θ_0 and enlargement by a; Fourier transforming it together with the visible-light image f_1 yields F_3 and F_1, whose cross-power spectrum is
(F_1(ξ, η) F_3*(ξ, η)) / |F_1(ξ, η) F_3*(ξ, η)| = e^{2πj(ξΔx + ηΔy)}
The position of the maximum of its inverse Fourier transform gives the corresponding translation (Δx, Δy).
2-5) Register the infrared image with the scale factor a, rotation angle θ_0, and translation (Δx, Δy) obtained above, so that the infrared image is aligned with the visible-light image.
3) Extract feature points on the registered infrared and visible-light images, respectively.
3-1) Construct a single-layer difference-of-Gaussian pyramid (DoG). The difference-of-Gaussian pyramid is obtained by differencing adjacent scale spaces and is commonly used in the scale-invariant feature transform (SIFT). The scale space of an image is defined as the convolution of a Gaussian kernel with the image, regarded as a function of the kernel parameter σ. Specifically, the scale space of the scene image I(x, y) is:
L(x, y, σ) = G(x, y, σ) * I(x, y)
where
G(x, y, σ) = (1 / (2πσ^2)) e^{-(x^2 + y^2) / (2σ^2)}
is the Gaussian kernel function and σ is the scale factor, whose size determines the degree of smoothing of the image: a large σ corresponds to a coarse scale (low resolution), a small σ to a fine scale (high resolution); * denotes the convolution operation. L(x, y, σ) is called the scale space of the image I(x, y). Differencing the scale spaces at two different scales yields one layer of the difference-of-Gaussian pyramid (as shown in Fig. 3); the result is additionally multiplied by a normalizing scale factor λ so that the maximum value of the DoG image is 255:
D(x, y, σ) = λ(L(x, y, kσ) - L(x, y, σ))
Unlike SIFT, only one layer of differential scale features is computed, for two reasons: first, computing multi-layer differential scale features is too expensive to run in real time; second, the accuracy of SIFT features obtained from multi-layer differential scale features is too low.
3-2) Take the local extreme points of the single-layer difference-of-Gaussian pyramid D obtained in the previous step as the feature point set {P}.
3-2-1) Dilate D and denote the result D_1. Compare each pixel of D_1 with the points in its 8-neighborhood; if the pixel is a local maximum, add it to the candidate point set P_1.
3-2-2) Negate D, then dilate, and denote the result D_2. Compare each pixel of D_2 with the points in its 8-neighborhood; if the pixel is a local minimum, add it to the candidate point set P_2.
3-2-3) Intersect P_1 and P_2 to obtain P_3 = P_1 ∩ P_2. Take the points of P_3 whose DoG gray value is greater than 15 as the feature point set {P}. The feature point set of the infrared image is {P^ir} and the feature point set of the visible-light image is {P^vis}.
4) Match the feature points extracted in the previous step.
4-1) Divide the rectified infrared image and the rectified visible-light image into m × n blocks. For each feature point P^ir_i of the infrared image, perform steps 4-2) to 4-6).
4-2) Evaluate the degree of similarity between P^ir_i and each feature point of the corresponding visible-light block set {B^vis}; the points whose similarity exceeds a threshold t_1 are taken as coarse matching points, and their set is denoted {P^vis_c}. If there is no such point, discard P^ir_i and select the next feature point, performing step 4-2) again.
4-3) If the maximum similarity s_first and the second-largest similarity s_second between P^ir_i and {P^vis_c} satisfy:
F(s_first, s_second) ≥ t_2
the match is retained, and the point P^vis_m of {P^vis_c} with maximal similarity is taken as the matching point, where t_2 is a threshold and F(s_first, s_second) describes the relationship between s_first and s_second. If not, discard the point and select the next feature point, performing step 4-2) again.
After screening by this rule, match P^vis_m back to the infrared image following steps 4-2) to 4-3) to obtain its corresponding infrared feature point. If that point coincides with P^ir_i, the match (P^ir_i, P^vis_m) is retained; if not, discard the point and select the next feature point, performing step 4-2) again.
4-4) With the infrared feature point P^ir_i as reference, parabolic fitting refines the matched integer-pixel feature point P^vis_m of the visible-light image, giving the sub-pixel feature point of the visible-light image P^vis_m + (δx^vis, δy^vis), where δx^vis is the sub-pixel offset in the x direction and δy^vis is the sub-pixel offset in the y direction.
4-5) With the integer-pixel feature point P^vis_m of the visible-light image as reference, compute the corresponding sub-pixel feature point of the infrared image by the method of 4-4): P^ir_i + (δx^ir, δy^ir), where δx^ir is the sub-pixel offset in the x direction and δy^ir is the sub-pixel offset in the y direction.
4-6) The final matching point pair is (P^ir_i + (δx^ir, δy^ir), P^vis_m + (δx^vis, δy^vis)). Select the next infrared image feature point and perform steps 4-2) to 4-6) again.
5) Calculate the feature points of the original infrared image corresponding to the feature points of the registered infrared image, according to the results of step 2) and step 3).
6) Judge the coverage of the feature points: divide the image into m × n grids; if the feature points cover all the grids, proceed to the next step; otherwise continue capturing images and repeat steps 1) to 5).
7) Correct the calibration result: use the image coordinates of all the feature points to calculate the corrected positional relationship between the two cameras, then superimpose it on the original extrinsics.
7-1) Screen the matched point pairs further with random sample consensus (RANSAC), then substitute the corresponding point coordinates into the epipolar constraint u_r^T F u_l = 0 to construct a homogeneous linear system and solve for the fundamental matrix F.
The fundamental matrix and the essential matrix are related by:
E = K_r^T F K_l
where K_l and K_r are the intrinsic matrices of the infrared camera and the visible-light camera, respectively.
7-2) Decompose the corrected rotation and translation between the infrared and visible-light cameras from the essential matrix. The essential matrix E is related to the rotation R and translation t as follows:
E = [t]_× R
where [t]_× is the cross-product matrix of t.
Singular value decomposition of E gives
E = U Σ V^T
Define the two matrices
W = [0 -1 0; 1 0 0; 0 0 1] and Z = [0 1 0; -1 0 0; 0 0 0]
which satisfy ZW = Σ. E can then be written in the following two forms:
(1) E = (U Z U^T)(U W V^T), giving [t]_× = U Z U^T, R = U W V^T;
(2) E = (-U Z U^T)(U W^T V^T), giving [t]_× = -U Z U^T, R = U W^T V^T.
Four pairs of R and t are obtained, and the solution that is meaningful in three dimensions is selected.
7-3) Superimpose the decomposed rotation and translation on the original extrinsics.
Calculated results (the numerical values of the rotation matrices are shown in the drawings):
Rotation matrix R_0 and translation vector t_0 before rectification:
t_0 = [-124.9870, -3.1082, -3.5752]^T
Newly calculated rotation matrix R' and translation vector t':
t' = [-1.0000, 0.0066, 0.0072]^T
New R_new and t_new:
t_new = [-124.9870, 0.6796, 0.9011]^T

Claims (3)

1. The multispectral stereo camera dynamic registration method based on Fourier transform registration, characterized by comprising the following steps:
firstly, correcting the original images: undistorting and stereo-rectifying the original images according to the respective intrinsic parameters and original extrinsic parameters of the infrared camera and the visible-light camera;
secondly, Fourier transform registration;
2-1) applying the Fourier transform to the rectified infrared and visible-light images from the first step, converting them to the frequency domain;
2-2) high-pass filtering the frequency-domain images;
2-3) converting the magnitude of each filtered image to log-polar form and obtaining the scale factor and rotation angle with a phase-correlation-based method;
2-4) rotating the infrared image by the angle θ_0, enlarging it by the scale a, calculating the cross-power spectrum with the visible-light image, and obtaining the translation with a phase-correlation-based method;
2-5) registering the infrared image with the scale factor a, rotation angle θ_0, and translation (Δx, Δy) obtained above, so that the infrared image is aligned with the visible-light image;
thirdly, extracting feature points on the registered infrared and visible-light images respectively;
fourthly, matching the feature points extracted in the previous step;
the fourth step of feature point matching specifically comprises the following steps:
4-1) dividing both the infrared image and the visible-light image into m × n blocks; for each feature point P^ir_i of the infrared image, performing steps 4-2) to 4-6);
4-2) finding the block B^ir containing P^ir_i in the infrared image; taking the block at the same position in the visible-light image together with its surrounding blocks as the block set {B^vis}, whose feature point set is {P^vis_j}; evaluating the degree of similarity between P^ir_i and these points, and taking the points whose similarity exceeds a threshold t_1 as coarse matching points, their set denoted {P^vis_c}; otherwise discarding the point and selecting the next feature point, performing step 4-2) again;
4-3) if the maximum similarity s_first and the second-largest similarity s_second between P^ir_i and {P^vis_c} satisfy:
F(s_first, s_second) ≥ t_2
retaining the match and taking the point P^vis_m of {P^vis_c} with maximal similarity as the matching point, where t_2 is a threshold and F(s_first, s_second) describes the relationship between s_first and s_second; if not, discarding the point and selecting the next feature point, performing step 4-2) again;
after screening by this rule, matching P^vis_m back to the infrared image following steps 4-2) to 4-3) to obtain its corresponding infrared feature point; if that point coincides with P^ir_i, retaining the match (P^ir_i, P^vis_m); if not, discarding the point and selecting the next feature point, performing step 4-2) again;
4-4) with the infrared feature point P^ir_i as reference, refining by parabolic fitting the matched integer-pixel feature point P^vis_m of the visible-light image, giving the sub-pixel feature point P^vis_m + (δx^vis, δy^vis) of the visible-light image, where δx^vis is the sub-pixel offset in the x direction and δy^vis is the sub-pixel offset in the y direction;
4-5) with the integer-pixel feature point P^vis_m of the visible-light image as reference, calculating the corresponding sub-pixel feature point P^ir_i + (δx^ir, δy^ir) of the infrared image by the method of 4-4), where δx^ir is the sub-pixel offset in the x direction and δy^ir is the sub-pixel offset in the y direction;
4-6) obtaining the final matching point pair (P^ir_i + (δx^ir, δy^ir), P^vis_m + (δx^vis, δy^vis)); selecting the next infrared image feature point and performing steps 4-2) to 4-6) again;
fifthly, calculating the feature points of the original infrared image corresponding to the feature points of the registered infrared image according to the results of the second and third steps;
sixthly, judging the coverage of the feature points: dividing the image into m × n grids; if the feature points cover all the grids, proceeding to the next step; otherwise continuing to capture images and repeating the first to fifth steps;
seventhly, correcting the calibration result:
7-1) solving the fundamental matrix F and the essential matrix E from the coordinates of the matched feature point pairs of the infrared and visible-light images and the intrinsic matrices of the infrared camera and the visible-light camera; a corresponding infrared/visible pixel point pair u_l, u_r and the fundamental matrix F satisfy:
u_r^T F u_l = 0
screening the point pairs further with random sample consensus (RANSAC), then substituting the corresponding point coordinates into the formula above to construct a homogeneous linear system and solve for F;
the fundamental matrix and the essential matrix are related by:
E = K_r^T F K_l
where K_l and K_r are the intrinsic matrices of the infrared camera and the visible-light camera, respectively;
7-2) decomposing the corrected rotation and translation between the infrared and visible-light cameras from the essential matrix: the essential matrix E is related to the rotation R and translation t as follows:
E = [t]_× R
where [t]_× is the cross-product matrix of t;
performing singular value decomposition on E:
E = U Σ V^T
defining the two matrices
W = [0 -1 0; 1 0 0; 0 0 1] and Z = [0 1 0; -1 0 0; 0 0 0]
which satisfy ZW = Σ, E can be written in the following two forms:
(1) E = (U Z U^T)(U W V^T), letting [t]_× = U Z U^T and R = U W V^T;
(2) E = (-U Z U^T)(U W^T V^T), letting [t]_× = -U Z U^T and R = U W^T V^T;
obtaining four pairs of R and t, and selecting the solution that is meaningful in three dimensions;
7-3) superimposing the decomposed rotation and translation on the original extrinsics:
the rotation matrix before rectification is denoted R_0 and the translation vector t_0 = (t_x, t_y, t_z)^T; the rotation matrix calculated in the previous step is R and the translation vector is t' = (t'_x, t'_y, t'_z)^T; composing the residual pose with the half-rotations of the first step, the new R_new and t_new are
R_new = R_0^{1/2} R R_0^{1/2}
t_new = R_0^{1/2} t'
t_new is then multiplied by a coefficient such that its component in the x direction satisfies t_new,x = t_x.
2. The method for dynamic registration of a multispectral stereo camera based on Fourier transform registration as claimed in claim 1, wherein the first step comprises the following steps:
1-1) for each original image point P_i, its normal coordinate is:
X_i = K_i^{-1} u_i
where u_i is the pixel coordinate of P_i, X_i is the normal coordinate of P_i, and K_i is the intrinsic matrix of the camera corresponding to P_i: if P_i is a point on the infrared image, K_i is the intrinsic matrix of the infrared camera; if P_i is a point on the visible-light image, K_i is the intrinsic matrix of the visible-light camera;
1-2) removing image distortion: calculating the undistorted normal coordinates of the original image points, taking (x_d, y_d) as the initial value of (x, y) and iterating a few times to obtain the actual (x, y);
1-3) rotating the two images according to the cameras' original rotation relationship: the previously calibrated rotation matrix R and translation vector t between the two cameras satisfy
X_r = RX_l + t
where X_l is a normal coordinate of the infrared camera and X_r is a normal coordinate of the visible-light camera; the infrared image is rotated by half the angle of R in the positive direction and the visible-light image by half the angle of R in the negative direction; for each undistorted point P_i with normal coordinate X_i from the previous step, if P_i is an infrared image point, R^{1/2} X_i → X_i; if P_i is a visible-light image point, R^{-1/2} X_i → X_i;
1-4) mapping the undistorted, rotated image back to the pixel coordinate system via the formula u = KX: from the updated normal coordinate X_i of image point P_i, calculating the rectified image coordinate
K_i X_i → u_i
given the coordinate u_i of a point before rectification, the rectified coordinate computed by steps 1-1) to 1-4) is denoted F(u_i);
1-5) for each image point v_i of the rectified image I, calculating the corresponding pixel position F^{-1}(v_i) in the original image I_0 and filling I with the color value selected from that position of I_0:
I(v_i) = I_0(F^{-1}(v_i))
because F^{-1}(v_i) generally has fractional coordinates, the color value is computed by bilinear interpolation.
3. The method for dynamic registration of a multispectral stereo camera based on Fourier transform registration according to claim 1 or 2, wherein the third step comprises the following steps:
3-1) constructing a corresponding single-layer difference-of-Gaussian pyramid (DoG) from the infrared grayscale image and from the visible-light grayscale image, respectively;
3-2) taking the local extreme points of the obtained single-layer difference-of-Gaussian pyramid D as the feature point set {P};
3-2-1) dilating D and denoting the result D_1; comparing each pixel of D_1 with the points in its 8-neighborhood, and adding it to the candidate point set P_1 if it is a local maximum;
3-2-2) negating D and then dilating, denoting the result D_2; comparing each pixel of D_2 with the points in its 8-neighborhood, and adding it to the candidate point set P_2 if it is a local minimum;
3-2-3) intersecting P_1 and P_2 to obtain P_3 = P_1 ∩ P_2; taking the points of P_3 whose DoG gray value is greater than 15 as the feature point set {P}; the feature point set of the infrared image is {P^ir} and the feature point set of the visible-light image is {P^vis}.
CN201911153769.6A 2019-11-22 2019-11-22 Multispectral stereo camera dynamic registration method based on Fourier transform registration Active CN110992409B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911153769.6A CN110992409B (en) 2019-11-22 2019-11-22 Multispectral stereo camera dynamic registration method based on Fourier transform registration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911153769.6A CN110992409B (en) 2019-11-22 2019-11-22 Multispectral stereo camera dynamic registration method based on Fourier transform registration

Publications (2)

Publication Number Publication Date
CN110992409A CN110992409A (en) 2020-04-10
CN110992409B true CN110992409B (en) 2022-02-15

Family

ID=70085620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911153769.6A Active CN110992409B (en) 2019-11-22 2019-11-22 Multispectral stereo camera dynamic registration method based on Fourier transform registration

Country Status (1)

Country Link
CN (1) CN110992409B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001845A (en) * 2020-08-21 2020-11-27 沈阳天眼智云信息科技有限公司 Coordinate conversion method of double-light image
CN112037270A (en) * 2020-09-04 2020-12-04 北京航空航天大学 Magneto-optical Kerr image registration correction method and system and microscope system
CN112233158B (en) * 2020-10-14 2022-02-15 俐玛精密测量技术(苏州)有限公司 Secondary projection registration method of micro-nano CT projection image
CN113409450B (en) * 2021-07-09 2022-08-26 浙江大学 Three-dimensional reconstruction method for chickens containing RGBDT information
CN116883291B (en) * 2023-09-06 2023-11-17 山东科技大学 Distortion correction method based on binary Fourier series


Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10176567B2 (en) * 2015-12-21 2019-01-08 Canon Kabushiki Kaisha Physical registration of images acquired by Fourier Ptychography

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN110223330A (en) * 2019-06-12 2019-09-10 国网河北省电力有限公司沧州供电分公司 A kind of method for registering and system of visible light and infrared image
CN110349193A (en) * 2019-06-27 2019-10-18 南京理工大学 Fast image registration method suitable for Fourier transform spectrometer,

Non-Patent Citations (2)

Title
Vijay John et al.; Automatic calibration and registration of lidar and stereo camera without calibration objects; 2015 IEEE International Conference on Vehicular Electronics and Safety (ICVES); 2016-02-18 *
黎俊 et al.; Research on sub-pixel image registration algorithms (亚像素级图像配准算法研究); Journal of Image and Graphics (中国图象图形学报); Nov. 2008; Vol. 13, No. 11 *

Also Published As

Publication number Publication date
CN110992409A (en) 2020-04-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant