Disclosure of Invention
In view of the above, the present invention provides a method and an apparatus for stitching images, so as to solve the prior-art problem that the stitching of two images looks unnatural when a close-range object is present in their overlapping region.
In order to solve the technical problems, the invention adopts the following technical scheme:
an image stitching method comprises the following steps:
calculating the offset values of the corresponding pixel points in the first overlapping area and the second overlapping area;
constructing a difference matrix according to the offset values of the corresponding pixel points in the first overlapping area and the second overlapping area and the pre-calculated color difference values and edge difference values of the corresponding pixel points in the first overlapping area and the second overlapping area;
and stitching the first overlapping area and the second overlapping area according to the difference matrix.
Preferably, calculating the offset values of the corresponding pixel points in the first overlapping area and the second overlapping area includes:
matching the plurality of first feature points extracted in the first overlapping area with the plurality of second feature points extracted in the second overlapping area to obtain a plurality of matched feature point pairs;
calculating a disparity vector of each matched feature point pair according to the spatial position coordinates of the first feature point and the spatial position coordinates of the second feature point in each matched feature point pair;
matching the plurality of first preset feature points extracted from the first overlapping area with the plurality of second preset feature points extracted from the second overlapping area to obtain a plurality of preset matching feature point pairs;
and calculating the offset values of the corresponding pixel points in the first overlapping area and the second overlapping area according to the disparity vector of each matched feature point pair and the preset disparity vector of each preset matching feature point pair.
Preferably, before matching the plurality of first feature points extracted in the first overlapping area with the plurality of second feature points extracted in the second overlapping area to obtain a plurality of matched feature point pairs, the method further includes:
respectively acquiring the first overlapping area and the second overlapping area;
uniformly dividing the first overlapping area and the second overlapping area into n regions, respectively, wherein n is a positive integer and n > 1;
extracting a preset number of first feature points from each of the regions into which the first overlapping area is divided, and extracting the preset number of second feature points from each of the regions into which the second overlapping area is divided.
Preferably, matching the plurality of first feature points extracted in the first overlapping region with the plurality of second feature points extracted in the second overlapping region to obtain a plurality of matched feature point pairs includes:
and performing feature matching on the plurality of first feature points extracted in the first overlapping region and the plurality of second feature points extracted in the second overlapping region by adopting the scale-invariant feature transform (SIFT) algorithm to obtain a plurality of matched feature point pairs.
Preferably, calculating a disparity vector of each of the pairs of matched feature points according to the spatial position coordinates of the first feature point and the spatial position coordinates of the second feature point in each of the pairs of matched feature points includes:
and taking the difference between the spatial position coordinates of the first feature point and the spatial position coordinates of the second feature point in each matching feature point pair as the disparity vector of that matching feature point pair.
Preferably, calculating the offset values of the corresponding pixel points in the first overlapping area and the second overlapping area according to the disparity vector of each matching feature point pair and the preset disparity vector of each preset matching feature point pair, includes:
dividing the topological structure corresponding to the first overlapping area into m triangles by adopting the Delaunay triangulation algorithm, wherein each vertex of each triangle obtained by the division is a first feature point in a matching feature point pair or a first preset feature point in a preset matching feature point pair, each triangle contains one and only one pixel point, and m is a positive integer greater than 1;
calculating the offset value of the pixel point located inside each triangle by adopting a dual difference algorithm, according to the disparity vectors of the matching feature point pairs or the preset disparity vectors of the preset matching feature point pairs respectively corresponding to the three vertices of that triangle;
and calculating the offset value of each pixel point located on a triangle edge by adopting the dual difference algorithm, according to the disparity vectors of the matching feature point pairs or the preset disparity vectors of the preset matching feature point pairs respectively corresponding to the two vertices of the triangle edge on which that pixel point is located.
Preferably, constructing a difference matrix according to the offset values of the corresponding pixels in the first overlapping area and the second overlapping area, and the pre-calculated color difference values and edge difference values of the corresponding pixels in the first overlapping area and the second overlapping area, includes:
calculating difference values of corresponding pixel points in the first overlapping area and the second overlapping area according to the offset value, the color difference value and the edge difference value of the corresponding pixel points in the first overlapping area and the second overlapping area;
and constructing a difference matrix according to the difference values of the corresponding pixel points in the first overlapping area and the second overlapping area.
An apparatus for stitching images, comprising:
the calculation module is used for calculating the offset values of the corresponding pixel points in the first overlapping area and the second overlapping area;
a construction module, configured to construct a difference matrix according to offset values of corresponding pixels in the first overlapping area and the second overlapping area, and color difference values and edge difference values of corresponding pixels in the first overlapping area and the second overlapping area, which are obtained through pre-calculation;
and the stitching module is used for stitching the first overlapping area and the second overlapping area according to the difference matrix.
Preferably, the calculation module comprises:
a first matching sub-module, configured to match the plurality of first feature points extracted in the first overlapping region with the plurality of second feature points extracted in the second overlapping region, so as to obtain a plurality of matched feature point pairs;
the first calculation submodule is used for calculating the disparity vector of each matched feature point pair according to the spatial position coordinates of the first feature point and the spatial position coordinates of the second feature point in each matched feature point pair;
the second matching submodule is used for matching the plurality of first preset feature points extracted from the first overlapping area with the plurality of second preset feature points extracted from the second overlapping area to obtain a plurality of preset matching feature point pairs;
and the second calculation submodule is used for calculating the offset values of the corresponding pixel points in the first overlapping area and the second overlapping area according to the disparity vector of each matched feature point pair and the preset disparity vector of each preset matching feature point pair.
Preferably, the apparatus further comprises:
an obtaining module, configured to, before the first matching sub-module matches the plurality of first feature points extracted in the first overlapping area with the plurality of second feature points extracted in the second overlapping area to obtain a plurality of matched feature point pairs, obtain the first overlapping area and the second overlapping area respectively;
a dividing module, configured to uniformly divide the first overlapping area and the second overlapping area into n areas, where n is a positive integer and n > 1;
an extracting module, configured to extract a preset number of first feature points from each of the regions into which the first overlapping area is divided, and to extract the preset number of second feature points from each of the regions into which the second overlapping area is divided.
Preferably, the first matching sub-module includes:
and the matching unit is used for performing feature matching on the plurality of first feature points extracted from the first overlapping region and the plurality of second feature points extracted from the second overlapping region by adopting the scale-invariant feature transform (SIFT) algorithm to obtain a plurality of matched feature point pairs.
Preferably, the first calculation submodule includes:
and the calculating unit is used for taking the difference between the spatial position coordinates of the first feature point and the spatial position coordinates of the second feature point in each matching feature point pair as the disparity vector of that matching feature point pair.
Preferably, the second calculation submodule includes:
the triangle dividing unit is used for dividing the topological structure corresponding to the first overlapping area into m triangles by adopting the Delaunay triangulation algorithm, wherein each vertex of each triangle obtained by the division is a first feature point in a matching feature point pair or a first preset feature point in a preset matching feature point pair, each triangle contains one and only one pixel point, and m is a positive integer greater than 1;
the first offset calculating unit is used for calculating the offset value of the pixel point located inside each triangle by adopting a dual difference algorithm, according to the disparity vectors of the matching feature point pairs or the preset disparity vectors of the preset matching feature point pairs respectively corresponding to the three vertices of that triangle;
and the second offset calculating unit is used for calculating the offset value of each pixel point located on a triangle edge by adopting the dual difference algorithm, according to the disparity vectors of the matching feature point pairs or the preset disparity vectors of the preset matching feature point pairs respectively corresponding to the two vertices of the triangle edge on which that pixel point is located.
Preferably, the construction module comprises:
a difference value calculating submodule, configured to calculate difference values of corresponding pixel points in the first overlapping area and the second overlapping area according to offset values, color difference values, and edge difference values of the corresponding pixel points in the first overlapping area and the second overlapping area;
and the construction submodule is used for constructing a difference matrix according to the difference values of the corresponding pixel points in the first overlapping area and the second overlapping area.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a method and a device for splicing images, and a difference matrix is constructed according to offset values of corresponding pixel points in a first overlapping area and a second overlapping area and color difference values and edge difference values of the corresponding pixel points in the first overlapping area and the second overlapping area which are obtained through pre-calculation. Since the offset value of the corresponding pixel point is a main characteristic for distinguishing the near view object from the distant view object, the offset value corresponding to the distant view object is small, and the offset value corresponding to the near view object is large. In addition, when the first overlapping area and the second overlapping area are spliced according to the difference matrix, a splicing seam with a small difference value is selected, and when the difference value is small, the corresponding deviation value is small, at the moment, the optimal splicing seam can be formed at a distant view object in the image and cannot be formed at a close view object in the image, and therefore the splicing effect is natural.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an image stitching method. Referring to Fig. 1, the method comprises the following steps:
S11, calculating the offset values of the corresponding pixel points in the first overlapping area and the second overlapping area;
The first overlapping area and the second overlapping area each contain a plurality of pixel points, and the pixel points in the two areas are in correspondence: two pixel points that image the same object point are corresponding pixel points. For example, if one pixel point in the first overlapping area and one pixel point in the second overlapping area both image the tip of a tree, these two pixel points are corresponding pixel points.
S12, constructing a difference matrix according to the offset values of the corresponding pixel points in the first overlapping area and the second overlapping area and the pre-calculated color difference values and edge difference values of the corresponding pixel points in the first overlapping area and the second overlapping area;
specifically, step S12 includes:
1) calculating difference values of corresponding pixel points in the first overlapping area and the second overlapping area according to the offset value, the color difference value and the edge difference value of the corresponding pixel points in the first overlapping area and the second overlapping area;
wherein the difference value of a pair of corresponding pixel points quantifies how different those pixel points are.
Specifically, when calculating the difference values of the corresponding pixel points in the first overlapping area and the second overlapping area, weights are set for the offset value, the color difference value, and the edge difference value of the corresponding pixel points, respectively, where the weight of the offset value of the corresponding pixel point is the largest.
It should be noted that the color difference values and the edge difference values of the corresponding pixel points in the first overlapping area and the second overlapping area need to be calculated in advance. The difference between the gray values of a pair of corresponding pixel points is used as their color difference value, and the difference between their edge values is used as their edge difference value.
The offset value, the color difference value and the edge difference value of each pair of corresponding pixel points in the first overlapping area and the second overlapping area are multiplied by their respective weights, and the products are summed to obtain the difference value of that pair of corresponding pixel points.
2) And constructing a difference matrix according to the difference values of the corresponding pixel points in the first overlapping area and the second overlapping area.
And arranging the difference values of the corresponding pixel points in the first overlapping area and the second overlapping area according to the position relation of the pixel points to obtain a difference matrix.
In this step, the difference values of the corresponding pixels in the first overlapping area and the second overlapping area are obtained, and then a numerical basis is provided for constructing the difference matrix.
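As a minimal sketch of steps 1) and 2), the weighted combination can be written directly in NumPy. The weight values (0.5/0.3/0.2, with the offset weight largest) and the tiny 3x3 difference maps are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def build_difference_matrix(offset, color_diff, edge_diff,
                            w_offset=0.5, w_color=0.3, w_edge=0.2):
    """Combine the three per-pixel difference maps into one difference
    matrix by a weighted sum; the offset term carries the largest weight,
    as the description requires. The weights are illustrative."""
    assert w_offset > w_color and w_offset > w_edge
    return w_offset * offset + w_color * color_diff + w_edge * edge_diff

# Toy 3x3 overlap region: a large offset at the centre (a close-range
# object) dominates the combined difference value there.
offset     = np.array([[0., 0., 0.], [0., 9., 0.], [0., 0., 0.]])
color_diff = np.ones((3, 3))
edge_diff  = np.zeros((3, 3))
D = build_difference_matrix(offset, color_diff, edge_diff)
# D[1, 1] = 0.5*9 + 0.3*1 = 4.8, all other entries 0.3
```

The matrix entries keep the spatial arrangement of the pixel points, which is what the seam-finding step in S13 consumes.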
S13, stitching the first overlapping area and the second overlapping area according to the difference matrix.
According to the difference matrix, a dynamic programming algorithm can be adopted to stitch the first overlapping area and the second overlapping area.
Specifically, an optimal stitching seam is obtained from the difference matrix by the dynamic programming algorithm, and the first overlapping area and the second overlapping area are then stitched along that optimal seam.
In this embodiment, a difference matrix is constructed according to the offset values of the corresponding pixels in the first overlapping area and the second overlapping area, and the pre-calculated color difference values and edge difference values of those corresponding pixels. The offset value of a corresponding pixel point is the main characteristic distinguishing a close-range object from a distant object: the offset value corresponding to a distant object is small, while the offset value corresponding to a close-range object is large. When the two overlapping areas are stitched according to the difference matrix, a stitching seam with small difference values is selected, and a small difference value implies a small offset value. The optimal stitching seam therefore passes through distant objects in the image rather than close-range objects, so the stitching result looks natural.
On the basis of the above embodiment, step S11 includes:
S21, matching the plurality of first feature points extracted in the first overlapping area with the plurality of second feature points extracted in the second overlapping area to obtain a plurality of matched feature point pairs;
the first overlapping area is an overlapping area in a first image of the two images to be spliced; the second overlapping area is an overlapping area in a second image of the two images to be stitched.
Specifically, S21 includes:
and performing feature matching on the plurality of first feature points extracted in the first overlapping region and the plurality of second feature points extracted in the second overlapping region by adopting the scale-invariant feature transform (SIFT) algorithm to obtain a plurality of matched feature point pairs.
It should be noted that after the plurality of matching feature point pairs are obtained, matching singular points can be removed by the random sample consensus (RANSAC) algorithm. Specifically, a matching singular point is a matching feature point pair with a poor degree of matching among the obtained pairs. Matching singular points degrade the stitching effect in the later stitching stage, so they need to be eliminated.
In this step, obtaining the matching feature point pairs with the scale-invariant feature transform (SIFT) algorithm is only one embodiment; any implementation that can obtain a plurality of matching feature point pairs falls within the scope of the present invention.
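The matching rule commonly paired with SIFT descriptors is nearest-neighbour matching with Lowe's ratio test, which can be sketched as below. The toy 2-D descriptors stand in for real 128-D SIFT descriptors, which in practice would come from a feature-extraction library; both the descriptors and the ratio threshold are illustrative assumptions:

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Match each descriptor in desc1 to its nearest neighbour in desc2,
    keeping the pair only if the best distance beats the second-best by
    the ratio test (this rejects ambiguous matches)."""
    pairs = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            pairs.append((i, int(best)))
    return pairs

# Toy descriptors: each row describes one feature point.
desc1 = np.array([[1., 0.], [0., 1.]])
desc2 = np.array([[0.9, 0.1], [0.1, 0.9], [5., 5.]])
matches = match_descriptors(desc1, desc2)  # -> [(0, 0), (1, 1)]
```

A RANSAC pass over the resulting pairs, as the note above describes, would then discard matching singular points.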
Optionally, on the basis of this embodiment, before step S21, the method further includes:
1) respectively acquiring the first overlapping area and the second overlapping area;
2) uniformly dividing the first overlapping area and the second overlapping area into n areas respectively, wherein n is a positive integer and n > 1;
3) a preset number of first feature points are extracted from each of the regions into which the first overlapping area is divided, and the preset number of second feature points are extracted from each of the regions into which the second overlapping area is divided.
Specifically, when the first feature points or the second feature points are extracted, pixel points with higher brightness or stronger edges are selected. If such pixel points are concentrated in one part of the image, the feature point positions become concentrated, which makes the offset values of the corresponding pixel points in the first overlapping region and the second overlapping region difficult to calculate later; the extracted feature points therefore need to be distributed uniformly.
The acquired first overlapping area and second overlapping area are each uniformly divided into n regions. The regions may be triangles, rectangles, polygons and so on, as long as the division is uniform. The uniform division ensures that the extracted feature points are distributed evenly over the whole area, which avoids the situation where concentrated feature point positions make the offset values of the corresponding pixel points difficult to calculate.
Thereafter, a preset number of first feature points are extracted from each of the regions into which the first overlapping area is divided, and the preset number of second feature points are extracted from each of the regions into which the second overlapping area is divided.
The preset number is chosen by the operator according to the complexity of the two images to be stitched.
In this embodiment, the acquired first overlapping area and the acquired second overlapping area are uniformly divided into n regions, a preset number of first feature points are extracted from each region of the first overlapping area, and the preset number of second feature points are extracted from each region of the second overlapping area. This ensures that the extracted feature points are uniformly distributed over the first overlapping area and the second overlapping area.
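The uniform division and per-region extraction described above can be sketched as follows. The n x n grid shape and the use of raw pixel brightness as the extraction criterion are illustrative assumptions:

```python
import numpy as np

def extract_uniform_features(img, n, k):
    """Split a grayscale overlap region into an n x n grid of equal
    rectangles and keep the k brightest pixels of each cell, so that
    feature points stay uniformly spread over the region."""
    h, w = img.shape
    points = []
    for gy in range(n):
        for gx in range(n):
            y0, y1 = gy * h // n, (gy + 1) * h // n
            x0, x1 = gx * w // n, (gx + 1) * w // n
            cell = img[y0:y1, x0:x1]
            flat = np.argsort(cell.ravel())[::-1][:k]  # k brightest pixels
            ys, xs = np.unravel_index(flat, cell.shape)
            points.extend((y0 + int(y), x0 + int(x)) for y, x in zip(ys, xs))
    return points

# 4x4 toy image with brightness increasing left-to-right, top-to-bottom:
img = np.arange(16, dtype=float).reshape(4, 4)
pts = extract_uniform_features(img, n=2, k=1)
# -> one point per grid cell: [(1, 1), (1, 3), (3, 1), (3, 3)]
```

Even though the brightest pixels of the whole image all sit in the bottom row, the grid forces one feature point into every cell, which is the uniformity the embodiment requires.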
S22, calculating the disparity vector of each matched feature point pair according to the spatial position coordinates of the first feature point and the spatial position coordinates of the second feature point in each matched feature point pair;
specifically, step S22 includes:
and taking the difference between the spatial position coordinates of the first feature point and the spatial position coordinates of the second feature point in each matching feature point pair as the disparity vector of that matching feature point pair.
In detail, the first feature point of each matching feature point pair has spatial position coordinates in the first overlapping region, and the second feature point has spatial position coordinates in the second overlapping region; the difference between these two sets of coordinates is taken as the disparity vector of the pair.
Wherein the disparity vector represents a spatial position difference for each matching pair of feature points.
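The disparity vector of step S22 is simply the component-wise coordinate difference of a matched pair; a minimal sketch with illustrative coordinates (the tree-tip positions below are assumptions):

```python
def disparity_vector(p1, p2):
    """Disparity vector of a matched pair: component-wise difference of
    the first feature point's coordinates and the second's."""
    return (p1[0] - p2[0], p1[1] - p2[1])

# A pair whose tree tip sits at (120, 40) in the first overlap region
# and at (100, 38) in the second:
d = disparity_vector((120, 40), (100, 38))  # -> (20, 2)
```

A large disparity vector like this signals a close-range object; distant objects yield disparity vectors near (0, 0).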
S23, matching the plurality of first preset feature points extracted in the first overlapping area with the plurality of second preset feature points extracted in the second overlapping area to obtain a plurality of preset matching feature point pairs;
the first overlapping area is provided with a plurality of first preset characteristic points, the second overlapping area is provided with a plurality of second preset characteristic points, and the plurality of first preset characteristic points and the plurality of second preset characteristic points form a plurality of preset matching characteristic point pairs correspondingly.
It should be noted that the preset disparity vector of each preset matching feature point pair is (∞, ∞). The purpose of selecting the plurality of preset matching feature point pairs is to ensure that, at the final stitching, the optimal stitching seam does not run outside the first overlapping area and the second overlapping area.
S24, calculating the offset values of the corresponding pixel points in the first overlapping area and the second overlapping area according to the disparity vector of each matching feature point pair and the preset disparity vector of each preset matching feature point pair.
Specifically, the disparity vector of each selected matching feature point pair and the preset disparity vector of each preset matching feature point pair are known, but the disparity values of the remaining corresponding pixel points in the first overlapping area and the second overlapping area are unknown. Wherein the disparity vector and the disparity value are derivable from each other.
The disparity value is related to the offset values of the corresponding pixels in the first overlapping area and the second overlapping area, so that the offset values of the corresponding pixels in the first overlapping area and the second overlapping area need to be calculated.
In this embodiment, the offset values of the corresponding pixel points in the first overlapping region and the second overlapping region can be calculated from the disparity vector of each matching feature point pair and the preset disparity vector of each preset matching feature point pair, which provides data support for constructing the difference matrix.
Optionally, on the basis of any of the foregoing embodiments, referring to fig. 3, step S24 includes:
s31, dividing the topological structure corresponding to the first overlapping area into m triangles by adopting a Delaunay triangulation algorithm;
each vertex of each triangle obtained by the division is a first feature point in a matching feature point pair or a first preset feature point in a preset matching feature point pair, each triangle contains one and only one pixel point, and m is a positive integer greater than 1.
It should be noted that, the topological structure corresponding to the first overlapping area is the same as the topological structure corresponding to the second overlapping area, and at this time, the topological structure corresponding to the second overlapping area may also be divided into m triangles by using a Delaunay triangulation algorithm.
S32, calculating the offset value of the pixel point inside each triangle by adopting a dual difference algorithm, according to the disparity vectors of the matching feature point pairs or the preset disparity vectors of the preset matching feature point pairs respectively corresponding to the three vertices of that triangle;
When a pixel point is located inside a triangle, its offset value (Δx, Δy) is calculated with the dual difference algorithm from the disparity vectors of the matching feature point pairs, or the preset disparity vectors of the preset matching feature point pairs, corresponding to the three vertices of the triangle obtained by the division.
S33, calculating the offset value of each pixel point on a triangle edge by adopting the dual difference algorithm, according to the disparity vectors of the matching feature point pairs or the preset disparity vectors of the preset matching feature point pairs respectively corresponding to the two vertices of the triangle edge on which that pixel point is located.
When a pixel point is located on a triangle edge, it may lie on the common edge of two triangles at the same time, so the method used for pixel points inside a triangle cannot decide which triangle's vertices to use. Therefore, the offset value of a pixel point on a triangle edge is calculated with the dual difference algorithm from the disparity vectors of the matching feature point pairs, or the preset disparity vectors of the preset matching feature point pairs, corresponding to the two vertices of the edge on which the pixel point is located.
In this embodiment, a Delaunay triangulation algorithm is adopted to divide the topology structure corresponding to the first overlapping area into m triangles, and then, the offset values of the pixel points located on the sides and inside of each triangle can be calculated according to a dual difference algorithm, so as to provide a basis for constructing the difference matrix.
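One plausible reading of the "dual difference algorithm" is linear interpolation of the vertex disparity vectors: barycentric interpolation for a pixel inside a triangle, and linear interpolation between the two edge vertices for a pixel on an edge. That reading is an assumption for illustration, not the patent's definition:

```python
import numpy as np

def barycentric_offset(p, tri, disp):
    """Offset of pixel p inside triangle tri, interpolated from the
    disparity vectors disp at the three vertices via barycentric
    coordinates. ('Dual difference' read as linear interpolation is an
    assumption.)"""
    a, b, c = (np.asarray(v, float) for v in tri)
    v0, v1, v2 = b - a, c - a, np.asarray(p, float) - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    w0 = 1.0 - w1 - w2
    return (w0 * np.asarray(disp[0], float)
            + w1 * np.asarray(disp[1], float)
            + w2 * np.asarray(disp[2], float))

def edge_offset(p, v0, v1, d0, d1):
    """Pixel on a triangle edge: interpolate between the two edge
    vertices only, so the shared edge of two triangles gives one
    unambiguous value."""
    v0, v1 = np.asarray(v0, float), np.asarray(v1, float)
    t = np.linalg.norm(np.asarray(p, float) - v0) / np.linalg.norm(v1 - v0)
    return (1 - t) * np.asarray(d0, float) + t * np.asarray(d1, float)

tri  = [(0, 0), (4, 0), (0, 4)]
disp = [(0, 0), (4, 0), (0, 4)]   # vertex disparity vectors (toy values)
inner = barycentric_offset((1, 1), tri, disp)                 # -> [1, 1]
edge  = edge_offset((2, 0), (0, 0), (4, 0), (0, 0), (4, 0))   # -> [2, 0]
```

Using only the two edge vertices for on-edge pixels reproduces the point made in S33: the result is the same whichever of the two adjacent triangles the edge is considered to belong to.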
Optionally, another embodiment of the present invention provides an image stitching apparatus, referring to fig. 4, including:
a calculating module 101, configured to calculate offset values of corresponding pixel points in the first overlapping area and the second overlapping area;
a constructing module 102, configured to construct a difference matrix according to the offset values of the corresponding pixels in the first overlapping area and the second overlapping area, and the color difference value and the edge difference value of the corresponding pixels in the first overlapping area and the second overlapping area, which are obtained through pre-calculation;
a stitching module 103, configured to stitch the first overlapping area and the second overlapping area according to the difference matrix.
Optionally, further, the building module 102 includes:
a difference value calculating submodule, configured to calculate difference values of corresponding pixel points in the first overlapping area and the second overlapping area according to offset values, color difference values, and edge difference values of the corresponding pixel points in the first overlapping area and the second overlapping area;
and the construction submodule is used for constructing a difference matrix according to the difference values of the corresponding pixel points in the first overlapping area and the second overlapping area.
In this embodiment, a difference matrix is constructed according to the offset values of the corresponding pixels in the first overlapping area and the second overlapping area, and the pre-calculated color difference values and edge difference values of those corresponding pixels. The offset value of a corresponding pixel point is the main characteristic distinguishing a close-range object from a distant object: the offset value corresponding to a distant object is small, while the offset value corresponding to a close-range object is large. When the two overlapping areas are stitched according to the difference matrix, a stitching seam with small difference values is selected, and a small difference value implies a small offset value. The optimal stitching seam therefore passes through distant objects in the image rather than close-range objects, so the stitching result looks natural.
It should be noted that, for the working process of each module in this embodiment, please refer to the corresponding parts in the above embodiments, which are not described herein again.
Optionally, in another embodiment of the present invention, referring to fig. 5, the calculating module 101 includes:
a first matching sub-module 1011, configured to match the plurality of first feature points extracted in the first overlapping region with the plurality of second feature points extracted in the second overlapping region, so as to obtain a plurality of matching feature point pairs;
a first calculating sub-module 1012, configured to calculate a disparity vector for each matching feature point pair according to a spatial position coordinate of a first feature point and a spatial position coordinate of a second feature point in each matching feature point pair;
a second matching sub-module 1013, configured to match the plurality of first preset feature points extracted in the first overlapping region with the plurality of second preset feature points extracted in the second overlapping region, so as to obtain a plurality of preset matching feature point pairs;
the second calculating sub-module 1014 is configured to calculate, according to the disparity vector of each matching feature point pair and the preset disparity vector of each preset matching feature point pair, an offset value of a corresponding pixel point in the first overlapping area and the second overlapping area.
Optionally, the apparatus further includes:
an obtaining module, configured to, before the first matching sub-module matches the plurality of first feature points extracted in the first overlapping area with the plurality of second feature points extracted in the second overlapping area to obtain a plurality of matched feature point pairs, obtain the first overlapping area and the second overlapping area respectively;
a dividing module, configured to uniformly divide the first overlapping area and the second overlapping area into n areas, where n is a positive integer and n > 1;
an extracting module, configured to extract a preset number of first feature points from each region into which the first overlapping area is divided, and to extract the preset number of second feature points from each region into which the second overlapping area is divided.
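The dividing and extracting modules above can be sketched as a grid-bucketed feature extraction. This is a toy illustration under stated assumptions: a simple gradient-magnitude response stands in for a real detector, and `n_side` and `per_region` are hypothetical parameters (the patent only requires uniform division into n > 1 regions with a preset number of points each).

```python
import numpy as np

def divide_and_extract(gray, n_side=4, per_region=5):
    """Uniformly divide an overlap region into n = n_side * n_side cells
    and keep a preset number of the strongest responses in each cell,
    so feature points cover the whole overlap instead of clustering."""
    h, w = gray.shape
    gy, gx = np.gradient(gray.astype(float))
    response = gx**2 + gy**2            # stand-in corner/edge strength
    points = []
    ys = np.linspace(0, h, n_side + 1, dtype=int)
    xs = np.linspace(0, w, n_side + 1, dtype=int)
    for i in range(n_side):
        for j in range(n_side):
            cell = response[ys[i]:ys[i+1], xs[j]:xs[j+1]]
            flat = np.argsort(cell, axis=None)[::-1][:per_region]
            cy, cx = np.unravel_index(flat, cell.shape)
            points += [(ys[i] + y, xs[j] + x) for y, x in zip(cy, cx)]
    return points

rng = np.random.default_rng(0)
overlap = rng.random((64, 64))          # synthetic overlap region
pts = divide_and_extract(overlap, n_side=4, per_region=5)
```

Bucketing per cell guarantees every region contributes feature points, which is the motivation for dividing the overlap before extraction.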
Optionally, further, the first matching sub-module 1011 includes:
a matching unit, configured to perform Scale-Invariant Feature Transform (SIFT) feature matching between the plurality of first feature points extracted in the first overlapping region and the plurality of second feature points extracted in the second overlapping region, so as to obtain a plurality of matching feature point pairs.
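SIFT detection and description are typically delegated to a library (e.g. OpenCV's `cv2.SIFT_create`); the matching step on already-computed descriptors can be sketched in plain NumPy. The nearest-neighbour search with Lowe's ratio test below is the standard way SIFT descriptors are paired, though the patent itself only specifies "SIFT feature matching"; the toy 4-D descriptors are fabricated for illustration.

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.75):
    """Nearest-neighbour matching with a ratio test, as commonly used for
    SIFT descriptors. desc1, desc2: (N, D) arrays. Returns (i, j) pairs
    meaning first-region point i matches second-region point j."""
    pairs = []
    for i, d in enumerate(desc1):
        dist = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dist)
        best, second = order[0], order[1]
        if dist[best] < ratio * dist[second]:  # keep unambiguous matches only
            pairs.append((i, int(best)))
    return pairs

# toy 4-D "descriptors": rows 0 and 1 have clear counterparts,
# row 2 is ambiguous and should be rejected by the ratio test
desc1 = np.array([[1., 0, 0, 0], [0, 1., 0, 0], [.5, .5, 0, 0]])
desc2 = np.array([[0, 1.02, 0, 0], [1.01, 0, 0, 0], [0, 0, 9., 9.]])
pairs = match_descriptors(desc1, desc2)
```

The ratio test discards points whose best and second-best matches are nearly equidistant, which keeps mismatches out of the later disparity-vector computation.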
Optionally, further, the first calculating sub-module 1012 includes:
a calculating unit, configured to take the difference between the spatial position coordinates of the first feature point and the spatial position coordinates of the second feature point in each matching feature point pair as the disparity vector of that matching feature point pair.
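The disparity vector of a matched pair is just the coordinate-wise difference of the two feature points, as in this one-line sketch (the coordinates are invented for illustration):

```python
import numpy as np

# Disparity vector of one matched feature point pair: the difference of
# the two spatial position coordinates. Sample coordinates are illustrative.
p1 = np.array([120.0, 48.0])   # first feature point, first overlap region
p2 = np.array([114.5, 47.0])   # its match in the second overlap region
disparity = p1 - p2            # large magnitude -> near-view object
```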
In this embodiment, the offset values of the corresponding pixel points in the first overlapping region and the second overlapping region can be calculated from the disparity vector of each matching feature point pair and the preset disparity vector of each preset matching feature point pair, which provides data support for constructing the difference matrix.
It should be noted that, for the working process of each module or unit in this embodiment, please refer to the corresponding description in the foregoing embodiment, which is not described herein again.
Optionally, in another embodiment of the present invention, the second calculating sub-module 1014 includes:
a triangle dividing unit, configured to divide the topological structure corresponding to the first overlapping area into m triangles by using a Delaunay triangulation algorithm, where each vertex of each triangle obtained by the division is a first feature point in a matching feature point pair or a first preset feature point in a preset matching feature point pair, each pixel point lies within only one triangle, and m is a positive integer greater than 1;
a first offset value calculating unit, configured to calculate, by using a dual difference algorithm, the offset value of each pixel point located inside a triangle, according to the disparity vectors of the matching feature point pairs or the preset disparity vectors of the preset matching feature point pairs corresponding to the three vertices of that triangle;
a second offset value calculating unit, configured to calculate, by using the dual difference algorithm, the offset value of each pixel point located on a triangle edge, according to the disparity vectors of the matching feature point pairs or the preset disparity vectors of the preset matching feature point pairs corresponding to the two vertices of that edge.
In this embodiment, a Delaunay triangulation algorithm is used to divide the topological structure corresponding to the first overlapping area into m triangles, and the offset values of the pixel points located on the edges and in the interiors of the triangles are then calculated with a dual difference algorithm, providing a basis for constructing the difference matrix.
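The per-triangle interpolation step can be sketched as follows. This is one plausible reading of the "dual difference algorithm": barycentric interpolation of the vertex disparity vectors, which the document's wording does not pin down exactly. A full pipeline would obtain the triangles from a Delaunay library (e.g. `scipy.spatial.Delaunay`); here a single triangle with invented vertex data is enough to show the computation.

```python
import numpy as np

def interpolate_offset(p, tri_pts, tri_vecs):
    """Interpolate a pixel's offset from the disparity vectors at the
    three vertices of the triangle containing it, using barycentric
    weights. tri_pts: (3, 2) vertex coordinates; tri_vecs: (3, 2)
    disparity vectors at those vertices."""
    a, b, c = tri_pts
    # solve p = a + u*(b - a) + v*(c - a) for the barycentric weights
    m = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(m, p - a)
    w = np.array([1 - u - v, u, v])      # weights sum to 1
    return w @ tri_vecs                  # weighted blend of vertex vectors

tri_pts = np.array([[0., 0.], [4., 0.], [0., 4.]])     # triangle vertices
tri_vecs = np.array([[2., 0.], [6., 0.], [2., 4.]])    # vertex disparities
centroid = tri_pts.mean(axis=0)
off = interpolate_offset(centroid, tri_pts, tri_vecs)  # interior pixel
```

A pixel on an edge gets zero weight for the opposite vertex, so its value depends only on the two edge endpoints; two triangles sharing that edge therefore agree on it, consistent with the separate edge-pixel unit described above.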
It should be noted that, for the working process of each module or unit in this embodiment, please refer to the corresponding description in the foregoing embodiment, which is not described herein again.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.