CN111008932A - Panoramic image splicing method based on image screening - Google Patents

Panoramic image splicing method based on image screening Download PDF

Info

Publication number
CN111008932A
Authority
CN
China
Prior art keywords
image
images
image group
similarity
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911245280.1A
Other languages
Chinese (zh)
Other versions
CN111008932B (en)
Inventor
阎维青
魏鑫
顾美琪
苏凯祺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yantai University
Original Assignee
Yantai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yantai University
Priority to CN201911245280.1A
Publication of CN111008932A
Application granted
Publication of CN111008932B
Expired - Fee Related (current legal status)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a panoramic image splicing method based on image screening, which comprises the following steps: an image screening algorithm based on the similarity matrix between the images removes redundant images from the original image group; based on the weight matrix of the screened image group, the optimal reference image is computed, a splicing order is determined, and the screened image group is divided into several small image groups; similarity transformation is performed between the small image groups to obtain initialized registration parameters, which are then refined between adjacent images under a perspective constraint through a homography model, following the splicing order. The invention screens multiple images with overlapping regions collected over a target region by an unmanned aerial vehicle, removes redundant images from the image group, and finally splices a panoramic image of the target region from the screened image group.

Description

Panoramic image splicing method based on image screening
Technical Field
The invention relates to the field of panoramic image splicing, and in particular to a panoramic image splicing technique in which multiple images with overlapping regions of a target area are acquired by an unmanned aerial vehicle and image screening is performed.
Background
Unmanned aerial vehicle (UAV) remote sensing is a new remote sensing means that has shown strong development momentum in recent years owing to its efficiency, flexibility, speed, low cost, and high resolution. However, during aerial photography with a UAV remote sensing platform, the coverage of a single acquired image is small and often cannot cover the whole required area, due to limitations such as flying height and camera focal length; splicing multiple acquired small-view-angle remote sensing images into a large-view-angle panoramic image has therefore become an important technology.
Panoramic image splicing merges a plurality of adjacent small-view-angle images into a panoramic image with a large viewing angle. Image stitching maps all images onto a common coordinate system by projective warping (e.g., cylindrical, spherical, or perspective). Due to irregular movement of the lens, a certain parallax exists between images, and the spliced panorama almost inevitably suffers from insufficient local splicing precision (FIG. 1(a)) and serious global deformation accumulation (FIG. 1(b)).
To improve the splicing quality, Konolige et al.[1] proposed using sparse bundle adjustment for global optimization, which minimizes the global reprojection error. To avoid non-linear optimization, Kekec et al.[2] used an affine model to initialize the alignment and a homography model for global optimization. As the number of spliced images increases, the accumulation of perspective distortion grows; to avoid this problem, Caballero et al.[3] registered the images with a layered model according to their registration quality. The model has fewer degrees of freedom for the registration of large-parallax images, and the essence of the algorithm is to strike a balance between improving registration accuracy and reducing deformation accumulation.
For the problem of splicing large-view-angle images, using the topological relations among images to improve the splicing quality is also a very efficient approach. To efficiently estimate the topological relations between images, Elibol et al.[4] combined coarse feature point matching with a minimum spanning tree to detect the overlapping relations between the images. Regarding the selection of the reference image, Szeliski[5] showed that the most suitable choice is the image closest to the center of the panoramic image, since the average shortest path from the center image to all other images is the shortest, which minimizes the accumulation of deformation. To implement this idea, Choe et al.[6] selected the optimal reference image with a graph-theoretic algorithm, but this requires the deformation errors between each pair of images to be computed in advance. Xia et al.[7] proposed a registration model that performs initial registration through an affine model and then refines the parameters through a homography model between adjacent images.
These methods can splice multiple images into a complete panoramic image, but as the number of images to be spliced grows, the data become redundant and unnecessary computation is incurred.
Disclosure of Invention
The invention provides a panoramic image splicing method based on image screening, which screens multiple images with overlapping regions collected over a target region by an unmanned aerial vehicle, removes redundant images from the image group, and finally splices a panoramic image of the target region from the screened image group, as described below:
a panoramic image splicing method based on image screening comprises the following steps:
an image screening algorithm based on the similarity matrix between the images removes redundant images from the original image group;
based on the weight matrix of the screened image group, the optimal reference image is computed, a splicing order is determined, and the screened image group is divided into several small image groups;
similarity transformation[9] is first performed between the small image groups to obtain initialized registration parameters, and then the initialized registration parameters are refined between adjacent images under a perspective constraint through a homography model[10], following the splicing order.
Further, removing the redundant images from the original image group with the image screening algorithm based on the similarity matrix specifically includes:
1) setting a similarity threshold value based on a similarity matrix of the original image group;
2) judging whether the maximum value in the similarity matrix of the current image group is greater than the similarity threshold; if so, selecting the two images with the highest similarity in the current image group, determining one of them to be a redundant image, and removing it from the image group;
3) repeating the operation of the step 2) until the maximum value in the similarity matrix of the current image group is smaller than the similarity threshold.
Wherein determining one of the images to be a redundant image specifically includes: calculating, for each of the two images, the sum of the similarities between all images in the remaining image group after that image is deleted, and then determining the image whose deletion leaves the smaller sum of similarities to be the redundant image.
The method for obtaining the optimal reference image based on the weight matrix of the screened image group specifically comprises the following steps:
establishing a weight matrix among all the screened images based on the similarity matrix of the images, running a shortest path algorithm on the weight matrix, and calculating the sum of the weights of the shortest paths from each point to all other points; the image represented by the point with the smallest sum of shortest-path weights is taken as the optimal reference image.
The determining of the splicing sequence specifically comprises:
based on the weight matrix of the image group, the point representing the optimal reference image is taken as the starting point, a breadth-first traversal algorithm is applied, and the resulting sequence of points is taken as the splicing order of the images.
The method further comprises the following steps:
carrying out similarity transformation between the small image groups to obtain initialized registration parameters, and then refining the initialized registration parameters between adjacent images under a perspective constraint through a homography model, following the splicing order; the optimal solution is obtained by minimizing the total energy function.
The technical scheme provided by the invention has the beneficial effects that:
1. on the premise of ensuring the integrity and quality of the final splicing result, the method screens out redundant images from the acquired small-view-angle image group, thereby greatly improving the splicing speed;
2. the method overcomes the accumulation of perspective deformation on large data sets: the images are grouped, and initial registration between image groups is performed through a similarity transformation model, which yields good global consistency;
3. the method performs homography registration between adjacent images based on the topological structure between the images, obtaining a good splicing effect locally;
4. experimental results show that the method can effectively remove redundant images from the image group, and the quality and integrity of the generated large-view-angle panoramic image differ little from the result obtained by splicing all the images.
Drawings
FIG. 1 is a diagram of problems encountered during panoramic image stitching;
wherein (a) shows low local splicing precision, with ghosting or misalignment; (b) shows serious deformation accumulation, where the final splicing result lacks global consistency.
FIG. 2 is a flow chart of a panoramic image stitching method based on image screening;
FIG. 3 is a spatial representation of the splicing order, obtained by selecting an optimal reference image for 61 images and running a breadth-first traversal starting from the optimal reference image;
FIG. 4 is a schematic diagram of the topological structure analysis of 61 images and the screened images;
wherein (a) is the topological structure of the original image group and (b) is the topological structure of the screened image group (containing 31 images).
FIG. 5 is a diagram showing the stitching results of 61 images and the screened images;
wherein (a) is the panoramic image spliced from the original image group and (b) is the panoramic image spliced from the screened image group (containing 31 images).
FIG. 6 is a schematic diagram of the topology analysis of 744 images and the screened images;
wherein (a) is the topological structure of the original image group and (b) is the topological structure of the screened image group (containing 375 images).
Fig. 7 is a diagram illustrating the results of stitching 744 images and the screened images.
wherein (a) is the panoramic image spliced from the original image group and (b) is the panoramic image spliced from the screened image group (containing 375 images).
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below.
Example 1
Referring to FIG. 1, in the process of splicing a panoramic image, as the number of spliced images increases, the data may become redundant, unnecessary computation is generated, and the splicing speed decreases. To solve this problem, an embodiment of the present invention provides a panoramic image splicing method based on image screening; referring to FIG. 2, the method includes the following steps:
101: rapidly acquiring a similarity matrix of the image group, and providing an image screening algorithm, so that the redundant image is removed from the original image group;
102: based on the weight matrix of the screened image group, the optimal reference image is worked out, a splicing sequence is determined, the screened image group is grouped and divided into a plurality of small image groups;
103: and performing similarity transformation between the small image group and the small image group to obtain initialized registration parameters, and then refining the initialized registration parameters between adjacent images under the anti-perspective constraint through a homography model according to a splicing sequence.
Example 2
The scheme of Example 1 is further described below with reference to specific examples:
201: shooting a target area based on an unmanned aerial vehicle, and acquiring a plurality of continuous images with overlapped areas as an original image group;
wherein, the image information of the target area can be obtained through the operation of the step.
202: carrying out rough feature point matching on the original image group to obtain a similarity matrix between the images;
wherein, the step 202 specifically includes:
establishing a similarity matrix between all images {I_i | i = 1, ..., N}, where N denotes the number of original images, according to the number of matched feature points between each pair of images; the similarity M(i, j) between the i-th image and the j-th image can be expressed as:

M(i, j) = Σ_{m=1}^{p} Σ_{n=1}^{q} v_{mn}    (1)

where p is the number of feature points in image i and q is the number of feature points in image j; m and n are the indices of the feature points in image i and image j, respectively; if feature points m and n are matched, v_{mn} is recorded as 1, and otherwise as 0.
The feature point matching above is computed with SURF (speeded up robust features)[8-10]: SURF feature points are extracted and screened, and the screened feature points of each image are then matched one by one against those of all the other images.
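As an illustration of this step, the following Python sketch builds the pairwise similarity matrix of Eq. (1) with OpenCV. It is only a minimal example under stated assumptions: SIFT is used in place of SURF (which requires the opencv-contrib build), Lowe's ratio test stands in for the feature-point screening mentioned above, and the function name similarity_matrix is illustrative rather than part of the invention.

```python
import cv2
import numpy as np

def similarity_matrix(images):
    """Build the pairwise similarity matrix M from feature matches.

    M[i, j] counts the feature correspondences between image i and image j,
    i.e. the sum of v_mn in Eq. (1). SIFT replaces SURF here, and a ratio
    test stands in for the feature-point screening step."""
    detector = cv2.SIFT_create()
    keypoints, descriptors = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        kp, des = detector.detectAndCompute(gray, None)
        keypoints.append(kp)
        descriptors.append(des)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    n = len(images)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if descriptors[i] is None or descriptors[j] is None:
                continue
            matches = matcher.knnMatch(descriptors[i], descriptors[j], k=2)
            # Lowe's ratio test screens out ambiguous correspondences.
            good = [pair[0] for pair in matches
                    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
            M[i, j] = M[j, i] = len(good)
    return M
```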
203: based on a similarity matrix between the images, an image screening algorithm is provided, and redundant images are removed from an original image group;
wherein the step 203 comprises:
based on a similarity matrix M of the original image group obtained in the previous step, (1) setting a similarity threshold; (2) judging whether the maximum value in the similarity matrix of the current image group is greater than a similarity threshold, if so, selecting two images with the highest similarity in the current image group, judging one of the two images as a redundant image, and removing the image group; (3) and (3) repeating the operation of the step (2) until the maximum value in the similarity matrix of the current image group is smaller than the similarity threshold value.
Further, the redundant image among the two most similar images a and b is determined by:

r = argmin_{x ∈ {a, b}} Σ_{I_i, I_j ∈ G\{x}, i<j} M(i, j)    (2)

that is, the sum of the similarities between all images in the remaining image group is calculated after deleting each of the two candidates in turn, and the image whose deletion leaves the smaller sum of similarities is determined to be the redundant image r.
For example, suppose there are 5 images A-E, where image A and image B are the two images with the highest similarity. Delete image A and calculate the sum of the pairwise similarities among the remaining four images B-E; then delete image B instead and calculate the sum of the pairwise similarities among the four images A, C-E. If the sum obtained by deleting image A is smaller than the sum obtained by deleting image B, image A is determined to be the redundant image and removed from the image group, and image B is left untouched. Otherwise, image B is determined to be the redundant image and removed from the image group, and image A is left untouched.
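A minimal Python sketch of the greedy screening loop described in steps (1)-(3) and Eq. (2) follows; the function name and the use of numpy are illustrative, and the stopping rule matches the description above (screening ends once the largest remaining similarity no longer exceeds the threshold).

```python
import numpy as np

def screen_images(M, threshold):
    """Greedy image screening.

    While the largest pairwise similarity exceeds the threshold, take the two
    most similar images and drop the one whose removal leaves the smaller
    total similarity in the remaining group (Eq. (2)).
    Returns the indices of the images that are kept."""
    M = np.asarray(M, dtype=float)
    keep = list(range(M.shape[0]))
    while len(keep) > 1:
        sub = M[np.ix_(keep, keep)]
        np.fill_diagonal(sub, -np.inf)
        if sub.max() <= threshold:
            break
        a_idx, b_idx = np.unravel_index(np.argmax(sub), sub.shape)
        a, b = keep[a_idx], keep[b_idx]

        def remaining_similarity(drop):
            rest = [k for k in keep if k != drop]
            r = M[np.ix_(rest, rest)]
            return np.triu(r, 1).sum()

        # The image whose deletion leaves the smaller remaining similarity
        # is judged redundant and removed from the group.
        redundant = a if remaining_similarity(a) < remaining_similarity(b) else b
        keep.remove(redundant)
    return keep
```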
204: finding an optimal reference image based on the weight matrix of the screened image group, determining a splicing sequence and grouping the images;
wherein the step 204 comprises:
The method for finding the optimal reference image is as follows: first, a weight matrix W between all the screened images is established based on the similarity matrix M of the images, where the weight between image i and image j is defined as:

W(i, j) = ε − ln M(i, j)  if images i and j overlap (M(i, j) > 0), and W(i, j) = inf otherwise    (3)

where ε is the balance weight, inf is infinity, and ln is the natural logarithm.
Then, based on the weight matrix W, a shortest path algorithm is run, the sum of the weights of the shortest paths from each point to all other points is calculated, and the image represented by the point with the smallest sum of shortest-path weights is taken as the optimal reference image, denoted O.

For example, with 5 images A-E and their weight matrix, the shortest path algorithm is run to compute the shortest paths from image A to each of the four images B-E, and the sum of the weights of these four shortest paths is obtained; by analogy, the shortest paths from every image to the remaining four images are computed, together with the corresponding sums of weights. If the sum of the shortest-path weights from image A to the other four images is the smallest, image A is set as the optimal reference image.
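The following Python sketch illustrates this reference-image selection with scipy's shortest-path routine. The weight formula is the reconstruction of Eq. (3) given above (the exact form in the patent drawing is not reproduced here), and ε defaults to a value that keeps all finite weights positive, since Dijkstra's algorithm requires non-negative edge weights; these choices are assumptions of the sketch, not part of the claimed method.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def optimal_reference(M, eps=None):
    """Select the optimal reference image O from the screened image group.

    W(i, j) = eps - ln M(i, j) for overlapping pairs and infinity otherwise
    (a reconstruction of Eq. (3)). Returns the reference index and W."""
    M = np.asarray(M, dtype=float)
    overlap = M > 0
    if eps is None:
        eps = np.log(M[overlap].max()) + 1.0   # keeps every finite weight positive
    n = M.shape[0]
    W = np.full((n, n), np.inf)
    W[overlap] = eps - np.log(M[overlap])

    # For dense input, infinite entries are treated as missing edges.
    dist = shortest_path(W, method='D', directed=False)
    totals = dist.sum(axis=1)
    return int(np.argmin(totals)), W
```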
The method for determining the splicing order is as follows: based on the weight matrix W of the image group, the point O representing the optimal reference image is taken as the starting point, a breadth-first traversal algorithm is applied, and the resulting sequence of points is taken as the image splicing order. FIG. 3 shows the spatial relationship of 61 images: each point represents one image, a connecting line between two points indicates that the corresponding images overlap, and the number on a point is the splicing order of that image. Point 0 represents the optimal reference image; the numbers on the other points are obtained by running the breadth-first traversal algorithm on the weight matrix W starting from point 0, and this sequence of points is taken as the splicing order of the panoramic image.
The method of grouping images is as follows: the number s of images contained in each group is set (s is less than the number n of images in the screened image group). The 1st to s-th images in the splicing order form the first group G_1 = {I_i | i = 1, 2, ..., s}, the (s+1)-th to 2s-th images form the second group G_2 = {I_i | i = s+1, s+2, ..., 2s}, and so on, until the number of remaining images is less than s; the remaining images form the last group G_m = {I_i | i = (m−1)s+1, (m−1)s+2, ..., n}.
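A short Python sketch of the breadth-first traversal and grouping follows. Visiting neighbours in order of increasing edge weight is an assumption of the sketch (the patent only specifies breadth-first traversal from the reference point), and group_size corresponds to the parameter s above.

```python
import numpy as np
from collections import deque

def stitching_order_and_groups(W, ref, group_size):
    """Breadth-first traversal of the overlap graph starting from the
    reference image `ref`, then splitting the visiting order into groups of
    `group_size` images. Finite entries of W mark overlapping image pairs."""
    n = W.shape[0]
    visited = [False] * n
    visited[ref] = True
    order, queue = [], deque([ref])
    while queue:
        u = queue.popleft()
        order.append(u)
        # Enqueue unvisited overlapping neighbours; visiting them in order of
        # increasing edge weight is an assumption of this sketch.
        neighbours = [v for v in range(n)
                      if v != u and not visited[v] and np.isfinite(W[u, v])]
        for v in sorted(neighbours, key=lambda v: W[u, v]):
            visited[v] = True
            queue.append(v)
    groups = [order[i:i + group_size] for i in range(0, len(order), group_size)]
    return order, groups
```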
205: an image registration method combining image global similarity transformation and local perspective transformation;
wherein, the step 205 specifically comprises:
in order to obtain good global consistency and prevent accumulation of deformation errors, the initial registration between image sets is performed by a similarity transformation model.
The set of similarity transformations of image group G_m is denoted S_m = {S_i^m | i = 1, ..., n_1}, where S_i^m is the similarity transformation matrix from the i-th image of group G_m to the optimal reference image O and n_1 is the number of images in G_m; the set of similarity transformations of the adjacent image group G_{m+1} is denoted S_{m+1} = {S_j^{m+1} | j = 1, ..., n_2}, where S_j^{m+1} is the similarity transformation matrix from the j-th image of group G_{m+1} to the optimal reference image O and n_2 is the number of images in G_{m+1}. The energy function for initializing the registration is:
E(S) = E_1(S | G_m, G_{m+1}) + E_2(S_m | G_m)    (4)

where E_1(S | G_m, G_{m+1}) represents the sum of registration errors between image group G_m and its adjacent image group G_{m+1}, and S = S_m ∪ S_{m+1} represents the union of the parameter sets of the similarity transformation models of the two groups. E_1 is defined as:
E_1(S | G_m, G_{m+1}) = Σ_{I_i ∈ G_m, I_j ∈ G_{m+1}} Σ_{k=1}^{M_{i,j}} || T(S_i^O x_{ij}^k) − T(S_j^O x_{ji}^k) ||^2    (5)

E_2(S_m | G_m) represents the sum of registration errors between images with overlapping regions inside group G_m, which is defined as:

E_2(S_m | G_m) = Σ_{I_i, I_j ∈ G_m} Σ_{k=1}^{M_{i,j}} || T(S_i^O x_{ij}^k) − T(S_j^O x_{ji}^k) ||^2    (6)

where the sums run over pairs of images with overlapping regions, T(x) denotes the conversion to the non-homogeneous coordinate x, S_i^O is the similarity transformation matrix from image i to the optimal reference image O, x_{ij}^k is the two-dimensional coordinate, on image i, of the k-th matching point between image i and image j, and M_{i,j} is the number of matching points between image i and image j.
To obtain the best global consistency, the optimal solution is obtained by minimizing the total energy function. Through the above operations, the set of similarity transformation matrices from all images to the optimal reference image O is obtained:

{S_i^O | i = 1, 2, ..., n}

where S_i^O is the similarity transformation matrix from image i to the optimal reference image O and n is the total number of images in the image group. To improve the registration accuracy of the local overlapping regions of the panoramic image, optimization is then performed through a homography model.
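The sketch below illustrates, in Python with scipy, one way to minimize an energy of the form of Eqs. (4)-(6): each image's similarity transform to the reference is parameterized by scale, rotation, and translation, and the summed squared differences of transformed matching points are minimized. The residual form, the soft pinning of the reference image, and the matches data structure are assumptions of the sketch, since the patent's formula images are not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def sim_matrix(p):
    """3x3 similarity transform built from parameters (s, theta, tx, ty)."""
    s, th, tx, ty = p
    c, si = s * np.cos(th), s * np.sin(th)
    return np.array([[c, -si, tx], [si, c, ty], [0.0, 0.0, 1.0]])

def apply(T, pts):
    """Apply a 3x3 transform to Nx2 points and return non-homogeneous
    coordinates (the T(x) of Eqs. (5)-(6))."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = ph @ T.T
    return q[:, :2] / q[:, 2:3]

def initialize_similarity(matches, n_images, ref):
    """Initialize per-image similarity transforms to the reference image.

    matches: dict {(i, j): (pts_i, pts_j)} of corresponding Nx2 point arrays
    between overlapping images (pairs inside a group and across adjacent
    groups pooled together). Minimizes the summed squared registration
    error, with the reference image softly pinned to the identity."""
    x0 = np.tile([1.0, 0.0, 0.0, 0.0], n_images)   # s=1, theta=0, t=0 everywhere

    def residuals(x):
        params = x.reshape(n_images, 4)
        res = []
        for (i, j), (pi, pj) in matches.items():
            Ti, Tj = sim_matrix(params[i]), sim_matrix(params[j])
            res.append((apply(Ti, pi) - apply(Tj, pj)).ravel())
        # Keep the reference image close to the identity transform.
        res.append(10.0 * (params[ref] - np.array([1.0, 0.0, 0.0, 0.0])))
        return np.concatenate(res)

    sol = least_squares(residuals, x0)
    return [sim_matrix(p) for p in sol.x.reshape(n_images, 4)]
```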
The homography transformation matrix optimization process is as follows: the similarity transformation matrix S_i^O is used to initialize the homography transformation matrix H_i^O, and the transformation matrices H are optimized according to:
E(H) = E_1(H) + λ E_2(H)    (7)
where λ is a weight coefficient balancing E_1(H) and E_2(H). The purpose of E_1(H) is to minimize the sum of squared registration errors of the feature points between images and obtain a good local registration effect; it is defined as:

E_1(H) = Σ_{i,j} Σ_{k=1}^{M_{i,j}} || T(H_i^O x_{ij}^k) − T(H_j^O x_{ji}^k) ||^2    (8)

where the sum runs over adjacent images with overlapping regions, H_i^O is the homography transformation matrix from image i to the optimal reference image O, H_j^O is the homography transformation matrix from image j to the optimal reference image O, x_{ij}^k is the two-dimensional coordinate, on image i, of the k-th matching point between image i and image j, and x_{ji}^k is the corresponding two-dimensional coordinate of that matching point on image j.
The purpose of E_2(H) is to maintain global consistency and prevent severe accumulation of perspective distortion. Therefore, during the homography optimization, the homography model parameters should remain close to the initialized similarity transformation model parameters, so that points are not displaced too far during the feature point transformation; it is defined as:

E_2(H) = Σ_{i=1}^{n} || H_i^O − S_i^O ||^2    (9)
The parameters of each image's homography transformation matrix H_i^O are refined in this way, and the optimized result is used as the final splicing transformation matrix of each image.
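A companion Python sketch of the homography refinement of Eqs. (7)-(9) is given below. Each homography is initialized from its similarity transform, and a Frobenius-norm penalty weighted by λ keeps it close to that initialization; treating E_2(H) as this penalty is an assumption, as are the choice of λ and the matches data structure. The apply() helper is the same as in the previous sketch and is repeated for completeness.

```python
import numpy as np
from scipy.optimize import least_squares

def apply(T, pts):
    """Apply a 3x3 transform to Nx2 points, returning non-homogeneous coordinates."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = ph @ T.T
    return q[:, :2] / q[:, 2:3]

def refine_homographies(S_list, matches, lam=0.01):
    """Refine per-image transforms: feature reprojection error (Eq. (8)) plus
    lam times the deviation of each homography from its similarity
    initialization (Eq. (9), reconstructed as a Frobenius-norm penalty).

    S_list: list of 3x3 similarity matrices to the reference image O.
    matches: dict {(i, j): (pts_i, pts_j)} of matching points between
    adjacent overlapping images."""
    n = len(S_list)
    # 8 free parameters per homography; the bottom-right entry is fixed to 1.
    x0 = np.concatenate([(S / S[2, 2]).ravel()[:8] for S in S_list])

    def H_of(p):
        return np.append(p, 1.0).reshape(3, 3)

    def residuals(x):
        P = x.reshape(n, 8)
        res = []
        for (i, j), (pi, pj) in matches.items():
            Hi, Hj = H_of(P[i]), H_of(P[j])
            res.append((apply(Hi, pi) - apply(Hj, pj)).ravel())          # E1 term
        for i in range(n):
            res.append(np.sqrt(lam) * (H_of(P[i]) - S_list[i]).ravel())  # E2 term
        return np.concatenate(res)

    sol = least_squares(residuals, x0)
    return [H_of(p) for p in sol.x.reshape(n, 8)]
```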
Example 3
In order to verify the effectiveness of the method, in this section, two groups of image sets collected by the drone are tested, and the panoramic image generated by the original image group is compared with the panoramic image generated by the image group after screening. FIG. 4 is a comparison of topological structures of 61 images before and after screening, FIG. 4(a) is a topological structure of an original image group, FIG. 4(b) is a topological structure of an image group after screening, FIG. 5 is a comparison of stitching results of 61 images before and after screening, FIG. 5(a) is a stitching result of an original image group, and FIG. 5(b) is a stitching result of an image group after screening; fig. 6 is a comparison of topology structures before and after screening of 744 images, fig. 6(a) is a topology structure of an original image group, fig. 6(b) is a topology structure of an image group after screening, fig. 7 is a comparison of stitching results before and after screening of 744 images, fig. 7(a) is a stitching result of an original image group, and fig. 7(b) is a stitching result of an image group after screening.
The experimental result shows the effectiveness of the method, and the method can remove the redundant images from the image group on the premise of ensuring the integrity and quality of the final splicing result, thereby improving the splicing speed.
References
[1] K. Konolige, Sparse sparse bundle adjustment, in: British Machine Vision Conference, 2010, pp. 1-10.
[2] T. Kekec, A. Yildirim, M. Unel, A new approach to real-time mosaicing of aerial images, Robot. Auton. Syst. 62(12) (2014) 1755-1767.
[3] F. Caballero, L. Merino, J. Ferruz, A. Ollero, Homography based Kalman filter for mosaic building. Applications to UAV position estimation, in: Proceedings of the IEEE International Conference on Robotics and Automation, 2007, pp. 2004-2009.
[4] A. Elibol, N. Gracias, R. Garcia, Fast topology estimation for image mosaicing using adaptive information thresholding, Robot. Auton. Syst. 61(2) (2013) 125-136.
[5] R. Szeliski, Image alignment and stitching: a tutorial, Found. Trends Comput. Graph. Vis. 2(1) (2006) 1-104.
[6] T. E. Choe, I. Cohen, M. Lee, G. Medioni, Optimal global mosaic generation from retinal images, in: Proceedings of the IEEE International Conference on Pattern Recognition, Vol. 3, 2006, pp. 681-684.
[7] M. Xia, J. Yao, R. Xie, L. Li, W. Zhang, Globally consistent alignment for planar mosaicking via topology analysis, Pattern Recognition 66 (2017) 239-252.
[8] H. Bay, T. Tuytelaars, L. Van Gool, SURF: Speeded up robust features, in: European Conference on Computer Vision, Springer, Berlin, Heidelberg, 2006.
[9] O. Taussky, H. Zassenhaus, On the similarity transformation between a matrix and its transpose, Pacific Journal of Mathematics 9(3) (1959) 893-896.
[10] E. Dubrofsky, Homography Estimation, thesis, University of British Columbia, Vancouver, 2009.
Those skilled in the art will appreciate that the drawings are only schematic illustrations of preferred embodiments, and the above-described embodiments of the present invention are merely provided for description and do not represent the merits of the embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (6)

1. A panoramic image splicing method based on image screening is characterized by comprising the following steps:
an image screening algorithm based on the similarity matrix between the images removes redundant images from the original image group;
based on the weight matrix of the screened image group, the optimal reference image is computed, a splicing order is determined, and the screened image group is divided into several small image groups;
and performing similarity transformation between the small image groups to obtain initialized registration parameters, and then refining the initialized registration parameters between adjacent images under a perspective constraint through a homography model, following the splicing order.
2. The panoramic image stitching method based on image screening according to claim 1, wherein removing redundant images from the original image group with the image screening algorithm based on the similarity matrix specifically comprises:
1) setting a similarity threshold value based on a similarity matrix of the original image group;
2) judging whether the maximum value in the similarity matrix of the current image group is greater than the similarity threshold; if so, selecting the two images with the highest similarity in the current image group, determining one of them to be a redundant image, and removing it from the image group;
3) repeating the operation of the step 2) until the maximum value in the similarity matrix of the current image group is smaller than the similarity threshold.
3. The method for stitching panoramic images based on image screening according to claim 2, wherein determining one of the images to be a redundant image specifically comprises: calculating, for each of the two images, the sum of the similarities between all images in the remaining image group after that image is deleted, and then determining the image whose deletion leaves the smaller sum of similarities to be the redundant image.
4. The method for stitching panoramic images based on image screening according to claim 1, wherein the finding of the optimal reference image based on the weight matrix of the screened image group specifically comprises:
establishing a weight matrix among all the screened images based on the similarity matrix of the images, running a shortest path algorithm on the weight matrix, and calculating the sum of the weights of the shortest paths from each point to all other points; the image represented by the point with the smallest sum of shortest-path weights is taken as the optimal reference image.
5. The panoramic image stitching method based on image screening according to claim 1, wherein the determining of the stitching sequence specifically comprises:
based on the weight matrix of the image group, the point representing the optimal reference image is taken as the starting point, a breadth-first traversal algorithm is applied, and the resulting sequence of points is taken as the splicing order of the images.
6. The method for stitching the panoramic images based on the image screening as claimed in claim 1, further comprising:
carrying out similarity transformation between the small image groups to obtain initialized registration parameters, and then refining the initialized registration parameters between adjacent images under a perspective constraint through a homography model, following the splicing order; the optimal solution is obtained by minimizing the total energy function.
CN201911245280.1A 2019-12-06 2019-12-06 Panoramic image splicing method based on image screening Expired - Fee Related CN111008932B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911245280.1A CN111008932B (en) 2019-12-06 2019-12-06 Panoramic image splicing method based on image screening

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911245280.1A CN111008932B (en) 2019-12-06 2019-12-06 Panoramic image splicing method based on image screening

Publications (2)

Publication Number Publication Date
CN111008932A true CN111008932A (en) 2020-04-14
CN111008932B CN111008932B (en) 2021-05-25

Family

ID=70115500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911245280.1A Expired - Fee Related CN111008932B (en) 2019-12-06 2019-12-06 Panoramic image splicing method based on image screening

Country Status (1)

Country Link
CN (1) CN111008932B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950426A (en) * 2010-09-29 2011-01-19 北京航空航天大学 Vehicle relay tracking method in multi-camera scene
CN102169576A (en) * 2011-04-02 2011-08-31 北京理工大学 Quantified evaluation method of image mosaic algorithms
KR101464218B1 (en) * 2014-04-25 2014-11-24 주식회사 이오씨 Apparatus And Method Of Processing An Image Of Panorama Camera
CN107274346A (en) * 2017-06-23 2017-10-20 中国科学技术大学 Real-time panoramic video splicing system
CN109658370A (en) * 2018-11-29 2019-04-19 天津大学 Image split-joint method based on mixing transformation
CN109741240A (en) * 2018-12-25 2019-05-10 常熟理工学院 A kind of more flat image joining methods based on hierarchical clustering

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TAUSSKY O,ZASSENHAUS H.: "On the similarity transformation between a matrix and its transpose", 《PACIFIC JOURNAL OF MATHEMATICS》 *
XIA MENGHAN,ET AL: "Globally consistent alignment for planar mosaicking via topology analysis", 《PATTERN RECOGNITION》 *
CHANG WEI ET AL.: "An improved fast panoramic image stitching algorithm", 《ELECTRONIC MEASUREMENT TECHNOLOGY》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222817A (en) * 2021-05-13 2021-08-06 哈尔滨工程大学 Image feature extraction-based 12-channel video image splicing and image registration method
CN115713700A (en) * 2022-11-23 2023-02-24 广东省国土资源测绘院 Method for collecting typical crop planting samples in cooperation with open space
CN115713700B (en) * 2022-11-23 2023-07-28 广东省国土资源测绘院 Air-ground cooperative typical crop planting sample collection method

Also Published As

Publication number Publication date
CN111008932B (en) 2021-05-25

Similar Documents

Publication Publication Date Title
CN108665496B (en) End-to-end semantic instant positioning and mapping method based on deep learning
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
US6671399B1 (en) Fast epipolar line adjustment of stereo pairs
CN108921781A (en) A kind of light field joining method based on depth
CN112233181A (en) 6D pose recognition method and device and computer storage medium
CN109584156A (en) Micro- sequence image splicing method and device
CN109145747A (en) A kind of water surface panoramic picture semantic segmentation method
CN109118544B (en) Synthetic aperture imaging method based on perspective transformation
CN110717936B (en) Image stitching method based on camera attitude estimation
CN108171249B (en) RGBD data-based local descriptor learning method
CN111008932B (en) Panoramic image splicing method based on image screening
Elibol et al. Efficient image mosaicing for multi-robot visual underwater mapping
CN111798373A (en) Rapid unmanned aerial vehicle image stitching method based on local plane hypothesis and six-degree-of-freedom pose optimization
CN111981982A (en) Multi-directional cooperative target optical measurement method based on weighted SFM algorithm
WO2021035627A1 (en) Depth map acquisition method and device, and computer storage medium
CN113793266A (en) Multi-view machine vision image splicing method, system and storage medium
CN106204507B (en) Unmanned aerial vehicle image splicing method
Holzer et al. Multilayer adaptive linear predictors for real-time tracking
CN114663880A (en) Three-dimensional target detection method based on multi-level cross-modal self-attention mechanism
CN114022525A (en) Point cloud registration method and device based on deep learning, terminal equipment and medium
CN116051658B (en) Camera hand-eye calibration method and device for target detection based on binocular vision
RU2384882C1 (en) Method for automatic linking panoramic landscape images
CN115456870A (en) Multi-image splicing method based on external parameter estimation
US11747141B2 (en) System and method for providing improved geocoded reference data to a 3D map representation
JP2007280032A (en) Image processing apparatus, method and program

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20210525
Termination date: 20211206