CN115393196A - Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging - Google Patents

Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging

Info

Publication number
CN115393196A
CN115393196A (application CN202211306562.XA)
Authority
CN
China
Prior art keywords
image
images
point
spliced
adjacent
Prior art date
Legal status
Granted
Application number
CN202211306562.XA
Other languages
Chinese (zh)
Other versions
CN115393196B (en)
Inventor
何佳妮
金祥博
王跃明
曾玉明
吴越
姜璟
Current Assignee
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date
Filing date
Publication date
Application filed by Zhejiang Lab
Priority to CN202211306562.XA
Publication of CN115393196A
Application granted
Publication of CN115393196B
Status: Active

Classifications

    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features
    • G06T2207/10048 Infrared image


Abstract

The invention discloses a seamless stitching method for infrared multi-sequence images acquired by unmanned aerial vehicle area-array swing-scan imaging. The method stretches the gray levels of the infrared images, extracts feature points, establishes a vector relation from the original coordinates, matches feature points between images with overlapping regions, computes central control points from the matched coordinates, partitions each image along the heading and wingspan directions, computes the coordinate offset of each region, determines the offset of every pixel from its distance to the center point and its angles to the four directions perpendicular to the overlap regions, then computes cost and value matrices for adjacent images and solves a weighted optimal suture line to achieve seamless stitching. Taking into account the characteristics of aerial infrared multi-sequence images from area-array swing-scan imaging, the method deforms images using the feature points and central control points and searches for the position of minimum difference in the overlap region as the suture line, achieving fast seamless stitching.

Description

Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging
Technical Field
The invention relates to the field of image splicing, in particular to an infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging.
Background
An aerial infrared image is formed by mounting an infrared sensor on an aerial platform and remotely recording the radiation and reflected infrared energy of ground objects, thereby acquiring their radiation and temperature information. Compared with visible-light imaging, infrared imaging works around the clock and can be applied in fields such as natural disasters, environmental pollution, and battlefield reconnaissance. Image stitching is a key technology in aerial image processing: a group of images with overlapping regions is registered and fused into a panoramic image with a wider field of view, and the stitching accuracy directly affects subsequent applications. However, infrared images have a low signal-to-noise ratio and low contrast, so ground targets are indistinct and spatially highly correlated, making feature-point extraction difficult and the registration error of same-name points large. In addition, infrared images acquired by UAV area-array swing-scan imaging exhibit large parallax: the images are obtained from non-single viewpoints over undulating terrain and therefore do not satisfy the single-point perspective assumption. Existing image stitching techniques are mainly based on matching the overlapping regions and can be divided into gray-level-based methods, transform-domain-based methods, and feature-based methods that obtain matched feature-point pairs in the overlap region and apply projective or affine transformation via a homography matrix.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a registration, deformation and stitching method for infrared multi-sequence images of UAV area-array swing-scan imaging, which solves the technical problem that conventional feature-point matching combined with a homography transformation matrix stitches such images poorly.
The purpose of the invention is realized by the following technical scheme: a seamless splicing method for unmanned aerial vehicle area-array swing-scan infrared multi-sequence images comprises the following steps:
(1) Preprocessing an image to be spliced, wherein the preprocessing is to perform gray stretching on an infrared image and enhance the image contrast, and the image to be spliced is an infrared image swept by an unmanned aerial vehicle area array;
(2) Extracting the feature points in the image in the step (1) by utilizing an SIFT algorithm to obtain scale-invariant feature points in the image;
(3) Establishing a vector coordinate relation from the minimum coordinates of the images in step (2), the image size, and the sweep sequence;
(4) According to the vector coordinates of the images to be spliced, calculating the serial numbers of overlapping images among adjacent images in the heading and wingspan directions, matching feature points of adjacent images pairwise, eliminating mismatched feature points with the RANSAC algorithm to obtain accurately matched feature-point pairs, and taking the midpoint of each matched pair in the local coordinate system as a control point;
(5) Dividing the image to be spliced into four regions along the lines connecting its center point to its four vertices, the regions corresponding to the left and right adjacent images in the heading direction and the upper and lower adjacent images in the wingspan direction; calculating the coordinate offsets between all feature points and control points in each of the four directions, and taking the mean offset of all points in a direction as the image's offset in that direction;
(6) Fixing the position of the center point of the image to be spliced, determining the offsets of the remaining pixels from their distance to the center point and their angles to the four directions perpendicular to the overlap regions, and transforming the coordinates of all pixels by their respective offsets to obtain the deformed image; calculating the cost and value matrices of adjacent images and solving the optimal suture line by weighting;
(7) Carrying out image splicing of the deformed images obtained in step (6) along the suture lines in the four adjacent directions.
Specifically, the gray stretching in the step (1) is to determine a gray threshold range according to a histogram of the image to be stitched, and normalize the gray value to be within a range of 0 to 255.
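The histogram-based gray stretch of step (1) can be sketched as follows in Python; the percentile choice for the threshold range is an assumption, since the text only states that the range is determined from the histogram.

```python
import numpy as np

def gray_stretch(img, low_pct=1, high_pct=99):
    """Stretch infrared gray levels into the 0-255 range.

    The threshold range is taken from the image histogram via percentiles
    (the percentile values here are illustrative assumptions)."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return np.clip(stretched * 255.0, 0, 255).astype(np.uint8)

# A 14-bit-like infrared frame with a narrow dynamic range.
rng = np.random.default_rng(0)
frame = rng.integers(8000, 8200, size=(64, 64)).astype(np.uint16)
out = gray_stretch(frame)
```

After stretching, the narrow raw dynamic range fills the full 8-bit range, which makes subsequent SIFT extraction more reliable.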
Further, the control-point calculation in step (4) is implemented by the following sub-steps:
(3.1) Determining the adjacent image groups of the image to be spliced in the heading and wingspan directions: the UAV covers the whole area along an S-shaped route. If the image is the first or last one acquired in the first or last line of the route, it has only 2 adjacent images; if it is the first or last image acquired in a middle line, it has 3 adjacent images; if it is a middle image of a middle line, the group to be spliced comprises 4 images;
(3.2) obtaining the feature point coordinates, feature point description, image coordinates and size information of the image to be spliced and the adjacent images;
(3.3) Matching the feature points of the image to be spliced with those of each adjacent image, and removing mismatched points from the initial matches with the RANSAC algorithm; if the number Q of matched feature-point pairs retained after removal is less than 10, it is considered that no reliable matched feature-point pairs exist;
(3.4) Combining the image coordinate information with the feature-point coordinates relative to the image, local-coordinate-system coordinates are assigned to the retained correct matching feature points, and the midpoints of the matched feature points are taken as central control points.
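A minimal Python sketch of sub-steps (3.3) and (3.4): RANSAC outlier rejection followed by taking midpoints of the retained pairs as control points. The pure-translation motion model inside RANSAC is a simplifying assumption (the text only names the RANSAC algorithm), and the point values are illustrative.

```python
import numpy as np

def ransac_translation(src, dst, thresh=3.0, iters=200, seed=0):
    # Hypothesize a translation from one randomly chosen pair and keep
    # the hypothesis with the most inliers (reprojection error < thresh).
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]
        err = np.linalg.norm(dst - (src + t), axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

def control_points(src, dst, inliers):
    # Midpoint of each retained matched pair in the shared local frame.
    return (src[inliers] + dst[inliers]) / 2.0

src = np.array([[0, 0], [10, 0], [0, 10], [5, 5], [3, 7]], float)
dst = src + np.array([20.0, -4.0])   # true inter-image translation
dst[4] += [60, 60]                   # one deliberately wrong match
keep = ransac_translation(src, dst, thresh=3.0)
cps = control_points(src, dst, keep)
```

The wrong match is rejected, and each control point lies halfway between the two same-name points, which is where both images are later deformed to meet.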
Further, the step (5) is realized by the following sub-steps:
(4.1) determining the midpoint of the images to be spliced and the maximum and minimum coordinates of all the images in the image group under the local coordinate system;
(4.2) Taking the midpoint of the image to be spliced as the pole and a vertically downward ray as the polar axis, calculating the included angles between the midpoint and the four vertices; the lines connecting the midpoint to the four vertices divide the image into 4 regions (the formula image is not reproduced in the text), where θ_l, θ_r, θ_u and θ_d are the angles formed by the left, right, upper and lower vertices with the image midpoint, (x_c, y_c) are the horizontal and vertical coordinates of the image center point, (x_tl, y_tl) those of the top-left vertex, (x_bl, y_bl) those of the bottom-left vertex, (x_br, y_br) those of the bottom-right vertex, and (x_tr, y_tr) those of the top-right vertex;
(4.3) Calculating the coordinate offsets of all matched feature points relative to their control points in each of the four regions and taking the mean as that region's coordinate offset:

Δx = (1/N) Σ_{i=1}^{N} (x_i^c − x_i),  Δy = (1/N) Σ_{i=1}^{N} (y_i^c − y_i)

where (x_i, y_i) is the i-th extracted feature point of the image to be spliced, (x_i^c, y_i^c) is its central control point, i is the feature-point index running over the N matched pairs, (x_i^c − x_i) and (y_i^c − y_i) are the x- and y-direction coordinate offsets of each matched pair, and Δx and Δy are the mean coordinate offsets of all feature points in the x and y directions.
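The per-region averaging of sub-step (4.3) amounts to a mean over the matched pairs. A toy numpy sketch (point values are illustrative, and the assignment of pairs to one of the four regions is assumed already done by the vertex-diagonal partition):

```python
import numpy as np

# Feature points of the image to be spliced and their central control
# points in one region, both in the shared local coordinate system.
pts = np.array([[100.0, 50.0], [120.0, 60.0], [110.0, 40.0]])
cps = np.array([[103.0, 48.0], [123.0, 58.0], [113.0, 38.0]])

d = cps - pts                      # per-pair offsets (x_i^c - x_i, y_i^c - y_i)
dx_mean, dy_mean = d.mean(axis=0)  # region offset = mean over all N pairs
```

Here every pair agrees on the offset (+3, -2), so the region offset is exactly that; in practice the mean smooths small disagreements between pairs.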
Further, the coordinate-offset calculation for all pixels in step (6) is implemented by the following sub-steps:
(5.1) Calculating the bisector angles of the angles formed between the center point and the four vertices of the image to be spliced:

φ_1 = (θ_1 + θ_2)/2,  φ_2 = (θ_2 + θ_3)/2,  φ_3 = (θ_3 + θ_4)/2,  φ_4 = (θ_4 + θ_1)/2

where θ_1, ..., θ_4 are the vertex angles of sub-step (4.2) taken in circular order, φ_1 is the bisector angle of the included angle at the center point between the first and second vertices, φ_2 that between the second and third vertices, φ_3 that between the third and fourth vertices, and φ_4 that between the fourth and first vertices;
(5.2) Calculating the bisector of the overlapping area between the image to be spliced and each adjacent image:

d_l = (e_l + m_l)/2,  d_r = (e_r + m_r)/2,  d_u = (e_u + m_u)/2,  d_d = (e_d + m_d)/2

where d_l, d_r, d_u and d_d are the distances from the image midpoint to the bisectors of the left, right, upper and lower overlap regions; e_u, e_d, e_l and e_r are the distances from the center point of the image to be spliced to its upper, lower, left and right edges; and m_u, m_d, m_l and m_r are the minimum distances from the center point to the upper, lower, left and right adjacent images;
(5.3) Calculating the angle and distance of every pixel in the image from the center point:

α = arctan((x − x_c)/(y − y_c)) + α_0,  ρ = sqrt((x − x_c)² + (y − y_c)²)

where α is the angle of the pixel from the center point, ρ is its distance from the center point, (x_c, y_c) are the horizontal and vertical coordinates of the image center point, (x, y) are the coordinates of the pixel, and α_0 is a quadrant correction of 0, 180 or 360 degrees: 180 degrees when the pixel is to the upper left of the center point, 360 degrees to the lower left, 0 degrees to the lower right, and 180 degrees to the upper right;
(5.4) Judging on which side of the overlap-region bisector each pixel lies. If the pixel lies on the side away from the center of the image to be spliced, its coordinate offset is the mean coordinate offset of that direction; if it lies toward the center, its offset is weighted by its angle to the two adjacent vertices and its projection distance from the midpoint:

w = ρ·cos β / d,  Δx_p = w·Δx,  Δy_p = w·Δy

where w is the deformation weight coefficient, Δx_p and Δy_p are the pixel's offsets in the x and y coordinate directions, ρ·cos β is the projection distance of the pixel onto the midline of its region, d is the perpendicular distance from the region midpoint to the edge, β is the included angle between the perpendicular of the adjacent direction and the line connecting the pixel to the center point, and Δx and Δy are the mean coordinate offsets of the adjacent direction in the x and y directions.
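Sub-steps (5.3) and (5.4) can be sketched together as follows; the atan2 quadrant handling and the linear weight ramp are hedged reconstructions from the variable descriptions, since the original formula images are not reproduced in the text.

```python
import numpy as np

def polar_from_center(x, y, xc, yc):
    # Pole at the image midpoint, polar axis pointing vertically downward
    # (increasing y), as in sub-step (4.2); wrapping to 0-360 degrees.
    rho = np.hypot(x - xc, y - yc)
    alpha = np.degrees(np.arctan2(x - xc, y - yc)) % 360.0
    return alpha, rho

def pixel_offset(rho, beta_deg, d_edge, dx_mean, dy_mean):
    # Pixels on the center side of the overlap bisector get a reduced,
    # weighted offset: w ramps linearly with the projection of the pixel
    # toward the overlap direction, reaching 1 at distance d_edge.
    proj = rho * np.cos(np.radians(beta_deg))   # projection onto the normal
    w = np.clip(proj / d_edge, 0.0, 1.0)
    return w * dx_mean, w * dy_mean

alpha, rho = polar_from_center(15.0, 0.0, 10.0, 0.0)  # pixel right of center
dx, dy = pixel_offset(50.0, 60.0, 100.0, 4.0, -2.0)
```

Fixing the center (w = 0 there) while ramping toward the full mean offset at the overlap keeps the deformation continuous across the image.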
Further, the suture line calculation method in the step (7) is realized by the following sub-steps:
(6.1) Determining the start and end points of the suture line on the edges of the overlap region: for heading-direction adjacent images they lie on the upper and lower edges of the overlap region; for wingspan-direction adjacent images, on the left and right edges;
(6.2) Calculating the gray difference D of the overlap region of adjacent images:

D(x, y) = |I_1(x, y) − I_2(x, y)|

where I_1 is the gray value of the overlap region of the image to be spliced and I_2 that of the adjacent image;
(6.3) Finding the maximum and minimum gray differences over all rows and columns and their average:

D_m = (D_min + D_max)/2

where D_min is the minimum gray difference in the overlap region and D_max the maximum;
(6.4) Performing a gray-weighted distance transform T(x, y) of the overlap-region gray image, where (x, y) are the coordinates of any pixel in the overlap region (the formula image is not reproduced in the text);
(6.5) Starting from the start point of the suture line, taking the minimum-cost point of the next row as the next seam point, the cost function C being computed from the median D_m of the maximum and minimum gray differences in the overlap region (the formula image is not reproduced in the text);
and (6.6) after the suture line is determined, taking images on two sides of the suture line to construct a mask.
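The seam search of sub-step (6.5) is greedy: from the current seam point, the cheapest of the (up to) three candidates in the next row is chosen. A sketch over an arbitrary per-pixel cost map; since the exact cost function of the patent is an unreproduced formula image, any cost map combining gray difference and distance terms can be plugged in:

```python
import numpy as np

def greedy_seam(cost):
    """Trace a seam through a cost map of the overlap region, one row at
    a time, always stepping to the cheapest of the three neighbouring
    columns in the next row."""
    h, w = cost.shape
    seam = [int(np.argmin(cost[0]))]          # start at cheapest first-row pixel
    for r in range(1, h):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 2, w)  # candidate columns c-1, c, c+1
        seam.append(lo + int(np.argmin(cost[r, lo:hi])))
    return seam

cost = np.array([[5., 1., 5.],
                 [5., 5., 1.],
                 [1., 5., 5.]])
seam = greedy_seam(cost)
```

The seam snakes through the low-cost cells; masks for the two images are then built on either side of it, as in sub-step (6.6).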
Further, the image stitching in the step (7) is realized by the following sub-steps:
(7.1) establishing all image masks;
(7.2) If duplicate masks appear in a triple- or quadruple-overlap area, only the mask of the image earliest in the shooting sequence is kept; where no mask covers a position, the pixel of the first image appearing at that position is taken.
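A minimal sketch of the priority rule in sub-step (7.2), assuming the masks are boolean arrays ordered by shooting sequence:

```python
import numpy as np

def compose_by_order(masks):
    """Resolve triple/quadruple overlaps: where several image masks cover
    the same pixel, keep only the image earliest in the shooting order.
    Returns, per pixel, the index of the winning image (-1 where no mask
    covers the pixel)."""
    h, w = masks[0].shape
    owner = np.full((h, w), -1, dtype=int)
    for idx in reversed(range(len(masks))):  # earlier images overwrite later
        owner[masks[idx]] = idx
    return owner

m0 = np.array([[True, True, False]])
m1 = np.array([[False, True, True]])
owner = compose_by_order([m0, m1])
```

Iterating in reverse shooting order lets earlier images overwrite later ones, which implements the "earliest image wins" rule without any per-pixel branching.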
The invention has the following beneficial effects:
the invention provides a simple, quick and effective seamless splicing method for infrared multi-sequence images of unmanned plane area array swinging, multiple unmanned plane infrared images can be spliced into a high-quality wide-view-field panoramic image, the high-quality panoramic splicing result can directly promote the application of the infrared images in the fields of natural disaster monitoring and early warning, environmental pollution discovery and management, battlefield environment patrol and reconnaissance and the like, good data support is provided for further infrared profit research, and the development and application of infrared data are promoted.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings used in the detailed description or the prior art description will be briefly described below.
Fig. 1 is a flowchart of an infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array sweep according to embodiment 1 of the present invention;
fig. 2 is a flowchart of the feature point and central control point extraction step provided in step two of embodiment 1 of the present invention;
fig. 3 is a specific mathematical model of image deformation provided in step four, step five and step six of embodiment 1 of the present invention;
fig. 4 is a flowchart of image deformation provided in step four, step five, and step six of embodiment 1 of the present invention;
fig. 5 is a flowchart of image stitching provided in step seven and step eight of embodiment 1 of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "far", "near", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings only for the convenience of description of the present invention and simplification of description, but do not indicate or imply that the method referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
The embodiment of the invention provides a rapid seamless splicing method for images of an infrared unmanned aerial vehicle, which comprises the following steps as shown in figure 1:
s1, preprocessing an image to be spliced;
In the embodiment of the invention, the infrared images to be spliced all come from an area-array swing-scan infrared imager on a UAV. The images are preprocessed by gray stretching to enhance contrast; the gray stretching determines a gray-value threshold range from the image histogram and normalizes gray values to the range 0-255;
s2, extracting image feature points;
in the embodiment of the invention, the SIFT algorithm is utilized to extract the feature points in the image;
s3, establishing a vector coordinate relation of the images to be spliced;
in the embodiment of the invention, a vector coordinate relationship is established according to the minimum coordinate of the image to be spliced, the image size and the sweep sequence, and the image rough splicing result and the adjacent image and the overlapping area of the image to be spliced can be determined according to the vector coordinate relationship;
s4, matching the characteristic points of the images with the overlapped areas and calculating a central control point by combining coordinates;
In the embodiment of the invention, according to the coordinates of each geometrically corrected image, the serial numbers of overlapping images among adjacent images in the heading and wingspan directions are calculated; feature points of adjacent images are matched pairwise, mismatches are eliminated with the RANSAC algorithm to obtain accurately matched pairs, and the midpoint of each matched pair in the local coordinate system is taken as a control point. The purpose of the control points is to let the same-name points of the two images be transformed to the control-point positions, realizing coordinate alignment. The heading direction is the UAV's flight direction; the wingspan direction is perpendicular to it;
s5, carrying out image partitioning according to the course and the wingspan direction and calculating the coordinate offset of each partition;
In the embodiment of the invention, the image to be registered is divided into four regions along the lines connecting its center point to its vertices, corresponding to the left and right adjacent images in the heading direction and the upper and lower adjacent images in the wingspan direction. The coordinate offsets between feature points and control points are calculated in each of the four directions; since these offsets are essentially consistent, the mean offset of all points in a direction is taken as the image's offset in that direction. The vertices here are the four corner points of the image.
And S6, determining the offset of all the pixels according to the distance from the center point and the angles from the four vertical overlapping area directions.
In the embodiment of the invention, the position of the center point is fixed, and the overlap regions in the upper, lower, left and right directions are each divided into two parts along the overlap direction. All pixels in the part far from the center point are transformed by the mean offset, while the offsets of the remaining pixels are determined by their distance from the center point and their angles to the four directions perpendicular to the overlap regions.
S7, calculating cost and value matrixes of adjacent images and weighting to solve an optimal suture line;
in the embodiment of the invention, the cost and value matrixes of adjacent images are calculated, the optimal suture line is weighted and solved, and the areas on the two sides of the suture line generate corresponding masks;
s8, stitching the images;
in the embodiment of the invention, image splicing is carried out according to the mask generated by the suture lines of the deformed images in four directions.
Specifically, in the embodiment of the invention, the images to be spliced are first preprocessed by gray stretching; image feature points are then extracted, and the vector coordinate relation of the images to be spliced is established. Feature points of images with overlapping regions are matched, and the central control points are calculated from their coordinates. The images are partitioned along the heading and wingspan directions, the coordinate offset of each region is calculated, and the offset of every pixel is determined from its distance to the center point and its angles to the four directions perpendicular to the overlap regions. Finally, the cost and value matrices of adjacent images are calculated, the optimal suture line is solved by weighting, and the images are spliced.
Optionally, as shown in fig. 2, the extracting step of the feature points and the central control point includes:
s21, carrying out gray level stretching on an image to be spliced, and extracting the image feature points after the gray level stretching based on an SIFT algorithm;
s22, establishing a vector position relation according to the image coordinates and the flight strip relation, solving and determining adjacent images of the images to be spliced, and performing primary matching on feature points of the adjacent images;
in the embodiment of the invention, the adjacent image groups of each image in the course and wingspan directions are determined: the unmanned aerial vehicle images the whole area along an S-shaped route. If the image is the first or last image of the first line of the course, or the first or last image of the last line of the course, the image to be spliced has only 2 adjacent images; if it is the first or last image of a middle line of the course, it has 3 adjacent images; if it is an intermediate image of a middle line of the course, it has 4 adjacent images. The feature point coordinates, feature point descriptors, image coordinates and size information of the image to be spliced and its adjacent images are then acquired;
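The adjacent-image grouping above (2 neighbours at route corners, 3 at line ends, 4 for interior frames) can be sketched as a simple grid lookup. The grid indexing and function name are illustrative assumptions, not part of the patent:

```python
def adjacent_images(row, col, n_rows, n_cols):
    """Return (row, col) indices of the course/span neighbours of an image
    captured on an S-shaped (boustrophedon) flight path.

    Corner images get 2 neighbours, edge images 3, interior images 4,
    matching the grouping described in the text.
    """
    neighbours = []
    if col > 0:
        neighbours.append((row, col - 1))   # previous image along the course
    if col < n_cols - 1:
        neighbours.append((row, col + 1))   # next image along the course
    if row > 0:
        neighbours.append((row - 1, col))   # adjacent strip (span direction)
    if row < n_rows - 1:
        neighbours.append((row + 1, col))
    return neighbours
```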
s23, eliminating the wrong matching points based on RANSAC, and reserving the middle points of the correct matching feature point pairs as control points;
in the embodiment of the invention, the feature points of the image to be spliced and each adjacent image are matched respectively, and the preliminarily matched feature points are then filtered by the RANSAC algorithm. If the number Q of matched feature point pairs retained after removal satisfies Q ≤ 10, it is considered that no reliable matched feature point pairs exist. Otherwise, combining the image coordinate information and the coordinates of the feature points relative to the image, the retained correctly matched feature points are assigned coordinates in a local coordinate system, and the midpoints of the matched feature point pairs are taken as central control points;
specifically, in the embodiment of the invention, the images to be spliced are gray-stretched and feature points are extracted from the stretched images based on the SIFT algorithm (stretching increases the number of extracted feature points). A vector position relation is established according to the image coordinates and the flight-strip relation, the adjacent images of each image to be spliced are determined, the feature points of adjacent images are preliminarily matched, wrong matches are eliminated based on RANSAC, and the midpoints of the correctly matched feature point pairs are kept as control points.
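As a minimal sketch of the preprocessing step, the histogram-based gray stretch of claim 2 might look as follows. The percentile thresholds are an assumed choice — the patent only says the threshold range is taken from the histogram:

```python
import numpy as np

def gray_stretch(img, low_pct=1, high_pct=99):
    """Histogram-based gray stretch for an infrared frame.

    The threshold range is taken from the image histogram -- here the
    low_pct..high_pct percentile band, an assumed choice -- and the band is
    normalized to the 0-255 range as in claim 2.
    """
    lo, hi = np.percentile(img, (low_pct, high_pct))
    stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return np.clip(stretched * 255.0, 0, 255).astype(np.uint8)
```

SIFT extraction and RANSAC filtering would then follow, e.g. with OpenCV's `cv2.SIFT_create()` and `cv2.findHomography(..., cv2.RANSAC)`.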
Optionally, as shown in fig. 3 and fig. 4, the image deformation specifically includes the following steps:
s41, connecting an image center point and four vertexes, and dividing the image into four areas which respectively correspond to four adjacent images;
in the embodiment of the invention, a coordinate system is established with the image center as the origin, the downward direction as the x axis and the rightward direction as the y axis; the origin is connected to the four vertexes of the effective image area, dividing the image into 4 areas, and the angles of the lines between the center and the vertexes are calculated (the calculation formula appears as an image in the original), where θ_l, θ_r, θ_u and θ_d are respectively the angles formed by the left, right, upper and lower vertexes with the image midpoint, (x_c, y_c) are the horizontal and vertical coordinates of the image center point, (x_lu, y_lu) are those of the top-left vertex, (x_ld, y_ld) those of the bottom-left vertex, (x_rd, y_rd) those of the bottom-right vertex, and (x_ru, y_ru) those of the top-right vertex;
s42, calculating coordinate difference values of all feature points and control points in the four areas, and solving an average value as an average offset of all pixels in the direction, wherein a calculation formula is as follows:
(the calculation formula appears as an image in the original), where p_i are the extracted feature points of the images to be spliced, c_i are the central control points, i is the feature point index with n matched feature point pairs in total, Δx_i and Δy_i are respectively the x- and y-direction coordinate offsets of each matched feature point pair, and Δx_avg and Δy_avg are respectively the average x- and y-direction coordinate offsets over all feature point pairs.
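The per-region averaging of S42 can be sketched as follows. The assignment of a feature point to one of the four sectors is simplified here to a dominant-axis test relative to the centre (an assumption — the patent uses the centre-to-vertex lines), and all names are illustrative:

```python
import numpy as np

def region_average_offsets(feature_pts, control_pts, center):
    """Average (dx, dy) offset of matched feature points per region.

    Regions approximate the four sectors (up/down/left/right of the image
    centre); the sector test is an assumed simplification using the
    dominant axis of each point relative to the centre. Axes follow the
    patent's convention: x downward, y rightward.
    """
    d = feature_pts - control_pts          # per-pair offsets (dx_i, dy_i)
    rel = feature_pts - center
    vertical = np.abs(rel[:, 0]) >= np.abs(rel[:, 1])
    # sector labels: 0=up, 1=down, 2=left, 3=right
    sector = np.where(vertical,
                      np.where(rel[:, 0] < 0, 0, 1),
                      np.where(rel[:, 1] < 0, 2, 3))
    means = {}
    for s, name in enumerate(("up", "down", "left", "right")):
        sel = sector == s
        means[name] = d[sel].mean(axis=0) if sel.any() else np.zeros(2)
    return means
```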
The coordinate offsets of all pixels in the image in step six are calculated through the following substeps:
calculating the bisection angles of the angles formed by the central point and the four vertexes of the image to be spliced (the mathematical expression appears as an image in the original), where θ_12 is the bisection angle of the angle between the first and second vertexes and the center point, θ_23 that between the second and third vertexes, θ_34 that between the third and fourth vertexes, and θ_41 that between the fourth and first vertexes.
S43, dividing the overlapping area into two areas according to the overlapping direction, wherein the mathematical expression of the two areas is as follows:
(the mathematical expression appears as an image in the original), where d_l, d_r, d_u and d_d are respectively the distances from the image midpoint to the bisectors of the left, right, upper and lower overlapped regions, D_u, D_d, D_l and D_r are respectively the distances from the central point of the image to be spliced to its upper, lower, left and right edges, and m_u, m_d, m_l and m_r are respectively the minimum distances from the central point of the image to be spliced to the upper, lower, left and right adjacent images;
calculating the angles and distances of all pixels in the image from the central point (the mathematical expression appears as an image in the original), where θ_p is the angle of the pixel from the center point, r_p is the distance of the pixel from the center point, (x_c, y_c) are the horizontal and vertical coordinates of the image center point, (x, y) are the horizontal and vertical coordinates of the pixel, and the additive angle constant φ takes the value 0, 180 or 360 degrees: 180 degrees when the pixel is to the upper left of the center point, 360 degrees to the lower left, 0 degrees to the lower right, and 180 degrees to the upper right;
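The angle/distance decomposition above can be sketched with `math.atan2`. The patent's own formula and its 0/180/360-degree constants are available only as images in the source, so this is an assumed equivalent that maps angles into [0, 360) under the patent's axis convention (pole at the midpoint, vertically downward polar axis, x downward, y rightward):

```python
import math

def pixel_polar(xc, yc, x, y):
    """Angle and distance of a pixel (x, y) from the image centre (xc, yc).

    Assumed equivalent of the patent's image-only formula: the angle is
    measured from the downward x axis toward the rightward y axis and
    mapped into [0, 360) degrees.
    """
    r = math.hypot(x - xc, y - yc)
    ang = math.degrees(math.atan2(y - yc, x - xc))  # in (-180, 180]
    if ang < 0:
        ang += 360.0                                # map to [0, 360)
    return ang, r
```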
s44, keeping away from one side of the central point of the image to be spliced, carrying out coordinate transformation on all pixels by using the average offset, enabling all pixels to be close to one side of the central point of the image to be spliced, and determining the offsets of all pixels according to the distance from the central point and the angles from the four vertical overlapping area directions, wherein the specific calculation formula is as follows:
(the specific calculation formula appears as an image in the original), where w is the deformation weight coefficient, Δx_p and Δy_p are respectively the offsets of the pixel in the x and y coordinate directions, d_proj is the projection distance of the pixel from the midpoint of its region, d_perp is the perpendicular distance from the midpoint to the edge of that region, α is the included angle between the perpendicular of the adjacent direction and the line connecting the pixel and the central point, and Δx_avg and Δy_avg are the average coordinate offsets of the adjacent direction in the x and y directions.
Specifically, in the embodiment of the invention, the image center point is connected to the four vertexes, dividing the image into four areas that correspond to the four adjacent images. The coordinate differences of all feature points and control points in each of the four areas are calculated, and their average is taken as the average offset of all pixels in that direction. The overlapping area is divided into two parts according to the overlap direction: on the side far from the center of the image to be spliced, all pixels are coordinate-transformed by the average offset; on the side near the center, the offsets of all pixels are determined by their distance from the central point and their angles relative to the four directions perpendicular to the overlapping area.
Optionally, as shown in fig. 5, the image stitching includes the following steps:
s51, determining the starting point and the end point of the suture line;
in the embodiment of the invention, the starting point and the ending point of the suture line are determined at the edges of the overlapping area: for images adjacent in the course direction, the starting and ending points lie on the upper and lower edges of the overlapping area; for images adjacent in the span direction, they lie on the left and right edges;
s52, calculating the gray difference of the overlapping area of the adjacent images
, denoted E; the mathematical expression appears as an image in the original, where I_1 is the gray value of the overlapping area of the image to be spliced and I_2 is the gray value of the overlapping area of the adjacent image;
s53, wherein,
the gray values I_1 and I_2 of the overlapping areas are as defined above; the maximum and minimum gray differences over all rows and columns are then found, and their mean is taken as the average gray difference E_avg (the mathematical expression appears as an image in the original), where E_min is the minimum gray difference in the overlapping region and E_max is the maximum gray difference in the overlapping region;
s54, carrying out gray-scale weighted distance conversion on the gray-scale image in the overlapping area
, yielding a transform D; the mathematical expression appears as an image in the original, where (x, y) are the horizontal and vertical coordinates of any pixel in the overlapping area;
s55, starting from the starting point of the suture line, calculating the point with the minimum cost function of the next row as the position of the next point of the suture line, wherein the cost function is calculated by the method of calculating the median value of the maximum gray difference and the minimum gray difference in the overlapping region
, denoted C; the mathematical expression appears as an image in the original. In the embodiment of the invention, starting from the starting point of the suture line, the point with the minimum cost function in the next row is calculated as the position of the next point of the suture line, the cost function being the average of the gray difference values of the two adjacent points;
s56, taking images on two sides of the suture line to construct a mask;
in the embodiment of the invention, if repeated masks appear in a triple-overlapped or quadruple-overlapped area, only the mask of the image earliest in the shooting sequence is taken; if no mask covers a position, the value at that position is taken from the first-appearing image;
specifically, in the embodiment of the present invention, the starting and ending points of the suture line are determined, the gray difference of the overlapping area of adjacent images is calculated, the maximum and minimum gray differences over all rows and columns and their average are obtained, and a gray-weighted distance transformation is applied to the gray image of the overlapping area. Starting from the starting point of the suture line, the point with the minimum cost function in the next row is calculated as the position of the next point of the suture line, the cost function being the average of the gray difference values of the two adjacent points. The images on the two sides of the suture line are then used to construct masks; this method finds the suture line with the minimum gray difference on its two sides.
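The seam search described above is a shortest-path scan down the overlap. A minimal dynamic-programming sketch over a per-pixel cost matrix might look like the following; the patent's exact cost and gray-weighted distance terms are available only as images in the source, so this uses a generic three-neighbour seam formulation as an assumption:

```python
import numpy as np

def min_cost_seam(cost):
    """Top-to-bottom seam with minimal accumulated cost.

    'cost' is the per-pixel cost of the overlap region (e.g. the gray
    difference of the two images). From each seam point, the next row's
    candidate is chosen among the three pixels below it -- a common seam
    formulation assumed here in place of the patent's image-only formula.
    Returns the column index of the seam in each row.
    """
    h, w = cost.shape
    acc = cost.astype(np.float64).copy()
    for i in range(1, h):
        left = np.r_[np.inf, acc[i - 1, :-1]]    # upper-left predecessor
        right = np.r_[acc[i - 1, 1:], np.inf]    # upper-right predecessor
        acc[i] += np.minimum(np.minimum(left, acc[i - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for i in range(h - 2, -1, -1):               # backtrack the cheapest path
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(acc[i, lo:hi]))
    return seam
```

Masks for the two sides of the seam can then be built by filling each row up to (and beyond) the returned column index.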
The above-described embodiments are intended to illustrate rather than to limit the invention, and any modifications and variations of the present invention are within the spirit of the invention and the scope of the claims.

Claims (7)

1. An unmanned aerial vehicle area array swinging infrared multi-sequence image seamless splicing method is characterized by comprising the following steps:
(1) Preprocessing an image to be spliced, wherein the preprocessing is to perform gray stretching on an infrared image and enhance the image contrast, and the image to be spliced is an infrared image swept by an unmanned aerial vehicle area array;
(2) Extracting the feature points in the image in the step (1) by using an SIFT algorithm to obtain scale-invariant feature points in the image;
(3) Establishing a vector coordinate relation according to the minimum coordinate of the image, the image size and the sweep sequence in the step (2);
(4) According to the vector coordinates of the images to be spliced, calculating the overlapping-image sequence numbers of adjacent images of the unmanned aerial vehicle in the course and span directions, matching the feature points of adjacent images pairwise, eliminating mismatched feature points with the RANSAC algorithm to obtain accurately matched feature point pairs, and using the midpoints of the corresponding feature point pairs in a local coordinate system as control points;
(5) Dividing the image to be spliced into four areas according to the connecting line of the central point and the vertex of the image to be spliced, wherein the four areas respectively correspond to left and right adjacent images of a course and images in upper and lower adjacent directions in a wingspan direction, respectively calculating coordinate offset between all characteristic points and control points in the four directions, and calculating the average value of coordinate offset of all points in each direction as the offset of the image in the direction;
(6) Fixing the position of the central point of the image to be spliced, determining the offset of the rest pixels according to the distance from the central point and the angles from the four vertical overlapping area directions, and carrying out coordinate transformation on all the pixels according to the respective corresponding offsets to obtain a deformed image; calculating the cost and value matrix of adjacent images, and weighting to solve the optimal suture line;
(7) And (4) carrying out image splicing on the deformed images obtained in the step (6) on suture lines in four adjacent directions.
2. The method for seamlessly splicing the infrared multi-sequence images by unmanned aerial vehicle area array sweeping as claimed in claim 1, wherein the gray stretching in step (1) is to determine a threshold range of gray values according to a histogram of the images to be spliced and normalize the gray values to be in a range of 0-255.
3. The method for seamlessly splicing the infrared multi-sequence images during the area array sweeping of the unmanned aerial vehicle according to claim 1, wherein the calculating method of the control points in the step (5) is implemented through the following sub-steps:
(3.1) determining adjacent image groups of the images to be spliced in the course and span directions: the unmanned aerial vehicle acquires images of the whole area according to the S-shaped route, and if the images are the first or last image acquired in the first line of the course or the first and last images acquired in the last line of the course, the images to be spliced only have 2 adjacent images; if the first image and the last image are acquired from the course middle row, the image to be spliced comprises 3 adjacent images; if the image group to be spliced is the intermediate image obtained by the course intermediate line, the image group to be spliced comprises 4 images;
(3.2) acquiring the feature point coordinates, feature point descriptions, image coordinates and size information of the image to be spliced and the adjacent images;
(3.3) respectively matching the feature points of the image to be spliced and each adjacent image, and removing wrongly matched points from the preliminarily matched feature points according to the RANSAC algorithm; if the number Q of matching feature point pairs retained after removal satisfies Q ≤ 10, no reliable matched feature point pair exists;
and (3.4) combining the image coordinate information and the coordinates of the feature points relative to the image, endowing the coordinates under a local coordinate system to the reserved correct matching feature points, and taking the middle points of the matching feature points as central control points.
4. The unmanned aerial vehicle area array sweeping infrared multi-sequence image seamless splicing method according to claim 1, wherein the step (5) is realized by the following sub-steps:
(4.1) determining the midpoint of the images to be spliced and the maximum and minimum coordinates of all the images in the image group under the local coordinate system;
(4.2) taking the midpoint of the image to be spliced as a pole and a vertically downward ray as a polar axis, calculating included angles between the midpoint of the image to be spliced and four vertexes, dividing the image to be spliced into 4 areas by connecting lines of the midpoint and the four vertexes, and adopting the mathematical expression as follows:
(the calculation formula appears as an image in the original), where θ_l, θ_r, θ_u and θ_d are respectively the angles formed by the left, right, upper and lower vertexes with the image midpoint, (x_c, y_c) are the horizontal and vertical coordinates of the image center point, (x_lu, y_lu) are those of the top-left vertex, (x_ld, y_ld) those of the bottom-left vertex, (x_rd, y_rd) those of the bottom-right vertex, and (x_ru, y_ru) those of the top-right vertex;
(4.3) calculating coordinate offset of all the matched feature points and the control points in the four regions, taking the average value as the coordinate offset of the region, wherein the mathematical expression is as follows:
(the calculation formula appears as an image in the original), where p_i are the extracted feature points of the images to be spliced, c_i are the central control points, i is the feature point index with n matched feature point pairs in total, Δx_i and Δy_i are respectively the x- and y-direction coordinate offsets of each matched feature point pair, and Δx_avg and Δy_avg are respectively the average x- and y-direction coordinate offsets over all feature point pairs.
5. The seamless splicing method for the infrared multi-sequence images swept by the unmanned aerial vehicle area array according to claim 1, wherein the calculating method for the coordinate offset of all pixels in the images in the step (6) is realized by the following sub-steps:
(5.1) calculating a bisection angle of an angle formed by the central point and four vertexes of the image to be spliced, wherein the mathematical expression of the bisection angle is as follows;
(the mathematical expression appears as an image in the original), where θ_12 is the bisection angle of the angle between the first and second vertexes and the center point, θ_23 that between the second and third vertexes, θ_34 that between the third and fourth vertexes, and θ_41 that between the fourth and first vertexes;
(5.2) calculating a bisector of an overlapping area of the image to be spliced and the adjacent image, wherein the mathematical expression of the bisector is as follows:
(the mathematical expression appears as an image in the original), where d_l, d_r, d_u and d_d are respectively the distances from the image midpoint to the bisectors of the left, right, upper and lower overlapped regions, D_u, D_d, D_l and D_r are respectively the distances from the central point of the image to be spliced to its upper, lower, left and right edges, and m_u, m_d, m_l and m_r are respectively the minimum distances from the central point of the image to be spliced to the upper, lower, left and right adjacent images;
(5.3) calculating the angles and the distances of all the pixels in the image from the center point, wherein the mathematical expression is as follows:
(the mathematical expression appears as an image in the original), where θ_p is the angle of the pixel from the center point, r_p is the distance of the pixel from the center point, (x_c, y_c) are the horizontal and vertical coordinates of the image center point, (x, y) are the horizontal and vertical coordinates of the pixel, and the additive angle constant φ takes the value 0, 180 or 360 degrees: 180 degrees when the pixel is to the upper left of the center point, 360 degrees to the lower left, 0 degrees to the lower right, and 180 degrees to the upper right;
(5.4) judging the direction of the bisector of the pixel in the overlapping area, if the pixel is far away from the center of the image to be spliced, the coordinate offset of the pixel is the average coordinate offset of the direction; if the direction is close to the center of the image to be spliced, the coordinate offset of the pixel is weighted and calculated according to the angle between the coordinate offset and two adjacent vertexes and the projection distance from the midpoint, and the mathematical expression is as follows:
(the mathematical expression appears as an image in the original), where w is the deformation weight coefficient, Δx_p and Δy_p are respectively the offsets of the pixel in the x and y coordinate directions, d_proj is the projection distance of the pixel from the midpoint of its region, d_perp is the perpendicular distance from the midpoint to the edge of that region, α is the included angle between the perpendicular of the adjacent direction and the line connecting the pixel and the center point, and Δx_avg and Δy_avg are the average coordinate offsets of the adjacent direction in the x and y directions.
6. The unmanned aerial vehicle area array sweeping infrared multi-sequence image seamless splicing method according to claim 1, wherein the suture line calculation method in the step (7) is realized through the following sub-steps:
(6.1) determining the starting point and the ending point of the suture line at the edge of the overlapping area, wherein if the starting point and the ending point of the suture line are adjacent to the course, the starting point and the ending point of the suture line are at the upper edge and the lower edge of the overlapping area; if the images are adjacent images in the span direction, starting and ending points of the sewing line are positioned at the left edge and the right edge of the overlapping area;
(6.2) calculating the gray difference of the overlapping areas of the adjacent images
, denoted E; the mathematical expression appears as an image in the original, where I_1 is the gray value of the overlapping area of the image to be spliced and I_2 is the gray value of the overlapping area of the adjacent image; (6.3) finding the maximum and minimum gray differences over all rows and taking their mean as the average gray difference E_avg (the mathematical expression appears as an image in the original), where E_min is the minimum gray difference in the overlapping region and E_max is the maximum gray difference in the overlapping region;
(6.4) Gray-scale weighted distance conversion of the overlapping region Gray-scale image
, yielding a transform D; the mathematical expression appears as an image in the original, where (x, y) are the horizontal and vertical coordinates of any pixel in the overlapping area;
(6.5) starting from the starting point of the suture line, calculating the point with the minimum cost function of the next row as the position of the next point of the suture line, wherein the calculation method of the cost function is the median value of the maximum gray difference and the minimum gray difference in the overlapping area
, denoted C; the mathematical expression appears as an image in the original;
and (6.6) after the suture line is determined, taking images on two sides of the suture line to construct a mask.
7. The unmanned aerial vehicle area array sweeping infrared multi-sequence image seamless splicing method according to claim 1, wherein the image splicing in the step (7) is realized by the following sub-steps:
(7.1) establishing all image masks;
and (7.2) if repeated masks appear in a triple-overlapped or quadruple-overlapped area, taking only the mask of the image earliest in the shooting sequence; if no mask exists at a position, taking the value from the first-appearing image at that position.
CN202211306562.XA 2022-10-25 2022-10-25 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging Active CN115393196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211306562.XA CN115393196B (en) 2022-10-25 2022-10-25 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging

Publications (2)

Publication Number Publication Date
CN115393196A true CN115393196A (en) 2022-11-25
CN115393196B CN115393196B (en) 2023-03-24

Family

ID=84129183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211306562.XA Active CN115393196B (en) 2022-10-25 2022-10-25 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging

Country Status (1)

Country Link
CN (1) CN115393196B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104156968A (en) * 2014-08-19 2014-11-19 山东临沂烟草有限公司 Large-area complex-terrain-region unmanned plane sequence image rapid seamless splicing method
CN107808362A (en) * 2017-11-15 2018-03-16 北京工业大学 A kind of image split-joint method combined based on unmanned plane POS information with image SURF features
CN107945113A (en) * 2017-11-17 2018-04-20 北京天睿空间科技股份有限公司 The antidote of topography's splicing dislocation
CN109118429A (en) * 2018-08-02 2019-01-01 武汉大学 A kind of medium-wave infrared-visible light multispectral image rapid generation
CN109961399A (en) * 2019-03-15 2019-07-02 西安电子科技大学 Optimal stitching line method for searching based on Image distance transform
AU2020101709A4 (en) * 2020-05-18 2020-09-17 Zhejiang University Crop yield prediction method and system based on low-altitude remote sensing information from unmanned aerial vehicle
CN112862683A (en) * 2021-02-07 2021-05-28 同济大学 Adjacent image splicing method based on elastic registration and grid optimization
CN113506216A (en) * 2021-06-24 2021-10-15 煤炭科学研究总院 Rapid suture line optimization method for panoramic image splicing
WO2021213508A1 (en) * 2020-04-24 2021-10-28 安翰科技(武汉)股份有限公司 Capsule endoscopic image stitching method, electronic device, and readable storage medium
WO2022027313A1 (en) * 2020-08-05 2022-02-10 深圳市大疆创新科技有限公司 Panoramic image generation method, photography apparatus, flight system, and storage medium
CN114936971A (en) * 2022-06-08 2022-08-23 浙江理工大学 Unmanned aerial vehicle remote sensing multispectral image splicing method and system for water area
CN115082314A (en) * 2022-06-28 2022-09-20 中国科学院光电技术研究所 Method for splicing optical surface defect images in step mode through self-adaptive feature extraction

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
V T Manu; B M Mehtre: "Visual artifacts based image splicing detection in uncompressed images", 2015 IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS) *
Yang Guopeng; Zhou Xin; Wei Hongbo; Xing Ping: "Large-area seamless mosaicking of sequence images from an area-array whisk-broom aerial camera", Science of Surveying and Mapping *
Yuan Yan et al.: "Whisk-broom image stitching technology based on projection transformation combined with SIFT", Modern Electronics Technique *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132913A (en) * 2023-10-26 2023-11-28 山东科技大学 Ground surface horizontal displacement calculation method based on unmanned aerial vehicle remote sensing and feature recognition matching
CN117132913B (en) * 2023-10-26 2024-01-26 山东科技大学 Ground surface horizontal displacement calculation method based on unmanned aerial vehicle remote sensing and feature recognition matching

Similar Documents

Publication Publication Date Title
CN110966991B (en) Single unmanned aerial vehicle image positioning method without control point
Yahyanejad et al. A fast and mobile system for registration of low-altitude visual and thermal aerial images using multiple small-scale UAVs
CN109903227B (en) Panoramic image splicing method based on camera geometric position relation
CN110595476B (en) Unmanned aerial vehicle landing navigation method and device based on GPS and image visual fusion
CN104156968B (en) Large-area complex-terrain-region unmanned plane sequence image rapid seamless splicing method
CN111462200A (en) Cross-video pedestrian positioning and tracking method, system and equipment
CN111126304A (en) Augmented reality navigation method based on indoor natural scene image deep learning
CN107808362A (en) A kind of image split-joint method combined based on unmanned plane POS information with image SURF features
CN106373088B (en) The quick joining method of low Duplication aerial image is tilted greatly
CN107918927A (en) A kind of matching strategy fusion and the fast image splicing method of low error
CN110569861B (en) Image matching positioning method based on point feature and contour feature fusion
US11307595B2 (en) Apparatus for acquisition of distance for all directions of moving body and method thereof
CN105245841A (en) CUDA (Compute Unified Device Architecture)-based panoramic video monitoring system
CN104732482A (en) Multi-resolution image stitching method based on control points
US8666170B2 (en) Computer system and method of matching for images and graphs
Urban et al. Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds
CN109118429B (en) Method for rapidly generating intermediate wave infrared-visible light multispectral image
CN101930603B (en) Method for fusing image data of medium-high speed sensor network
CN111192194B (en) Panoramic image stitching method for curtain wall building facade
CN106952219B (en) Image generation method for correcting fisheye camera based on external parameters
CN106886976B (en) Image generation method for correcting fisheye camera based on internal parameters
CN110084743B (en) Image splicing and positioning method based on multi-flight-zone initial flight path constraint
CN115393196B (en) Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging
Moussa et al. A fast approach for stitching of aerial images
CN116228539A (en) Unmanned aerial vehicle remote sensing image stitching method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant