CN113313659B - High-precision image stitching method under multi-machine cooperative constraint - Google Patents

High-precision image stitching method under multi-machine cooperative constraint

Info

Publication number
CN113313659B
CN113313659B
Authority
CN
China
Prior art keywords
image
point
pixel
splicing
points
Prior art date
Legal status
Active
Application number
CN202110450860.5A
Other languages
Chinese (zh)
Other versions
CN113313659A (en)
Inventor
席建祥
杨小冈
卢瑞涛
谢学立
陈彤
郭杨
王乐
刘祉祎
Current Assignee
Rocket Force University of Engineering of PLA
Original Assignee
Rocket Force University of Engineering of PLA
Priority date
Filing date
Publication date
Application filed by Rocket Force University of Engineering of PLA
Priority to CN202110450860.5A
Publication of CN113313659A
Application granted
Publication of CN113313659B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a high-precision image splicing method under multi-machine cooperative constraint, which constructs a multi-scale, multi-view image splicing model based on the spatial pose information of multiple unmanned aerial vehicles from the perspective of multi-machine cooperative control. Aiming at phenomena such as ghosting and pixel fracture produced by a single homography matrix, an adaptive homography matrix method is proposed to improve the splicing effect. For the problems of blurring in the overlapping region and inconsistent colors after splicing, a weighted smoothing algorithm is adopted: smooth transition of the overlapping images is achieved through weight distribution, which effectively resolves the color difference near the splicing seam. The proposed method has good splicing performance, markedly improves registration accuracy, and meets the requirements of high real-time performance and high accuracy for splicing multi-scale, multi-view aerial images during multi-machine collaborative inspection flight.

Description

High-precision image stitching method under multi-machine cooperative constraint
Technical Field
The invention relates to the technical field of image data processing, in particular to a high-precision image splicing method under multi-machine cooperative constraint.
Background
Image stitching is the process of combining multiple images with overlapping areas of the same scene into a single wide-view, high-resolution image. Commonly used image registration methods include gray-scale-based, transform-domain-based and feature-based methods. Feature-based matching is the most widely applied and includes HOG (Histogram of Oriented Gradients) features, the SIFT (Scale-Invariant Feature Transform) algorithm, the SURF (Speeded Up Robust Features) algorithm, the ORB (Oriented FAST and Rotated BRIEF) algorithm and other improved algorithms. HOG features are invariant to geometric and optical deformation and are robust, but they have poor real-time performance, are sensitive to noise and handle occlusion poorly; the SIFT algorithm is robust and reliable but computationally heavy and cannot meet the real-time requirement of stitching; the SURF algorithm matches quickly and runs efficiently, but its scale and rotation invariance are not ideal; the ORB algorithm is fast, its running time is far better than SIFT and SURF, it offers good real-time performance, and it is invariant to noise and perspective transformation. In addition, patent CN105869120A discloses a real-time optimization method for image stitching which reduces the search range of feature points by locking the overlapping area, thereby improving stitching speed. Patent CN108921780A discloses a fast stitching method for unmanned aerial vehicle aerial image sequences with good stability and low redundant information, but image detail is easily lost, which is unfavorable for producing a high-precision stitching result.
Because the camera viewing angle is limited and the environment is complex, it is difficult to acquire a comprehensive high-resolution wide-area result image in real time. Using the multi-mode sensors carried by multiple unmanned aerial vehicles to cooperatively patrol a task area can greatly improve patrol efficiency, but two problems remain: first, information sharing and communication among multiple vehicles, i.e., completing the transmission of a large amount of complex data in a very short time and preventing the information from becoming unsynchronized; second, post-processing of the images captured by multiple vehicles, since the image information acquired by each vehicle is separate, and the flight heights, relative positions and attitudes of the unmanned aerial vehicles differ, so the scales and viewing angles of the aerial images acquired by the sensors also differ. Existing unmanned aerial vehicle aerial image stitching methods achieve ideal results on images in the same plane, but in multi-machine collaborative shooting the multi-scale, multi-view images often intersect and overlap one another, and existing algorithms cannot satisfy the requirements of high real-time performance and high precision at the same time.
How to reasonably use a multi-machine cooperation strategy to realize the stitching of multi-scale, multi-view aerial images has therefore become the problem to be solved.
Disclosure of Invention
In view of the above problems, the invention provides a high-precision image splicing method under multi-machine cooperative constraint, which solves the low splicing efficiency and poor quality caused by the large volume and complex types of data in the cooperative inspection process of multiple unmanned aerial vehicles.
The core idea of the invention is as follows:
firstly, from the perspective of multi-machine cooperative control, acquiring aerial images in real time, and selecting a rapid splicing scheme for splicing among unmanned aerial vehicles and splicing among single machine sequences; then, an ORB feature extraction algorithm is utilized, the difference of the transformation relations of images with different scales and different visual angles under the condition of multi-camera shooting is fully considered, and a self-adaptive homography matrix is constructed to finish high-precision registration and splicing of the images; and finally, realizing smooth transition of the overlapped images by using a weighted smoothing algorithm, and solving the visual effect problems of breakage, obvious seam and the like of the spliced large-view-field images, thereby improving the image splicing quality.
The technical solution for realizing the purpose of the invention is as follows:
the high-precision image splicing method under the multi-machine cooperative constraint is characterized by comprising the following steps of:
step 1: acquiring ground images in real time through multiple sensors and combining position, attitude and angle information among multiple unmanned aerial vehicles to construct a multi-machine collaborative real-time image imaging model, so as to determine an actual imaging area;
step 2: extracting a feature point set of an original image in an actual imaging area by using an ORB method, performing coarse image matching, purifying a coarse matching result by using a random sampling consistency algorithm, and removing mismatching points;
step 3: constructing a self-adaptive homography matrix, mapping the purified images to the same coordinate system for preliminary stitching, then carrying out time sequence correction on each sequence of images, and then carrying out color correction by adopting a weighted smoothing algorithm;
step 4: and outputting a result image after the image stitching is completed.
Further, the specific operation steps of the step 1 include:
step 11: resolving the acquired ground image data to obtain unmanned aerial vehicle motion parameters, so as to realize the flight control of the machine body;
step 12: the attitude angles of the unmanned aerial vehicle are obtained using the inertial navigation unit, GPS and barometer carried on the unmanned aerial vehicle: pitch angle θ, roll angle φ, yaw angle ψ, as well as information such as coordinates and flight height;
step 13: respectively establishing a ground coordinate system, a machine body coordinate system, a camera coordinate system and an image coordinate system;
step 14: combining the attitude angles of the unmanned aerial vehicle, establish the coordinate transformation relation p_g = L_b·p_b between the body coordinate system and the ground coordinate system, and establish the multi-machine collaborative real-time imaging model.
Further, the specific operation steps of the step 2 include:
step 21: select any pixel point S in the original image, draw a circle of radius 3 pixels centered on S, detect the 16 pixel points falling on the circle, record as h the number of consecutive pixels among these 16 neighbourhood-circle pixels whose gray value satisfies formula (2), and judge whether h is larger than the preset threshold ε_d; if so, S is judged to be a feature point. The gray-value condition satisfied by the pixels is:
|I(x) - I(s)| > ε_d, ε_d > 0 (2),
wherein I(x) is the gray value of any point on the circle, I(s) is the gray value of the circle center, ε_d is the threshold of the gray difference, and N = |I(x) - I(s)| denotes the gray difference value;
step 22: for each pixel point S, directly detect the gray values at the four pixel positions in the vertical directions I(1), I(9) and the horizontal directions I(5), I(13) on the circle, and count the number M of these four positions I(t) whose gray difference from the selected point is larger than ε_d, namely:
M = size(I ∈ {|I(t) - I(s)| > ε_d}), ε_d > 0 (3);
if S satisfies formula (3) with M ≥ 3, S is judged to be a feature point; otherwise it is directly excluded;
step 23: calculate the Hamming distance d between any feature point S selected from one image and every feature point V in the other images, sort the obtained distances d, select the closest point as the matching point, and establish coarse matching point pairs to form a feature point set;
step 24: and screening the obtained feature point set by adopting a random sampling consistency method, and finally removing mismatching points to obtain a purified matching feature point set.
Further, the specific operation steps of step 24 include:
step 241: select Q points from the obtained feature point set, and according to the assumed registration line model compute the mapping point set P*_2 of all feature points P_1 of the first image onto the second image, satisfying the mapping relation P*_2^m = f(P_1^m), m = 1, 2, …, Q;
step 242: compute the Euclidean distance between each point of P*_2 and the corresponding point in the second-image feature point set P_2; with the set threshold δ, count the number N_T of feature points in P*_2 whose Euclidean distance is less than δ:
N_T = size(P*_2^m ∈ {||P_2^m - P*_2^m||_2 < δ}), δ > 0;
step 243: randomly select Q points again, repeat steps 241-242 K times, recording N_T each time, until the iterations are finished;
step 244: select the registration model with N = max(N_Tk), k = 1, 2, …, K, as the final fitting result, thereby rejecting the mismatched points.
Further, the specific operation steps of the step 3 include:
step 31: adopting a mobile DLT method, and realizing linear fitting of the adaptive homography matrix according to the coordinate values of the purified multiple groups of matched target point pairs;
step 32: perform timing correction on the multi-UAV aerial image sequence using the position of the image center point after homography transformation; the timing-movement parameter flag_o of the corresponding image takes the following values:
wherein P_ox is the x coordinate of the center point of the o-th image after homography transformation, and L is the width of the reference image;
step 33: traverse each frame of the image sequence and correct it according to the value of flag_o: when flag_o is 1, the next frame of the image to be spliced is used for splicing; when flag_o is -1, the previous frame of the image to be spliced is used; when flag_o is 0, the corrected image sequence is output;
step 34: the new pixel values of the overlapping region are recalculated using the linear gradient of the image for color correction.
Further, the specific operation steps of step 31 include:
step 311: let the feature matching point sets of the two images be X*' and X*, and construct the adaptive homography transformation relation X*' = H*X*;
step 312: rewrite the constructed adaptive homography transformation relation in implicit form and linearize it as X*' × H*X* = 0, which gives:
step 313: from the linearized matrix A*, A*h* = 0 is obtained; singular value decomposition of A* gives the right singular vector h* of the adaptive homography matrix H*:
wherein the weight represents the distance to the control point x*: the closer the pixel data, the more important it is; the variance σ² measures how far a measurement point deviates from the control point;
step 314: from the obtained right singular vector h*, the adaptive homography matrix H* can be reconstructed.
Further, the formula for calculating the new pixel values of the overlapping area in step 34 is:
I(x, y) = I_l(x_l, y_l)(1 - α) + I_r(x_r, y_r)α (6),
wherein α represents the distance from the pixel abscissa to the left-boundary abscissa of the overlap region, (x_l, y_l) are the left-image coordinates before stitching, (x_r, y_r) are the right-image coordinates before stitching, I_l(x_l, y_l) is the left-image pixel before stitching, I_r(x_r, y_r) is the right-image pixel before stitching, (x_lmax, y_lmax) is the maximum of the left-image coordinates before stitching, (x, y) are the coordinates after stitching, and I(x, y) is the pixel of the stitched image.
Compared with the prior art, the method has the following beneficial effects:
firstly, for the complex and changeable environments and the transformations of images with different scales and viewing angles under multi-machine cooperation, the invention provides an adaptive homography matrix method which essentially eliminates the ghosting and pixel-fracture phenomena of a traditional single homography matrix, performs better in terms of registration error and splicing precision, and meets the high-precision requirement of image splicing.
Secondly, the invention also adopts a weighted smoothing algorithm, which achieves smooth transition of the overlapping images through weight distribution and, combined with the idea of linear gradation, effectively solves the problems of blurred overlapping regions and inconsistent colors after splicing.
In conclusion, the feature registration accuracy of the stitched images obtained by the proposed method is markedly improved and meets the high-accuracy requirement of splicing.
Drawings
FIG. 1 is a flow chart of a stitching algorithm of the present invention;
FIGS. 2 (a) - (c) are schematic diagrams of coordinate system transformations;
FIG. 3 is a schematic view of feature point matching of four scene aerial source images;
FIG. 4 is two sets of different aerial source images;
FIG. 5 is a graph of two sets of aerial source image stitching experiments;
fig. 6 is an experimental diagram of multi-scale multi-view image stitching under four scenarios.
Detailed Description
In order to enable those skilled in the art to better understand the technical solution of the present invention, the technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
The invention provides a high-precision image stitching optimization method under multi-machine cooperative constraint, which comprises the following steps:
step 1: acquiring images in real time by utilizing a plurality of sensors, and establishing a multi-machine collaborative real-time image forming model by combining position, posture and angle information among the plurality of unmanned aerial vehicles;
the multi-sensor real-time ground image acquisition and calculation processing are carried out on the image data to obtain the unmanned aerial vehicle motion parameters, so that the flight control of the machine body is realized. Acquiring attitude angle (pitch angle theta, roll angle phi and yaw angle) of unmanned aerial vehicle by combining inertial navigation unit (IMU), GPS and barometer mounted on unmanned aerial vehicle) Information such as coordinates and flying height. And establishing a multi-machine collaborative imaging model based on pose information acquired by each module carried by the unmanned aerial vehicle so as to conveniently acquire characteristic information of the patrol area. The method comprises the following specific steps:
(1) Establishing a coordinate system
First, a ground coordinate system p_g = (x_g, y_g, z_g)^T is established. Assuming the ground is a plane, points on the ground and on obstacles all lie in this plane. The coordinates (x_g, y_g, z_g) are the components of a ground point in the x, y and z directions, and the axis directions satisfy the right-hand rule of a coordinate system;
secondly, a body coordinate system p_b = (x_b, y_b, z_b)^T is established, taking the center of mass of the airframe as the origin; the x_b axis points toward the nose, the y_b axis points to the right of the fuselage, and the z_b axis points downward;
again, a camera coordinate system p_ci = (x_ci, y_ci, z_ci)^T is established. Assuming that each unmanned aerial vehicle i corresponds to one camera, and the camera may be placed at any position, ground coordinates are chosen to describe the camera position. The relation between the camera coordinate system and the ground coordinate system is described by a rotation matrix R_ci and a translation vector t_ci as p_g = R_ci·p_ci + t_ci; the conversion relation between the ground coordinate system and the camera coordinate system is shown in fig. 2(a), and that between the body coordinate system and the ground coordinate system is shown in fig. 2(b);
finally, an image coordinate system p_Ii = (x_I, y_I)^T is established. The image coordinate system is a two-dimensional coordinate system defined for the real-time image, with the upper-left corner of the real-time image as the coordinate origin, the x_I axis horizontal to the right and the y_I axis perpendicular to the x_I axis. The coordinate transformation between the image coordinate system and the camera coordinate system is p_Ii = R_Ii·p_ci, where the transformation matrix R_Ii is related to the camera parameters; the conversion relation between the image coordinate system and the camera coordinate system is shown in fig. 2(c).
(2) Establishing an imaging model
When the onboard camera images, the imaging area can be regarded as the intersection of a rectangular pyramid and a plane, and the actual imaging area can be determined from the linear equations of the four edges of each unmanned aerial vehicle. Combining the attitude angles of the unmanned aerial vehicle, the coordinate transformation relation between the body coordinate system and the ground coordinate system is established as p_g = L_b·p_b. The transformation matrix L_b about the three axes x, y and z is related to the attitude angles θ, φ and ψ of the unmanned aerial vehicle and can be expressed as:
If the image coordinates p_Ii (i = 1, 2, …, n) captured by n unmanned aerial vehicles all correspond to p_g, i.e. p_g lies within the imaging range of the n unmanned aerial vehicles, image fusion can be performed on these images at a later stage.
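The explicit expression for L_b is given in the patent's formula image, which is not reproduced above. As a rough illustration only, the sketch below assumes the conventional Z-Y-X (yaw-pitch-roll) Euler sequence; the function name and the angle convention are assumptions, not the patent's own formula.

```python
import numpy as np

def body_to_ground(theta, phi, psi):
    """Rotation matrix L_b from the body frame to the ground frame.

    A sketch assuming the conventional Z-Y-X (yaw-pitch-roll) Euler sequence;
    theta: pitch, phi: roll, psi: yaw, all in radians.
    """
    c, s = np.cos, np.sin
    Rz = np.array([[c(psi), -s(psi), 0], [s(psi), c(psi), 0], [0, 0, 1]])
    Ry = np.array([[c(theta), 0, s(theta)], [0, 1, 0], [-s(theta), 0, c(theta)]])
    Rx = np.array([[1, 0, 0], [0, c(phi), -s(phi)], [0, s(phi), c(phi)]])
    return Rz @ Ry @ Rx

# p_g = L_b @ p_b for a point p_b expressed in the body frame
p_b = np.array([1.0, 0.0, 0.0])
p_g = body_to_ground(np.radians(5), np.radians(2), np.radians(30)) @ p_b
```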
Step 2: and extracting gray value information of the continuous pixel points by using an ORB method, further judging image feature points, and carrying out feature matching and purification on the extracted feature points. The method comprises the following specific steps:
(1) Feature extraction
Using the ORB method, image feature points are judged from the gray value information of consecutive pixel points: the gray value of any pixel point S in the image is compared with the points in its circular neighbourhood, and the source-image feature point set is extracted. The specific process is as follows:
first, global detection is performed on candidate points: selecting any pixel point S in an image, drawing a circle with a radius of 3 pixels by taking S as a circle center, considering 16 pixels on the circumference of the pixel point, detecting 16 pixel points falling on the circumference, recording the number of continuous pixel points with the pixel gray level satisfying the formula (2) in 16 pixel points on a neighborhood circle as h, and judging whether h is larger than a preset threshold epsilon or not d Typically the threshold is set to 12, i.e. if h>When=12, it is determined that S is a feature point, and the gray value corresponding to the pixel satisfies the condition:
wherein I (x) is the gray scale of any point on the circumference, I(s) is the gray scale of the circle center, epsilon d N represents the gray difference value, which is a threshold value of the gray difference value.
Secondly, optimized detection is performed on candidate points: the ORB optimized detection method accelerates feature point extraction and improves detection efficiency. For each pixel point S, directly detect the gray values at the four pixel positions in the vertical directions I(1), I(9) and the horizontal directions I(5), I(13) on the circle, and count the number M of these four positions I(t) (t = 1, 5, 9, 13) whose gray difference from the selected pixel is larger than ε_d, namely:
M = size(I ∈ {|I(t) - I(s)| > ε_d}), ε_d > 0 (3),
if S satisfies formula (3) with M ≥ 3, S is judged to be a feature point; otherwise it is directly excluded. In this way, the feature point sets of all the original images are screened out, facilitating subsequent feature matching.
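A minimal sketch of this two-stage candidate test (the quick four-point check on I(1), I(5), I(9), I(13), then the contiguous-run test on all 16 circle pixels) might look as follows; the threshold values, the helper name and the assumption that `img` is a 2-D grayscale array are illustrative only.

```python
import numpy as np

# Offsets of the 16 pixels on a radius-3 circle (FAST layout); positions
# 1, 5, 9, 13 in the text correspond to indices 0, 4, 8, 12 here.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_feature_point(img, x, y, eps_d=20, h_min=12):
    """Sketch of the two-stage candidate test; (x, y) must be >= 3 px from the border."""
    s = int(img[y, x])
    diffs = [abs(int(img[y + dy, x + dx]) - s) > eps_d for dx, dy in CIRCLE]
    # Quick test on the vertical/horizontal positions I(1), I(5), I(9), I(13):
    if sum(diffs[i] for i in (0, 4, 8, 12)) < 3:
        return False
    # Full test: at least h_min consecutive circle pixels exceed the gray threshold.
    run, best = 0, 0
    for d in diffs + diffs:          # doubled list handles wrap-around runs
        run = run + 1 if d else 0
        best = max(best, run)
    return best >= h_min
```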
(2) Feature matching and purification
First, feature matching is performed on the extracted feature points: the Hamming distance d of the features is computed and used as the evaluation criterion of feature matching similarity. For any feature point S in one image and all feature points V in the other images, the Hamming distance is d(S, V) = Σ S[i*] ⊕ V[i*], where i* = 0, 1, …, n-1, S and V are both n-bit binary codes, and ⊕ denotes the exclusive-or operation. The obtained distances are sorted, the closest point is selected as the matching point, and coarse matching point pairs are established. The matching result contains a large number of mismatches, which need to be filtered out by a screening mechanism.
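As a sketch (not the patent's own code), brute-force nearest-neighbour matching by Hamming distance over binary descriptors, e.g. the 256-bit descriptors produced by OpenCV's ORB, can be written as:

```python
import numpy as np

def hamming_match(desc1, desc2):
    """Coarse nearest-neighbour matching by Hamming distance.

    desc1, desc2: (N, 32) and (M, 32) uint8 arrays of 256-bit binary descriptors.
    Returns (i, j, distance) triples; the result still contains mismatches
    and must be purified afterwards (see the RANSAC step below).
    """
    matches = []
    for i in range(len(desc1)):
        # XOR then popcount implements d(S, V) = sum over bits of S[i*] xor V[i*].
        xor = np.bitwise_xor(desc1[i], desc2)            # (M, 32)
        dists = np.unpackbits(xor, axis=1).sum(axis=1)   # (M,) Hamming distances
        j = int(np.argmin(dists))
        matches.append((i, j, int(dists[j])))
    return matches
```

In practice, `cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)` provides an equivalent brute-force matcher.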
Secondly, feature purification is performed on the matched feature point set: the feature point set is screened using the RANSAC (Random Sample Consensus) method. Q points are selected, and according to the assumed registration line model the mapping point set P*_2 of all feature points P_1 of the first image onto the second image is computed, satisfying the mapping relation P*_2^m = f(P_1^m), where m = 1, 2, …, Q;
the Euclidean distance between each point of P*_2 and the corresponding point in the second-image feature point set P_2 is computed; with the set threshold δ, the number N_T of feature points in P*_2 whose Euclidean distance is less than δ is counted:
N_T = size(P*_2^m ∈ {||P_2^m - P*_2^m||_2 < δ}), δ > 0;
Q points are randomly selected again, the above operation is repeated K times, and N_T is recorded each time, until the iterations are finished;
finally, the registration model with N = max(N_Tk), k = 1, 2, …, K, is selected as the final fitting result, and the mismatched points are removed.
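A generic sketch of this purification loop is given below; the `fit_model` argument stands in for the assumed registration model f (a placeholder, since the concrete model is not fixed here), and the parameter values are illustrative.

```python
import numpy as np

def ransac_purify(pts1, pts2, fit_model, Q=4, K=1000, delta=3.0):
    """Sketch of steps 241-244: RANSAC purification of coarse matches.

    pts1, pts2: (N, 2) arrays of coarsely matched points (P_1 and P_2).
    fit_model:  user-supplied function (an assumption here) that fits the
                registration model f from Q sampled pairs and returns a
                callable mapping P_1 -> P_2*.
    Returns the inlier mask of the model with the largest consensus N_T.
    """
    n = len(pts1)
    best_mask, best_nt = None, -1
    for _ in range(K):
        idx = np.random.choice(n, Q, replace=False)
        f = fit_model(pts1[idx], pts2[idx])           # assumed registration model
        pred = f(pts1)                                # mapping point set P_2*
        err = np.linalg.norm(pts2 - pred, axis=1)     # Euclidean distance to P_2
        mask = err < delta
        nt = int(mask.sum())                          # N_T
        if nt > best_nt:
            best_nt, best_mask = nt, mask
    return best_mask
```

When the registration model is the homography itself, `cv2.findHomography(pts1, pts2, cv2.RANSAC, delta)` performs an equivalent purification and returns the inlier mask directly.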
Step 3: constructing a self-adaptive homography matrix, mapping the images to the same coordinate system for stitching, carrying out time sequence correction on each sequence of images, and then carrying out color correction by adopting a weighted smoothing algorithm to finish image stitching. The method comprises the following specific steps:
(1) Adaptive homography transformation
First, the homography matrix between the aerial images is calculated, the images are mapped to the same coordinate system according to the homography matrix, and a rough stitched image is obtained;
then the moving DLT (Direct Linear Transform) method is used to achieve a linear fit of the homography matrix from the matched target points. The fitting process is as follows: assuming the feature matching point sets of the two images are X and X', the homography transformation relation X' = HX is constructed, that is:
(x' y' 1)^T = H(x y 1)^T,
wherein (x' y' 1)^T are the coordinates of a feature point in X', (x y 1)^T are the coordinates of a feature point in X, and H is a 3×3 homography matrix, i.e.:
the row elements of H are denoted r_j (j = 1, 2, 3), and
it is rewritten in implicit form and linearized: X' × HX = 0, then:
with the linearized matrix A, Ah = 0 is obtained and h is solved for. The first two rows a_s of A are taken for each point pair and stacked into A ∈ R^(2N×9); singular value decomposition (SVD) is applied to A, and h is defined as the right singular vector corresponding to the minimum singular value of A; H can then be reconstructed from h;
finally, based on the construction process of the homography matrix H, a position-dependent adaptive homography matrix H* is designed which satisfies the homography transformation relation X*' = H*X*. Weights are introduced to adapt to the transformation of the data, which avoids the limitation that a traditional single homography matrix only applies to viewing angles within the same plane, and solves the ghosting or registration errors caused by a single homography transformation. The specific operation steps are:
step 1: let the feature matching point sets of the two images be X*' and X*, and construct the adaptive homography transformation relation X*' = H*X*;
step 2: rewrite the constructed adaptive homography transformation relation in implicit form and linearize it as X*' × H*X* = 0, which gives:
step 3: from the linearized matrix A*, A*h* = 0 is obtained; singular value decomposition of A* gives the right singular vector h* of the adaptive homography matrix H*:
wherein the weight represents the distance to the control point x*: the closer the pixel data, the more important it is; the variance σ² measures how far a measurement point deviates from the control point;
step 4: from the right singular vector h* obtained according to formula (4), H* is reconstructed, giving the adaptive homography matrix H*.
(2) Timing correction
The invention uses the position of the image center point after homography transformation to perform timing correction on the multi-UAV aerial image sequence. The correction formula is as follows:
wherein P_ox is the x coordinate of the center point of the o-th image after homography transformation, L is the width of the reference image, and flag_o is the timing-movement parameter of the corresponding image, taking the following values:
Each frame of the image sequence is traversed until flag_o = 0, and the corrected image sequence is output.
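Since the value formula for flag_o is not reproduced above, the sketch below only illustrates one plausible convention (flag set by whether the transformed center P_ox falls left of 0 or right of the reference width L); both the thresholds and the sign convention are assumptions.

```python
def timing_flag(p_ox, L):
    """Sketch of the timing-movement parameter flag_o (thresholds and signs assumed)."""
    if p_ox < 0:
        return -1      # use the previous frame for stitching
    if p_ox > L:
        return 1       # use the next frame for stitching
    return 0           # within range: output the corrected sequence
```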
(3) Color correction
To address the blurring, ghosting and color inconsistency of the overlapping area after splicing, the new pixel values of the overlapping area are recalculated using a linear gradation of the image, and the splicing effect is optimized through color correction.
Let α denote the distance from the pixel abscissa to the left-boundary abscissa of the overlapping region; the gray value of a pixel of the stitched image is then given by:
I(x, y) = I_l(x_l, y_l)(1 - α) + I_r(x_r, y_r)α (6),
wherein (x_l, y_l) are the left-image coordinates before stitching, (x_r, y_r) are the right-image coordinates before stitching, I_l(x_l, y_l) is the left-image pixel before stitching, I_r(x_r, y_r) is the right-image pixel before stitching, (x_lmax, y_lmax) is the maximum of the left-image coordinates before stitching, (x, y) are the coordinates after stitching, and I(x, y) is the pixel of the stitched image.
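A minimal sketch of this weighted blend over a column-aligned overlap band is given below; the band limits x_start, x_end and the assumption that both images have already been warped onto the same canvas are illustrative.

```python
import numpy as np

def blend_overlap(left, right, x_start, x_end):
    """Sketch of the weighted-smoothing blend of formula (6) over an overlap band.

    left, right: aligned images (H, W) or (H, W, 3) covering the same canvas.
    x_start, x_end: column range of the overlap region (names assumed here).
    """
    out = left.astype(float)
    width = max(x_end - x_start, 1)
    for x in range(x_start, x_end):
        alpha = (x - x_start) / width                    # distance-based weight in [0, 1]
        out[:, x] = left[:, x] * (1.0 - alpha) + right[:, x] * alpha
    out[:, x_end:] = right[:, x_end:]                    # right of the overlap: right image only
    return out.astype(left.dtype)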
Through the above processing, a good splicing effect is achieved and the stitched image is obtained.
Examples
The verification of the invention is completed in a multi-unmanned aerial vehicle cooperative target detection and identification system with the registration number 2020SR1088587.
1. Experimental environment
CPU:Intel Xeon E5-1650 v4;
RAM:32GB;
GPU: NVIDIA TITAN-X; Windows 10 system; Visual Studio 2015 + Anaconda 3.5 + Python 3.6.
2. Experimental procedure
The experiment utilizes aerial images acquired by a plurality of unmanned aerial vehicle flight platforms in a patrol flight experiment to verify the performance of the proposed splicing algorithm, and specifically comprises two experiments:
(1) Two groups of different aerial source image splicing experiments
Following the steps of the splicing method provided by the invention, the two groups of aerial source images from different sources shown in fig. 4 are each effectively stitched with a smooth seam; the stitched results are shown in fig. 5. As can be seen from the figures, the method effectively expands the viewing-angle range of the image, retains the image characteristics of both source images, and clearly displays key information, such as pits, people and lamp boards, that is occluded by large trees in the source images.
(2) Multi-scale multi-view image stitching experiment under four scenes
Four experimental scenes are constructed from scenes of key roads in a complex area to be patrolled by multiple unmanned aerial vehicles. The feature point matching of the aerial source images of the four scenes is shown in fig. 3. The images of each scene are stitched following the steps of the splicing method provided by the invention, and the stitching results are shown in fig. 6. As can be seen from the figure, the stitching accuracy of the method is high and there is no image distortion.
3. Evaluation of splice results
In order to fully verify the proposed splicing method, the splicing effect is objectively and comprehensively evaluated using evaluation parameters such as RMSE (Root Mean Square Error), the Dice distance and the Hausdorff distance.
(1) RMSE evaluation
The root mean square error (RMSE) is the root of the mean squared deviation between the transformed positions of the feature points of the image to be registered and the corresponding positions in the reference image after the two images are registered:
RMSE = sqrt((1/n) · Σ_q (Δx_q² + Δy_q²)),
where Δx_q and Δy_q are the errors of the q-th registration feature point in the x and y directions.
(2) Evaluation of Dice distance
The Dice distance is used to represent the extent to which two sets contain each other:
Dice = 2·TP / (|GT| + |PR|),
where TP is the number of correctly matched point pairs, and GT and PR are the feature point sets of the target image and the transformed image, respectively.
(3) Hausdorff distance evaluation
The Hausdorff distance is used to calculate the matching of two point sets:
H(A, B) = max(h(A, B), h(B, A)),
where h(A, B) = max_{a∈A} min_{b∈B} ||a - b||_2 and h(B, A) = max_{b∈B} min_{a∈A} ||b - a||_2.
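The three criteria can be computed directly; the Hausdorff function below follows the formula just given, while the RMSE and Dice expressions follow the standard definitions implied by the surrounding text (the patent's own formula images are not reproduced above, so treat those two as assumptions).

```python
import numpy as np

def rmse(dx, dy):
    """Root-mean-square registration error from per-point x/y deviations."""
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    return float(np.sqrt(np.mean(dx ** 2 + dy ** 2)))

def dice(tp, n_gt, n_pr):
    """Dice distance: degree to which the two feature point sets contain each other."""
    return 2.0 * tp / (n_gt + n_pr)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A (N, 2) and B (M, 2)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)   # pairwise distances
    h_ab = d.min(axis=1).max()      # max_a min_b ||a - b||
    h_ba = d.min(axis=0).max()      # max_b min_a ||b - a||
    return float(max(h_ab, h_ba))
```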
Using the three evaluation indexes above, the proposed method is compared with SURF- and SIFT-based image registration algorithms; the comparison results are shown in Table 1:
Table 1 Comparison of quantitative indexes for different methods
As can be seen from Table 1, the proposed method outperforms the SURF and SIFT algorithms on the three evaluation criteria RMSE, Dice and Hausdorff; the algorithm runs in a short time, the feature-point matching error is markedly reduced, the method adapts to the complex and changeable experimental environments, and it meets the requirements of high real-time performance and high precision for stitching multi-scale, multi-view aerial images during multi-machine collaborative inspection flight.
What is not described in detail in this specification is prior art known to those skilled in the art. Although the present invention has been described with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments described, or equivalents may be substituted for elements thereof, and any modifications, equivalents, improvements and changes may be made without departing from the spirit and principles of the present invention.

Claims (5)

1. The high-precision image splicing method under the multi-machine cooperative constraint is characterized by comprising the following steps of:
step 1: acquiring ground images in real time through multiple sensors and combining position, attitude and angle information among multiple unmanned aerial vehicles to construct a multi-machine collaborative real-time image imaging model, so as to determine an actual imaging area;
step 2: extracting a feature point set of an original image in an actual imaging area by using an ORB method, performing coarse image matching, purifying a coarse matching result by using a random sampling consistency algorithm, and removing mismatching points;
step 3: constructing a self-adaptive homography matrix, mapping the purified images to the same coordinate system for preliminary stitching, then carrying out time sequence correction on each sequence of images, and then carrying out color correction by adopting a weighted smoothing algorithm;
step 4: outputting a result image after the image stitching is completed;
the specific operation steps of the step 2 comprise:
step 21: select any pixel point S in the original image, draw a circle of radius 3 pixels centered on S, detect the 16 pixel points falling on the circle, record as h the number of consecutive pixels among these 16 neighbourhood-circle pixels whose gray value satisfies formula (2), and judge whether h is larger than the preset threshold ε_d; if so, S is judged to be a feature point. The gray-value condition satisfied by the pixels is:
|I(x) - I(s)| > ε_d, ε_d > 0 (2),
wherein I(x) is the gray value of any point on the circle, I(s) is the gray value of the circle center, ε_d is the threshold of the gray difference, and N = |I(x) - I(s)| denotes the gray difference value;
step 22: for each pixel point S, directly detect the gray values at the four pixel positions in the vertical directions I(1), I(9) and the horizontal directions I(5), I(13) on the circle, and count the number M of these four positions I(t) whose gray difference from the selected point is larger than ε_d, namely:
M = size(I ∈ {|I(t) - I(s)| > ε_d}), ε_d > 0 (3);
if S satisfies formula (3) with M ≥ 3, S is judged to be a feature point; otherwise it is directly excluded;
step 23: calculate the Hamming distance d between any feature point S selected from one image and every feature point V in the other images, sort the obtained distances d, select the closest point as the matching point, and establish coarse matching point pairs to form a feature point set;
step 24: screening the obtained feature point set by adopting a random sampling consistency method, and finally removing mismatching points to obtain a purified matching feature point set;
the specific operation steps of the step 3 comprise:
step 31: adopting a mobile DLT method, and realizing linear fitting of the adaptive homography matrix according to the coordinate values of the purified multiple groups of matched target point pairs;
step 32: perform timing correction on the multi-UAV aerial image sequence using the position of the image center point after homography transformation; the timing-movement parameter flag_o of the corresponding image takes the following values:
wherein P_ox is the x coordinate of the center point of the o-th image after homography transformation, and L is the width of the reference image;
step 33: traverse each frame of the image sequence and correct it according to the value of flag_o: when flag_o is 1, the next frame of the image to be spliced is used for splicing; when flag_o is -1, the previous frame of the image to be spliced is used; when flag_o is 0, the corrected image sequence is output;
step 34: the new pixel values of the overlapping region are recalculated using the linear gradient of the image for color correction.
2. The method for splicing high-precision images under multi-machine cooperative constraint according to claim 1, wherein the specific operation steps of the step 1 include:
step 11: resolving the acquired ground image data to obtain unmanned aerial vehicle motion parameters, so as to realize the flight control of the machine body;
step 12: the attitude angles of the unmanned aerial vehicle are obtained using the inertial navigation unit, GPS and barometer carried on the multiple unmanned aerial vehicles: pitch angle θ, roll angle φ, yaw angle ψ, as well as coordinate and flight-altitude information;
step 13: respectively establishing a ground coordinate system, a machine body coordinate system, a camera coordinate system and an image coordinate system;
step 14: combining the attitude angles of the unmanned aerial vehicle, establish the coordinate transformation relation p_g = L_b·p_b between the body coordinate system and the ground coordinate system, and establish the multi-machine collaborative real-time imaging model.
3. The method for stitching high-precision images under multi-machine collaborative constraint according to claim 2, wherein the specific operation steps of step 24 include:
step 241: select Q points from the obtained feature point set, and according to the assumed registration line model compute the mapping point set P*_2 of all feature points P_1 of the first image onto the second image, satisfying the mapping relation P*_2^m = f(P_1^m), m = 1, 2, …, Q;
step 242: compute the Euclidean distance between each point of P*_2 and the corresponding point in the second-image feature point set P_2; with the set threshold δ, count the number N_T of feature points in P*_2 whose Euclidean distance is less than δ: N_T = size(P*_2^m ∈ {||P_2^m - P*_2^m||_2 < δ}), δ > 0;
step 243: randomly select Q points again, repeat steps 241-242 K times, recording N_T each time, until the iterations are finished;
step 244: select the registration model with N = max(N_Tk), k = 1, 2, …, K, as the final fitting result, thereby rejecting the mismatched points.
4. A method for stitching high-precision images under a multi-machine collaborative constraint according to claim 3, wherein the specific operation steps of step 31 include:
step 311: let the feature matching point sets of the two images be X*' and X*, and construct the adaptive homography transformation relation X*' = H*X*;
step 312: rewrite the constructed adaptive homography transformation relation in implicit form and linearize it as X*' × H*X* = 0, which gives:
step 313: from the linearized matrix A*, A*h* = 0 is obtained; singular value decomposition of A* gives the right singular vector h* of the adaptive homography matrix H*:
wherein the weight represents the distance to the control point x*: the closer the pixel data, the more important it is; the variance σ² measures how far a measurement point deviates from the control point;
step 314: from the obtained right singular vector h*, the adaptive homography matrix H* can be reconstructed.
5. The method for high-precision image stitching under multi-machine collaborative constraint according to claim 4, wherein the formula for calculating the new pixel values of the overlapping area in step 34 is:
I(x, y) = I_l(x_l, y_l)(1 - α) + I_r(x_r, y_r)α (6),
wherein α represents the distance from the pixel abscissa to the left-boundary abscissa of the overlap region, (x_l, y_l) are the left-image coordinates before stitching, (x_r, y_r) are the right-image coordinates before stitching, I_l(x_l, y_l) is the left-image pixel before stitching, I_r(x_r, y_r) is the right-image pixel before stitching, (x_lmax, y_lmax) is the maximum of the left-image coordinates before stitching, (x, y) are the coordinates after stitching, and I(x, y) is the pixel of the stitched image.
CN202110450860.5A 2021-04-25 2021-04-25 High-precision image stitching method under multi-machine cooperative constraint Active CN113313659B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110450860.5A CN113313659B (en) 2021-04-25 2021-04-25 High-precision image stitching method under multi-machine cooperative constraint

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110450860.5A CN113313659B (en) 2021-04-25 2021-04-25 High-precision image stitching method under multi-machine cooperative constraint

Publications (2)

Publication Number Publication Date
CN113313659A CN113313659A (en) 2021-08-27
CN113313659B true CN113313659B (en) 2024-01-26

Family

ID=77371027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110450860.5A Active CN113313659B (en) 2021-04-25 2021-04-25 High-precision image stitching method under multi-machine cooperative constraint

Country Status (1)

Country Link
CN (1) CN113313659B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114154352B (en) * 2022-01-18 2024-05-17 中国科学院长春光学精密机械与物理研究所 Topology structure design method and device for cooperative control of aviation imaging multiple actuators
CN114820737B (en) * 2022-05-18 2024-05-07 浙江圣海亚诺信息技术有限责任公司 Remote sensing image registration method based on structural features
CN116681590B (en) * 2023-06-07 2024-03-12 中交广州航道局有限公司 Quick splicing method for aerial images of unmanned aerial vehicle
CN117036666B (en) * 2023-06-14 2024-05-07 北京自动化控制设备研究所 Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014023231A1 (en) * 2012-08-07 2014-02-13 泰邦泰平科技(北京)有限公司 Wide-view-field ultrahigh-resolution optical imaging system and method
CN112435163A (en) * 2020-11-18 2021-03-02 大连理工大学 Unmanned aerial vehicle aerial image splicing method based on linear feature protection and grid optimization

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101984463A (en) * 2010-11-02 2011-03-09 中兴通讯股份有限公司 Method and device for synthesizing panoramic image
US11164295B2 (en) * 2015-09-17 2021-11-02 Michael Edwin Stewart Methods and apparatus for enhancing optical images and parametric databases

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014023231A1 (en) * 2012-08-07 2014-02-13 泰邦泰平科技(北京)有限公司 Wide-view-field ultrahigh-resolution optical imaging system and method
CN112435163A (en) * 2020-11-18 2021-03-02 大连理工大学 Unmanned aerial vehicle aerial image splicing method based on linear feature protection and grid optimization

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a color image stitching method based on SIFT features; Zhang Yongmei; Zhang Chenxi; Guo Sha; Computer Measurement & Control (No. 08); full text *

Also Published As

Publication number Publication date
CN113313659A (en) 2021-08-27

Similar Documents

Publication Publication Date Title
CN113313659B (en) High-precision image stitching method under multi-machine cooperative constraint
CN108648240B (en) Non-overlapping view field camera attitude calibration method based on point cloud feature map registration
CN106651942B (en) Three-dimensional rotating detection and rotary shaft localization method based on characteristic point
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN107063228B (en) Target attitude calculation method based on binocular vision
CN114936971A (en) Unmanned aerial vehicle remote sensing multispectral image splicing method and system for water area
CN111507901B (en) Aerial image splicing and positioning method based on aerial GPS and scale invariant constraint
CN107578376B (en) Image splicing method based on feature point clustering four-way division and local transformation matrix
CN112734841B (en) Method for realizing positioning by using wheel type odometer-IMU and monocular camera
CN108665499B (en) Near distance airplane pose measuring method based on parallax method
CN113850126A (en) Target detection and three-dimensional positioning method and system based on unmanned aerial vehicle
CN107560603B (en) Unmanned aerial vehicle oblique photography measurement system and measurement method
Urban et al. Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds
CN107677274A (en) Unmanned plane independent landing navigation information real-time resolving method based on binocular vision
CN115187798A (en) Multi-unmanned aerial vehicle high-precision matching positioning method
CN109871739B (en) Automatic target detection and space positioning method for mobile station based on YOLO-SIOCTL
CN111967337A (en) Pipeline line change detection method based on deep learning and unmanned aerial vehicle images
CN109671109B (en) Dense point cloud generation method and system
CN110084743B (en) Image splicing and positioning method based on multi-flight-zone initial flight path constraint
CN111060006A (en) Viewpoint planning method based on three-dimensional model
CN114022560A (en) Calibration method and related device and equipment
Zhao et al. RTSfM: Real-time structure from motion for mosaicing and DSM mapping of sequential aerial images with low overlap
CN111047631A (en) Multi-view three-dimensional point cloud registration method based on single Kinect and round box
CN113624231A (en) Inertial vision integrated navigation positioning method based on heterogeneous image matching and aircraft
CN111798453A (en) Point cloud registration method and system for unmanned auxiliary positioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant