CN110033411B - High-efficiency road construction site panoramic image splicing method based on unmanned aerial vehicle


Info

Publication number
CN110033411B
CN110033411B
Authority
CN
China
Prior art keywords
point
image
pixel
suture line
images
Prior art date
Legal status
Active
Application number
CN201910292872.2A
Other languages
Chinese (zh)
Other versions
CN110033411A (en)
Inventor
李顺龙
徐阳
牛皓伟
郭亚朋
李忠龙
焦兴华
鄂宇辉
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201910292872.2A priority Critical patent/CN110033411B/en
Publication of CN110033411A publication Critical patent/CN110033411A/en
Application granted granted Critical
Publication of CN110033411B publication Critical patent/CN110033411B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an efficient unmanned-aerial-vehicle-based method for splicing panoramic images of a highway construction site. By correcting the geographic-information coordinates and attitude parameters of the aerial images, selecting key splicing regions, matching feature points efficiently, and splicing images rapidly based on an optimal suture line and image fusion, the method solves three problems that arise while the unmanned aerial vehicle cruises: deviation of the local coordinate system, low efficiency of feature-point matching over the whole image, and splicing blur and ghosting caused by dynamic targets. The invention is suitable for overall safety supervision and management of highway engineering construction sites.

Description

High-efficiency road construction site panoramic image splicing method based on unmanned aerial vehicle
Technical Field
The invention relates to a method for efficiently splicing panoramic images of a highway construction site based on an unmanned aerial vehicle.
Background
With the accelerating development of transportation infrastructure in China, construction quality and safety accidents in traffic engineering, particularly highway engineering, have become frequent. At present, safety supervision and management of highway construction sites mostly rely on traditional means such as manual observation with telescopes and cameras, which suffer from poor autonomy, low flexibility, restricted observation areas, blind spots, strong terrain dependence, and an inability to achieve global management of the construction site. In view of these problems, scholars at home and abroad have studied vision-based engineering safety management methods. With the recent development of image-acquisition hardware such as unmanned aerial vehicles and of software technologies such as computer vision and image processing, several methods now generate construction-site panoramic images by unmanned aerial vehicle imaging and image splicing, laying a research foundation for overall safety supervision and management of entire construction sites. However, these methods are often ineffective in practical engineering applications, for three reasons. First, the unmanned aerial vehicle develops local coordinate-system deviation while cruising, such as view-angle fluctuation within the cruise plane; traditional methods do not correct the geographic-information coordinates and attitude parameters of the unmanned aerial vehicle, which distorts local images and produces conspicuous splicing errors when the panoramic image is generated. Second, most current image-splicing algorithms match feature points over the whole image area, so processing efficiency is low, and real-time or near-real-time splicing of multiple high-resolution images of an actual construction site is very difficult. Third, because of unavoidable natural wind and the need to track dynamic targets on the construction site, panoramic images spliced by traditional methods contain blur and ghosting. How to splice panoramic images efficiently and accurately despite the position and attitude changes of the unmanned aerial vehicle during cruising is a problem demanding an urgent solution.
Disclosure of Invention
To address these defects, the invention provides an efficient unmanned-aerial-vehicle-based method for splicing panoramic images of a road construction site, solving the problems of local coordinate-system deviation, low efficiency of whole-image feature-point matching, and splicing blur and ghosting caused by dynamic targets during unmanned aerial vehicle cruising.
The technical solution adopted by the invention is as follows: an efficient unmanned-aerial-vehicle-based road construction site panoramic image splicing method comprising the following steps:
firstly, the geographic information and attitude parameters of the images acquired by the unmanned aerial vehicle are extracted, the position of the unmanned aerial vehicle is converted from the geographic coordinate system to a local coordinate system based on Gaussian projection and coordinate rotation and translation, the homography matrix is corrected according to the attitude parameters of the unmanned aerial vehicle, and the image distortion errors caused by wind-induced vibration and cruise-angle deviation are eliminated;
secondly, the images corrected by the geographic information and attitude parameters are matched pairwise between neighbours, a local key splicing region for the feature points is selected based on the maximum of local pixel variation, and feature points are matched within the key region based on ORB features;
and thirdly, optimal suture-line search between adjacent images and a segmented weighted fusion algorithm at the image boundaries are performed iteratively, based on the principle of minimum color and geometric error and on a weighted average over the overlapping region, eliminating splicing blur and ghosting to obtain the final panoramic spliced image.
The invention also has the following technical characteristics:
1. The first step specifically includes:
step one, a flight control platform is used to control the flight direction and speed of the unmanned aerial vehicle, ensuring a 50% overlap rate between adjacent images and realizing continuous processing of multiple images;
step two, the images obtained in step one are numbered consecutively, the extracted geographic information and attitude parameters are corrected, and homography matrix conversion and registration are performed on the multiple images.
The forward formula of the Gauss-Krüger projection relating the plane rectangular coordinates to the geographic coordinates is:
$$
\begin{aligned}
x &= s + \frac{N}{2}\sin B\cos B\,l^{2} + \frac{N}{24}\sin B\cos^{3}B\left(5 - t^{2} + 9\eta^{2} + 4\eta^{4}\right)l^{4}\\
y &= N\cos B\,l + \frac{N}{6}\cos^{3}B\left(1 - t^{2} + \eta^{2}\right)l^{3} + \frac{N}{120}\cos^{5}B\left(5 - 18t^{2} + t^{4} + 14\eta^{2} - 58t^{2}\eta^{2}\right)l^{5}
\end{aligned}
\tag{1}
$$
where x and y are the abscissa and ordinate in the plane rectangular coordinate system; L and B are the longitude and latitude in the ellipsoidal geographic coordinate system; l = L - L0 is the longitude offset from the central meridian and t = tan B; s is the meridian arc length from the equator; and N and eta are the radius of curvature in the prime vertical and an intermediate variable, respectively, computed as:
$$
N = \frac{a}{\sqrt{1 - e^{2}\sin^{2}B}}, \qquad \eta = e'\cos B
\tag{2}
$$
where a is the semi-major axis of the ellipsoid, and e and e' are its first and second eccentricities, respectively.
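By way of illustration, the following Python sketch evaluates the Gauss-Krüger forward projection reconstructed in equations (1)-(2). The WGS-84 ellipsoid constants, the central meridian, and the sample coordinates are assumptions for demonstration, not values taken from the patent.

```python
# Hedged sketch of the Gauss-Krueger forward projection of equations (1)-(2).
# Assumes the WGS-84 ellipsoid; the patent does not state which ellipsoid is used.
import math

A_AXIS = 6378137.0                 # semi-major axis a (WGS-84, assumed)
F = 1.0 / 298.257223563            # flattening
E2 = F * (2.0 - F)                 # first eccentricity squared, e^2
EP2 = E2 / (1.0 - E2)              # second eccentricity squared, e'^2

def meridian_arc(B):
    """Meridian arc length s from the equator to latitude B (radians)."""
    e2, e4, e6 = E2, E2 ** 2, E2 ** 3
    a0 = 1 - e2 / 4 - 3 * e4 / 64 - 5 * e6 / 256
    a2 = 3.0 / 8 * (e2 + e4 / 4 + 15 * e6 / 128)
    a4 = 15.0 / 256 * (e4 + 3 * e6 / 4)
    a6 = 35.0 * e6 / 3072
    return A_AXIS * (a0 * B - a2 * math.sin(2 * B)
                     + a4 * math.sin(4 * B) - a6 * math.sin(6 * B))

def gauss_krueger_forward(B_deg, L_deg, L0_deg):
    """Plane coordinates (x north, y east) for geographic latitude/longitude."""
    B = math.radians(B_deg)
    l = math.radians(L_deg - L0_deg)        # offset l from the central meridian
    t = math.tan(B)
    eta2 = EP2 * math.cos(B) ** 2           # eta^2 of equation (2)
    N = A_AXIS / math.sqrt(1 - E2 * math.sin(B) ** 2)  # radius of curvature N
    c = math.cos(B)
    x = (meridian_arc(B)
         + N / 2 * math.sin(B) * c * l ** 2
         + N / 24 * math.sin(B) * c ** 3
           * (5 - t ** 2 + 9 * eta2 + 4 * eta2 ** 2) * l ** 4)
    y = (N * c * l
         + N / 6 * c ** 3 * (1 - t ** 2 + eta2) * l ** 3
         + N / 120 * c ** 5
           * (5 - 18 * t ** 2 + t ** 4 + 14 * eta2 - 58 * t ** 2 * eta2) * l ** 5)
    return x, y

# Example: a point near Harbin projected against an assumed 129-degree central meridian.
print(gauss_krueger_forward(45.75, 126.63, 129.0))
```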
2. In the second step, the method for selecting the key splicing area includes:
$$
(x^{*},\,y^{*}) = \arg\max_{(x,y)\in I_{i}\cap I_{i+1}}\left(\left|\frac{\partial I}{\partial x}\right| + \left|\frac{\partial I}{\partial y}\right|\right)
\tag{3}
$$
where x and y represent the pixel coordinates in the width and height directions, respectively; I represents the gray level of the image; and I_i ∩ I_{i+1} represents the overlapping area of the i-th and (i+1)-th images, whose overlap rate is controlled at 50% by the flight control platform.
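Since equation (3) is only recoverable in outline, the following is a minimal sketch of one way to evaluate a local-pixel-variation criterion over the 50% overlap strip with OpenCV; the gradient-magnitude measure and all names are assumptions consistent with the text, not the patent's exact formula.

```python
# Hedged sketch: locate the maximum of local pixel variation in the overlap strip.
import cv2
import numpy as np

def key_region_anchor(img_color):
    """Return (x, y) of the strongest local pixel variation in the lower 50% strip."""
    h = img_color.shape[0]
    strip = cv2.cvtColor(img_color[h // 2:, :], cv2.COLOR_BGR2GRAY)  # 50% overlap
    gx = cv2.Sobel(strip, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(strip, cv2.CV_32F, 0, 1)
    variation = np.abs(gx) + np.abs(gy)      # assumed |dI/dx| + |dI/dy| measure
    yy, xx = np.unravel_index(np.argmax(variation), variation.shape)
    return int(xx), int(yy) + h // 2         # back to full-image coordinates
```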
3. In the second step, after the key splicing region has been selected, ORB features are extracted as follows. Any pixel of the image is taken as a center and a circle of fixed radius is drawn on the image; the gray values of the pixels crossed by the circular arc are collected and compared with the gray value of the central pixel, and the number of gray differences exceeding a set threshold is counted as the basis for judging whether the central pixel is a candidate feature point. The radius of the circular template is 3 pixels, so a point p to be detected is compared with the 16 surrounding pixels forming the circle, judging whether enough of them differ from p in attribute; if so, p is a corner point. In the gray image, the algorithm compares the gray value of each such pixel with that of p: if n consecutive pixels are all brighter or all darker than p, then p is a corner point, with n = 9. Next, N point pairs are selected in a fixed pattern around the keypoint p, and the comparison results of the N point pairs are combined into a descriptor: with keypoint p as center and d as radius, a circle O is drawn, and N point pairs are selected within O in a certain pattern, where N may be 512. A two-dimensional coordinate system is established with the keypoint as origin and the line connecting the keypoint to the centroid of the sampling region as the X axis. Two points are successfully matched when their similarity exceeds a threshold.
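For reference, here is a short sketch of this ORB detect-and-match step using OpenCV's standard API rather than a from-scratch implementation; the feature count and ratio-test threshold are assumed values.

```python
# Sketch of ORB feature matching between two key-region crops (OpenCV API).
import cv2

def match_key_regions(roi_prev, roi_next, ratio=0.75):
    """Detect ORB features in two key-region crops and return good matches."""
    orb = cv2.ORB_create(nfeatures=2000)     # FAST corners + rotated BRIEF descriptors
    kp1, des1 = orb.detectAndCompute(roi_prev, None)
    kp2, des2 = orb.detectAndCompute(roi_next, None)
    # Hamming distance suits binary BRIEF descriptors; the ratio test prunes ambiguity.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]
    return kp1, kp2, good
```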
4. In the third step, the objective optimization function of the optimal suture line is specifically:
$$E(x,y) = E_{\text{color}}(x,y)^{2} + E_{\text{geometry}}(x,y) \tag{4}$$

$$S_{x} = \begin{bmatrix} -1 & 0 & 1\\ -2 & 0 & 2\\ -1 & 0 & 1 \end{bmatrix},\qquad S_{y} = \begin{bmatrix} -1 & -2 & -1\\ 0 & 0 & 0\\ 1 & 2 & 1 \end{bmatrix} \tag{5}$$

$$E_{\text{color}} = \Delta I_{i} = I_{i+1} - I_{i} \tag{6}$$

$$E_{\text{geometry}} = \left(S_{x} \otimes \Delta I_{i}\right)\cdot\left(S_{y} \otimes \Delta I_{i}\right) \tag{7}$$

where E is the objective optimization function of the optimal suture line, E_color the difference between the color values of overlapping pixels in the two original images, and E_geometry the structural difference of those overlapping pixels; S_x and S_y are the Sobel gradient operators, I_i and I_{i+1} the two adjacent images, and ⊗ denotes convolution.
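A sketch of evaluating this criterion over the aligned overlap, following equations (4)-(7) as reconstructed above; it assumes single-channel (gray) overlap crops, and the names are illustrative.

```python
# Sketch: per-pixel seam criterion E = E_color^2 + E_geometry over the overlap.
import cv2
import numpy as np

def seam_criterion(overlap_i, overlap_j):
    """Criterion map E for two aligned gray overlap crops of equal shape."""
    d = overlap_j.astype(np.float32) - overlap_i.astype(np.float32)  # E_color (eq. 6)
    gx = cv2.Sobel(d, cv2.CV_32F, 1, 0, ksize=3)   # S_x convolved with delta-I
    gy = cv2.Sobel(d, cv2.CV_32F, 0, 1, ksize=3)   # S_y convolved with delta-I
    return d ** 2 + gx * gy                        # eq. (4) combining eqs. (6)-(7)
```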
5. The segmented weighted fusion algorithm at the image boundary in the third step is specifically:
$$
f(x,y) =
\begin{cases}
f_{1}(x,y), & (x,y)\in f_{1}\setminus R\\
d_{1}(x,y)\,f_{1}(x,y) + d_{2}(x,y)\,f_{2}(x,y), & (x,y)\in R\\
f_{2}(x,y), & (x,y)\in f_{2}\setminus R
\end{cases}
\qquad d_{1} = 1 - \frac{y}{h},\quad d_{2} = \frac{y}{h}
\tag{8}
$$
where (x, y) ∈ R denotes a pixel in the key splicing region R; f(x, y) is the weighted-fused image; f_i(x, y) is the i-th original image, with i = 1, 2 indexing the two consecutive adjacent images to be spliced; d_i(x, y) is a segmented weighting coefficient that varies with pixel position, changing linearly along the height direction of the image with values in the range 0-1; and h is the image height of the key splicing region. The segmented weighted fusion algorithm has the advantages of fast computation and clear physical meaning. In each image to be spliced, the farther a position lies from the previous image (the closer y is to h) and the closer it lies to the optimal suture line, the closer the corresponding weight coefficient is to 1 and the larger its effect in image fusion. Once image fusion is completed near the optimal suture line, the splicing blur and ghosting produced by traditional methods are eliminated.
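A minimal sketch of this segmented linear weighting follows, assuming d2 rises linearly from 0 to 1 across the key-region height h and d1 = 1 - d2; the exact piecewise form is rendered as an image in the source, so this is one consistent reading.

```python
# Hedged sketch: linear weighted fusion of two aligned key-region crops.
import numpy as np

def blend_overlap(region_prev, region_next):
    """Row-wise linear blend; crops must have identical shape (gray or color)."""
    h = region_prev.shape[0]
    shape = (h,) + (1,) * (region_prev.ndim - 1)   # broadcast over width/channels
    d2 = np.linspace(0.0, 1.0, h, dtype=np.float32).reshape(shape)  # d2 ~ y / h
    d1 = 1.0 - d2                                   # d1 + d2 = 1 at every row
    fused = d1 * region_prev.astype(np.float32) + d2 * region_next.astype(np.float32)
    return fused.astype(np.uint8)
```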
The beneficial effects of the invention are as follows. Aiming at the problems of local coordinate-system deviation, low efficiency of whole-image feature-point matching, and splicing blur and ghosting caused by dynamic targets during unmanned aerial vehicle cruising, the invention realizes panoramic high-resolution image splicing for construction-site unmanned aerial vehicles by correcting the geographic-information coordinates and attitude parameters of the aerial images, selecting key splicing regions, matching feature points efficiently, and splicing images rapidly based on an optimal suture line and image fusion. The method improves both the computational efficiency of panoramic high-resolution image splicing and the accuracy of the splicing result, and markedly reduces the manual effort required by traditional methods. The invention also meets the requirements of on-line safety monitoring, early warning, and real-time data processing on a construction site: acquired images are transmitted and spliced directly, with an output delay below ten seconds. The invention improves the automation, intelligence, and accuracy of overall construction-site safety supervision and provides a solution for the overall safety supervision and management of traffic-engineering construction sites.
Drawings
FIG. 1 is a flow chart of one embodiment of the present invention;
FIG. 2 is a flow chart of a core algorithm of the present invention;
FIG. 3 is a diagram showing the result of selecting the key area in step two of the present invention;
FIG. 4 is a diagram showing the result of ORB feature point matching in the key area in step two of the present invention;
FIG. 5 is a graph of the optimal suture line results of step three of the present invention, wherein the black broken line represents the optimal suture line of two adjacent images;
FIG. 6 is a global high-definition splicing result diagram of a highway engineering construction site performed by the embodiment of the invention;
FIG. 7 is a diagram of the blur and ghost elimination effect of the present invention, wherein FIG. 7(a) is a local stitching blur and ghost map generated by the conventional method, and FIG. 7(b) is a high resolution result map generated by the present invention.
Detailed Description
Example 1
This embodiment is a method for efficiently splicing panoramic images of a highway engineering construction site based on correction of unmanned aerial vehicle geographic information and attitude parameters; as shown in fig. 1, it comprises the following steps.
the method comprises the steps of firstly, extracting geographic information and attitude parameters of an image acquired by the unmanned aerial vehicle, realizing conversion of position information of the unmanned aerial vehicle from a geographic coordinate system to a local coordinate system based on Gaussian projection and coordinate rotation translation, correcting a homography matrix according to the attitude parameters of the unmanned aerial vehicle, and eliminating image distortion errors caused by deviation of a wind-induced vibration cruise angle.
For example, in one embodiment the resolution of a single original color image is 5472 × 3684; the geographic information of the shooting position, such as longitude, latitude, and elevation, and the attitude parameters, such as pitch, heading, and roll angles, are extracted from the original image. Homography matrix conversion is then performed on the multiple images according to the pairwise matching of adjacent images, yielding a continuous registration result for the image sequence.
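As a sketch of this pairwise registration step, the following fits a homography with RANSAC via OpenCV from already-matched ORB keypoints (for example, those returned by the matching sketch given earlier in the disclosure); the reprojection threshold of 3.0 pixels is an assumed value.

```python
# Sketch: homography registration of image i+1 onto image i from ORB matches.
import cv2
import numpy as np

def estimate_homography(kp_prev, kp_next, good_matches):
    """Fit H mapping the next image onto the previous one, with RANSAC."""
    src = np.float32([kp_next[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_prev[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inlier_mask
```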
Second, the images corrected by the geographic information and attitude parameters are matched pairwise between neighbours, a local key splicing region for the feature points is selected based on the maximum of local pixel variation, and feature points are matched within the key region based on ORB (Oriented FAST and Rotated BRIEF) features.
The local key splicing region is selected within the overlapping area of the images to be spliced. The height of the key splicing region is 50% of the image height, namely 1842 pixels. As for its width: the road-width edge in the analyzed images is inclined at 4-6 degrees, occupying 125-190 pixels across half the image, so 150 pixels are used for automatic frame selection. Because site conditions such as wind cause lateral image offsets of up to 100 pixels, 100 pixels are added on each side of adjacent images during frame selection to guarantee that the target area is captured. The key splicing region is therefore 300 pixels wide in the preceding image and 500 pixels wide in the following image, ensuring the feature-point matching effect within the region. Fig. 3 shows the key-region selection result for adjacent images, and fig. 4 shows the ORB feature-point matching result within the key region.
Third, optimal suture-line search between adjacent images and image-boundary fusion are performed iteratively, based on the principle of minimum color and geometric error and on a weighted average over the overlapping region, eliminating splicing blur and ghosting to obtain the final panoramic spliced image.
The optimal suture line is searched as follows. First, a difference image is generated by differencing the overlapping parts of the two images according to the principle of minimum color and geometric error. Then, applying the idea of dynamic programming to the difference image, a suture line is established starting from each pixel of the first row of the overlapping area. Finally, the optimal suture line is selected from these suture lines. The specific steps are: (a) initialize one suture line per column of the first row, set its intensity value to the criterion value of that point, and set the current point of the suture line to its column value; (b) extend each suture line downward row by row until the last row: compare the criterion values of the 3 pixels in the next row adjacent to the current point, take the pixel giving the minimum resulting intensity as the extension direction, update the suture line's intensity value to that minimum, and update the current point to the column of the chosen pixel; (c) select as the optimal suture line the one with the minimum intensity value among all suture lines. The picture input to the model is made to conform to the size of the pictures input during training. The black broken line in fig. 5 represents the optimal suture line of the two adjacent images.
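The dynamic-programming search just described can be sketched as follows; `best_seam` is an illustrative name, and the criterion map E is assumed to come from equations (4)-(7).

```python
# Sketch of the dynamic-programming suture-line search described above.
import numpy as np

def best_seam(E):
    """E: (h, w) criterion map. Returns the seam's column index for each row."""
    h, w = E.shape
    cost = E.astype(np.float64).copy()       # running suture-line intensity values
    step = np.zeros((h, w), dtype=np.int64)  # predecessor column for backtracking
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 2, w)      # the 3 adjacent pixels above
            k = lo + int(np.argmin(cost[r - 1, lo:hi]))
            step[r, c] = k
            cost[r, c] += cost[r - 1, k]               # extend with minimum intensity
    seam = np.empty(h, dtype=np.int64)
    seam[-1] = int(np.argmin(cost[-1]))      # suture line with minimum total intensity
    for r in range(h - 1, 0, -1):            # walk back up the stored predecessors
        seam[r - 1] = step[r, seam[r]]
    return seam
```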
This embodiment was implemented under MATLAB 2016a and OpenCV 2.0 and applies directly to construction-site images shot by consumer-grade unmanned aerial vehicles at a cruising height of 30 meters, without special imaging or detection equipment. The method offers high splicing precision, high speed, and low cost; it can be used both for offline overall safety assessment of the construction site and for quasi-real-time monitoring, with a processing delay within 5 seconds, improving the automation, intelligence, accuracy, and processing efficiency of overall construction-site safety supervision.
Figs. 6 and 7 show the splicing effects of an embodiment of the invention: fig. 6 is the high-definition panorama after 8 images are continuously spliced; fig. 7(a) shows the local splicing blur and ghosting produced by a traditional method; and fig. 7(b) shows the high-resolution result produced by the invention.
Example 2
This embodiment is substantially the same as example 1 except that: the first step specifically comprises the following steps:
the method comprises the steps of controlling the flight direction and speed of the unmanned aerial vehicle by adopting a flight control platform PIX4D, ensuring that the overlapping rate of adjacent images is ensured to be 50%, and realizing continuous processing of a plurality of images.
Step two: the images obtained in step one are numbered consecutively, the extracted geographic information and attitude parameters are corrected, and homography matrix conversion and registration are performed on the multiple images.
The forward formula of the Gauss-Krüger projection relating the plane rectangular coordinates to the geographic coordinates is:
$$
\begin{aligned}
x &= s + \frac{N}{2}\sin B\cos B\,l^{2} + \frac{N}{24}\sin B\cos^{3}B\left(5 - t^{2} + 9\eta^{2} + 4\eta^{4}\right)l^{4}\\
y &= N\cos B\,l + \frac{N}{6}\cos^{3}B\left(1 - t^{2} + \eta^{2}\right)l^{3} + \frac{N}{120}\cos^{5}B\left(5 - 18t^{2} + t^{4} + 14\eta^{2} - 58t^{2}\eta^{2}\right)l^{5}
\end{aligned}
\tag{1}
$$
where x and y are the abscissa and ordinate in the plane rectangular coordinate system; L and B are the longitude and latitude in the ellipsoidal geographic coordinate system; l = L - L0 is the longitude offset from the central meridian and t = tan B; s is the meridian arc length from the equator; and N and eta are the radius of curvature in the prime vertical and an intermediate variable, respectively, computed as:
$$
N = \frac{a}{\sqrt{1 - e^{2}\sin^{2}B}}, \qquad \eta = e'\cos B
\tag{2}
$$
where a is the semi-major axis of the ellipsoid, and e and e' are its first and second eccentricities, respectively.
the method has the advantages that local coordinate conversion is achieved by extracting the geographic coordinate information of the unmanned aerial vehicle; the overlapping rate of adjacent images is ensured to be 50% through the flight control platform, and the continuous splicing of the construction site global images can be realized.
In this embodiment, the construction site is fully covered by 8 images with a 50% overlap rate collected over a road-pavement construction section, and the geographic coordinates are projected onto a plane coordinate system by the Gauss-Krüger projection to compute relative positions. The geographic information and attitude parameters of the 8 consecutive images acquired by the unmanned aerial vehicle are extracted and corrected based on Gaussian projection and coordinate rotation and translation; the results are shown in Table 1.
Table 1. Geographic information and attitude parameter correction results (rendered as an image in the original document).
The other steps were the same as in example 1.
Example 3
This embodiment is substantially the same as example 1 except that: in the second step, the selection principle of the key area is
$$
(x^{*},\,y^{*}) = \arg\max_{(x,y)\in I_{i}\cap I_{i+1}}\left(\left|\frac{\partial I}{\partial x}\right| + \left|\frac{\partial I}{\partial y}\right|\right)
\tag{3}
$$
where x and y represent the pixel coordinates in the width and height directions, respectively; I represents the gray level of the image; and I_i ∩ I_{i+1} represents the overlapping area of the i-th and (i+1)-th images, whose overlap rate is controlled at 50% by the flight control platform.
After the key area selection is completed, the extraction flow of the ORB features is as follows:
based on FAST corner detection and BRIEF feature descriptors, ORB features have good robustness and real-time, and the computation cost and memory requirement are both low. Firstly, taking any pixel on an image as a circle center, making a circle on the image by using a fixed radius, counting the gray value of the pixel through which a peripheral arc passes, then comparing the gray values of the peripheral arc pixel and a central point pixel, counting the number of gray difference values larger than a set threshold value, and taking the number as a basis for judging whether the central pixel point is a candidate characteristic point. The radius of a commonly used circular template is 3 pixels, a point p to be detected is compared with pixels in a circle formed by 16 pixels around the point p to be detected, whether enough pixels are different from the p in attribute is judged, if yes, the p can be an angular point, in a gray image, an algorithm is to compare the gray value of each point with the p, and if n continuous pixels are brighter or darker than the p, the p can be the angular point. Through tests, n is 9, and the processing effect, the speed and the robustness obtained by the algorithm are very good. Then, N point pairs are selected in a certain pattern around the key point P, and the comparison results of the N point pairs are combined to be used as a descriptor. And D is taken as the radius of the circle O with the key point P as the center of the circle, and N point pairs are selected in a certain mode in the circle O. In practical application, N may be 512. And establishing a two-dimensional coordinate system by taking the key point as a circle center and taking a connecting line of the key point and the centroid of the point taking area as an X axis. Under different rotation angles, the points extracted in the same point extraction mode are consistent, so that the problem of rotation consistency is solved. And finally, setting thresholds, such as A:10101011 and B:10101010, according to the feature descriptors, and when the similarity of the two points is greater than the threshold, successfully matching the two points.
The other steps and parameters were the same as in example 1.
Example 4
This embodiment is substantially the same as example 1 except that: in the third step, the objective optimization function of the optimal suture line is specifically as follows:
$$E(x,y) = E_{\text{color}}(x,y)^{2} + E_{\text{geometry}}(x,y) \tag{4}$$

$$S_{x} = \begin{bmatrix} -1 & 0 & 1\\ -2 & 0 & 2\\ -1 & 0 & 1 \end{bmatrix},\qquad S_{y} = \begin{bmatrix} -1 & -2 & -1\\ 0 & 0 & 0\\ 1 & 2 & 1 \end{bmatrix} \tag{5}$$

$$E_{\text{color}} = \Delta I_{i} = I_{i+1} - I_{i} \tag{6}$$

$$E_{\text{geometry}} = \left(S_{x} \otimes \Delta I_{i}\right)\cdot\left(S_{y} \otimes \Delta I_{i}\right) \tag{7}$$

where E is the objective optimization function of the optimal suture line, E_color the difference between the color values of overlapping pixels in the two original images, and E_geometry the structural difference of those overlapping pixels; S_x and S_y are the Sobel gradient operators, I_i and I_{i+1} the two adjacent images, and ⊗ denotes convolution.
The other steps and parameters were the same as in example 1.
Example 5
This embodiment is substantially the same as example 1 except that in step three a segmented weighted fusion algorithm is adopted. If images were simply superimposed during splicing, obvious seams would appear at the spliced positions; the segmented weighted fusion algorithm is introduced to eliminate these splicing seams. The weighted-average weight function is chosen by a gradual fade-in/fade-out method, using the Euclidean distance from a pixel to the center of the overlapping area as the weight function; as the image transitions across the overlap region, the weight changes gradually from 1 to 0. The weighted-average algorithm handles exposure differences well and is fast, simple to implement, and suitable for real-time use. On a consumer notebook with 8 GB DDR3 memory and an Intel i7-4790 CPU, under MATLAB 2016a and OpenCV 2.0, the processing time is about 4 s, versus about 13 s for a traditional image-splicing method based on greedy SIFT feature matching, an efficiency improvement of nearly a factor of three.
The other steps and parameters were the same as in example 1.

Claims (1)

1. An efficient road construction site panoramic image splicing method based on an unmanned aerial vehicle is characterized by comprising the following steps:
firstly, a flight control platform is used to control the flight direction and speed of the unmanned aerial vehicle, ensuring a 50% overlap rate between adjacent images and realizing continuous processing of multiple images; the obtained images are numbered consecutively, the extracted geographic information and attitude parameters of the unmanned aerial vehicle are corrected, homography matrix conversion and registration are performed on the multiple images, and the image distortion errors caused by wind-induced vibration and cruise-angle deviation are eliminated;
the forward formula of the Gauss-Krüger projection relating the plane rectangular coordinates to the geographic coordinates being:
$$
\begin{aligned}
x &= s + \frac{N}{2}\sin B\cos B\,l^{2} + \frac{N}{24}\sin B\cos^{3}B\left(5 - t^{2} + 9\eta^{2} + 4\eta^{4}\right)l^{4}\\
y &= N\cos B\,l + \frac{N}{6}\cos^{3}B\left(1 - t^{2} + \eta^{2}\right)l^{3} + \frac{N}{120}\cos^{5}B\left(5 - 18t^{2} + t^{4} + 14\eta^{2} - 58t^{2}\eta^{2}\right)l^{5}
\end{aligned}
\tag{1}
$$
where x and y are the abscissa and ordinate in the plane rectangular coordinate system; L and B are the longitude and latitude in the ellipsoidal geographic coordinate system; l = L - L0 is the longitude offset from the central meridian and t = tan B; s is the meridian arc length from the equator; and N and eta are the radius of curvature in the prime vertical and an intermediate variable, respectively, computed as:
$$
N = \frac{a}{\sqrt{1 - e^{2}\sin^{2}B}}, \qquad \eta = e'\cos B
\tag{2}
$$
where a is the semi-major axis of the ellipsoid, and e and e' are its first and second eccentricities, respectively;
and secondly, performing adjacent pairwise matching on the images corrected by the geographic information and the attitude parameters, and selecting a local key splicing area of the feature points based on the local pixel variation maximum value, wherein the key splicing area selection method comprises the following steps:
$$
(x^{*},\,y^{*}) = \arg\max_{(x,y)\in I_{i}\cap I_{i+1}}\left(\left|\frac{\partial I}{\partial x}\right| + \left|\frac{\partial I}{\partial y}\right|\right)
\tag{3}
$$
where x and y represent the pixel coordinates in the width and height directions, respectively, I represents the gray level of the image, and I_i ∩ I_{i+1} represents the overlapping area of the i-th and (i+1)-th images, whose overlap rate is controlled at 50% by the flight control platform; the height of the key splicing region is 50% of the image height, namely 1842 pixels; as for its width, the road-width edge in the analyzed images is inclined at 4-6 degrees, occupying 125-190 pixels across half the image, so 150 pixels are used for automatic frame selection; because site conditions such as wind cause lateral image offsets within 100 pixels, 100 pixels are added on each side of adjacent images during frame selection to guarantee the selected target area, i.e., the key splicing region is 300 pixels wide in the preceding image and 500 pixels wide in the following image, ensuring the feature-point matching effect within the key splicing region;
after the key splicing region has been selected, feature points are matched within the key region based on ORB features, the matching method being as follows: any pixel of the image is taken as a center and a circle of fixed radius is drawn on the image; the gray values of the pixels crossed by the circular arc are collected and compared with the gray value of the central pixel, and the number of gray differences exceeding a set threshold is counted as the basis for judging whether the central pixel is a candidate feature point; the radius of the circular template is 3 pixels, and the point p to be detected is compared with the 16 surrounding pixels forming the circle to judge whether enough pixels differ from p in attribute, in which case p is a corner point; in the gray image, the algorithm compares the gray value of each such point with that of p, and if n consecutive pixels are all brighter or all darker than p, then p is a corner point, with n = 9; then N point pairs are selected around the keypoint p in a certain pattern and their comparison results are combined into a descriptor: with keypoint p as center and d as radius, a circle O is drawn, N point pairs are selected within O in a certain pattern with N = 512, and a two-dimensional coordinate system is established with the keypoint as origin and the line connecting the keypoint to the centroid of the sampling region as the X axis; two points are successfully matched when their similarity exceeds a threshold;
and thirdly, optimal suture-line search between adjacent images and a segmented weighted fusion algorithm at the image boundary are performed iteratively, based on the principle of minimum color and geometric error and on a weighted average over the overlapping region, eliminating splicing blur and ghosting to obtain the final panoramic spliced image; the optimal suture line is searched by first differencing the overlapping parts of the two images according to the principle of minimum color and geometric error to generate a difference image, then applying the idea of dynamic programming to the difference image, establishing a suture line starting from each pixel of the first row of the overlapping area, and finally selecting the optimal suture line among them, with the specific steps: initializing one suture line per column of the first row, setting its intensity value to the criterion value of that point and its current point to its column value; extending each suture line downward row by row until the last row, by comparing the criterion values of the 3 pixels in the next row adjacent to the current point, taking the pixel giving the minimum intensity value as the extension direction, updating the suture line's intensity value to that minimum, and updating its current point to the column of the chosen pixel; and selecting as the optimal suture line the one with the minimum intensity value among all suture lines, so that the picture input to the model conforms to the size of the pictures input during training; the target optimization function of the optimal suture line is specifically:
$$E(x,y) = E_{\text{color}}(x,y)^{2} + E_{\text{geometry}}(x,y) \tag{4}$$

$$S_{x} = \begin{bmatrix} -1 & 0 & 1\\ -2 & 0 & 2\\ -1 & 0 & 1 \end{bmatrix},\qquad S_{y} = \begin{bmatrix} -1 & -2 & -1\\ 0 & 0 & 0\\ 1 & 2 & 1 \end{bmatrix} \tag{5}$$

$$E_{\text{color}} = \Delta I_{i} = I_{i+1} - I_{i} \tag{6}$$

$$E_{\text{geometry}} = \left(S_{x} \otimes \Delta I_{i}\right)\cdot\left(S_{y} \otimes \Delta I_{i}\right) \tag{7}$$

where E is the objective optimization function of the optimal suture line, E_color the difference between the color values of overlapping pixels in the two original images, and E_geometry the structural difference of those overlapping pixels; S_x and S_y are the Sobel gradient operators, I_i and I_{i+1} the two adjacent images, and ⊗ denotes convolution;
the image boundary segmentation weighting fusion algorithm specifically comprises the following steps:
$$
f(x,y) =
\begin{cases}
f_{1}(x,y), & (x,y)\in f_{1}\setminus R\\
d_{1}(x,y)\,f_{1}(x,y) + d_{2}(x,y)\,f_{2}(x,y), & (x,y)\in R\\
f_{2}(x,y), & (x,y)\in f_{2}\setminus R
\end{cases}
\tag{8}
$$

where (x, y) ∈ R denotes a pixel in the key splicing region R; f(x, y) represents the weighted-fused image; f_i(x, y) represents the i-th original image, with i = 1, 2 indexing the two consecutive adjacent images to be spliced; d_i(x, y) represents a segmented weighting coefficient that varies with pixel position, changing linearly along the height direction of the image with values in the range 0-1; and h is the image height of the key splicing region.
CN201910292872.2A 2019-04-12 2019-04-12 High-efficiency road construction site panoramic image splicing method based on unmanned aerial vehicle Active CN110033411B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910292872.2A CN110033411B (en) 2019-04-12 2019-04-12 High-efficiency road construction site panoramic image splicing method based on unmanned aerial vehicle


Publications (2)

Publication Number Publication Date
CN110033411A CN110033411A (en) 2019-07-19
CN110033411B true CN110033411B (en) 2021-01-12

Family

ID=67238177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910292872.2A Active CN110033411B (en) 2019-04-12 2019-04-12 High-efficiency road construction site panoramic image splicing method based on unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN110033411B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110569927A (en) * 2019-09-19 2019-12-13 浙江大搜车软件技术有限公司 Method, terminal and computer equipment for scanning and extracting panoramic image of mobile terminal
CN110796734B (en) * 2019-10-31 2024-01-26 中国民航科学技术研究院 Airport clearance inspection method and device based on high-resolution satellite technology
SG10201913798WA (en) * 2019-12-30 2021-07-29 Sensetime Int Pte Ltd Image processing method and apparatus, and electronic device
CN111680703B (en) * 2020-06-01 2022-06-03 中国电建集团昆明勘测设计研究院有限公司 360-degree construction panorama linkage positioning method based on image feature point detection and matching
CN112308774A (en) * 2020-09-15 2021-02-02 北京中科遥数信息技术有限公司 Unmanned aerial vehicle-based map reconstruction method and system, transmission equipment and storage medium
CN112184662B (en) * 2020-09-27 2023-12-15 成都数之联科技股份有限公司 Camera external parameter initial method and system applied to unmanned aerial vehicle image stitching
CN112215304A (en) * 2020-11-05 2021-01-12 珠海大横琴科技发展有限公司 Gray level image matching method and device for geographic image splicing
CN112907452A (en) * 2021-04-09 2021-06-04 长春理工大学 Optimal suture line searching method for image stitching
CN113286081B (en) * 2021-05-18 2023-04-07 中国民用航空总局第二研究所 Target identification method, device, equipment and medium for airport panoramic video
CN117687426A (en) * 2024-01-31 2024-03-12 成都航空职业技术学院 Unmanned aerial vehicle flight control method and system in low-altitude environment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103426153A (en) * 2013-07-24 2013-12-04 广州地理研究所 Unmanned aerial vehicle remote sensing image quick splicing method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916452B (en) * 2010-07-26 2012-04-25 中国科学院遥感应用研究所 Method for automatically stitching unmanned aerial vehicle remote sensing images based on flight control information
CN106023086B (en) * 2016-07-06 2019-02-22 中国电子科技集团公司第二十八研究所 A kind of aerial images and geodata joining method based on ORB characteristic matching
CN107808362A (en) * 2017-11-15 2018-03-16 北京工业大学 A kind of image split-joint method combined based on unmanned plane POS information with image SURF features
CN109389555B (en) * 2018-09-14 2023-03-31 复旦大学 Panoramic image splicing method and device


Also Published As

Publication number Publication date
CN110033411A (en) 2019-07-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant