CN117032276A - Bridge detection method and system based on binocular vision and inertial navigation fusion unmanned aerial vehicle


Info

Publication number: CN117032276A (granted as CN117032276B)
Application number: CN202310810319.XA
Authority: CN (China)
Inventors: 谢海波 (Xie Haibo), 王培玉 (Wang Peiyu), 朱玮峻 (Zhu Weijun)
Assignee: Changsha University of Science and Technology
Legal status: Active (granted)


Classifications

    • Y — General tagging of new technological developments; cross-sectional technologies
    • Y02 — Technologies or applications for mitigation or adaptation against climate change
    • Y02T — Climate change mitigation technologies related to transportation
    • Y02T 10/00 — Road transport of goods or passengers
    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 — Engine management systems


Abstract

The invention discloses a bridge detection method and system based on an unmanned aerial vehicle that fuses binocular vision and inertial navigation. The method comprises: calculating the real-time position of the unmanned aerial vehicle from images acquired by a binocular vision sensor and pose parameters output by an inertial measurement device; determining control points based on the real-time position of the unmanned aerial vehicle; determining a detection area from the control points and planning a flight path from the detection area and the control points; and controlling the unmanned aerial vehicle to reach the starting point, fly along the flight path, and acquire bridge-bottom images with a high-definition camera during the flight, thereby realizing bridge appearance detection. By substituting the binocular vision sensor and the inertial measurement device for the unmanned aerial vehicle's GNSS, the invention realizes positioning and navigation of the unmanned aerial vehicle, improves its positioning accuracy and stability, and solves the problems that weak GNSS signals at the bottom of a bridge prevent the unmanned aerial vehicle from completing path planning and automatic cruising and make manual operation difficult.

Description

Bridge detection method and system based on binocular vision and inertial navigation fusion unmanned aerial vehicle
Technical Field
The invention belongs to the technical field of bridge detection, and particularly relates to a bridge detection method and system based on an unmanned aerial vehicle fusing binocular vision and inertial navigation.
Background
Daily inspection and periodic detection of bridges are among the most important and labor-intensive tasks of maintenance departments. At present, periodic bridge detection mainly uses a bridge inspection vehicle as a mobile platform: defects are searched for and recorded manually from the platform, and the data are then organized into a report. Existing bridge inspection vehicles generally suffer from the following drawbacks: they occupy a lane and affect traffic; the large cantilever structure has poor safety; they cannot be used on cable-supported bridges; bridge railings hinder operation; multiple workers must operate at height; detection efficiency is low; cost is high; results are strongly affected by human factors; and the degree of automation is low.
As the number of highway and railway bridges in China continues to increase, existing inspection vehicles can no longer meet the requirements of periodic bridge detection, let alone the requirements of digital health monitoring over the whole life cycle of a bridge.
With the gradual adoption of unmanned aerial vehicles in various industries, the unmanned aerial vehicle has become a flying production tool. Its positioning and navigation depend heavily on global navigation satellite systems (GNSS), but GNSS signals at the bottom of a bridge are weak or absent, so the unmanned aerial vehicle cannot complete path planning and automatic cruising there; data can only be acquired by manual piloting, which is difficult, and the quality of the acquired images cannot be guaranteed. Meanwhile, over water, an unmanned aerial vehicle flying at the bottom of a bridge is prone to sudden altitude loss or even crashing. For these reasons, unmanned aerial vehicles have not yet been applied to bridge detection on a large scale.
Disclosure of Invention
The invention aims to provide a bridge detection method and system based on an unmanned aerial vehicle fusing binocular vision and inertial navigation, to solve the problems of the traditional detection technology that weak GNSS signals at the bottom of a bridge prevent the unmanned aerial vehicle from completing path planning and automatic cruising, that manual operation is difficult and the quality of the acquired images cannot be guaranteed, and that an unmanned aerial vehicle flying at the bottom of a bridge is prone to sudden altitude loss or even crashing.
The invention solves the above technical problems by the following technical scheme: a bridge detection method based on an unmanned aerial vehicle fusing binocular vision and inertial navigation, wherein a gimbal, an inertial measurement device and a binocular vision sensor are arranged on the unmanned aerial vehicle, a high-definition camera is mounted on the gimbal, and the lenses of the binocular vision sensor and the high-definition camera point vertically upward; the detection method comprises the following steps:
calculating the real-time position of the unmanned aerial vehicle from the images acquired by the binocular vision sensor and the pose parameters output by the inertial measurement device; the real-time position is expressed in a sensor coordinate system, a left-handed coordinate system whose origin is the position of the binocular vision sensor at start-up and whose X axis is the length direction of the bridge;
determining control points based on the real-time position of the unmanned aerial vehicle;
determining a detection area according to the control points, or according to the control points and bridge parameters, and planning a flight path according to the detection area and the control points;
and controlling the unmanned aerial vehicle to reach a control point, controlling the unmanned aerial vehicle to fly automatically along the flight path, and acquiring bridge-bottom images with the high-definition camera during the automatic flight, thereby realizing bridge detection.
Further, calculating the real-time position of the unmanned aerial vehicle from the images acquired by the binocular vision sensor and the pose parameters output by the inertial measurement device specifically comprises the following steps:
acquiring a left-eye image from the left-eye camera and a right-eye image from the right-eye camera of the binocular vision sensor;
performing distortion correction on the left-eye and right-eye images with the camera intrinsic parameters of the binocular vision sensor, and extracting FAST feature points from the corrected images;
adding an orientation to each FAST feature point, and generating BRIEF descriptors of the feature points from the oriented FAST feature points;
matching the FAST feature points of the left-eye and right-eye images based on the BRIEF descriptors of the feature points, to obtain feature matching pairs;
eliminating mismatched feature pairs to obtain a feature-pair set formed by all remaining feature matching pairs;
calculating the camera pose parameters in the sensor coordinate system from the feature matching pairs in the feature-pair set;
acquiring acceleration and angle information of the unmanned aerial vehicle with the inertial measurement device, and from these deriving the speed and position of the unmanned aerial vehicle;
and correcting the positioning and flight path of the unmanned aerial vehicle with the camera pose parameters and the speed and position information of the unmanned aerial vehicle, and reconstructing the real-time position of the unmanned aerial vehicle in the sensor coordinate system.
Further, while distortion correction is performed on the left-eye and right-eye images with the camera intrinsic parameters of the binocular vision sensor, the detection method further comprises:
calculating a left-eye gray-level histogram from the left-eye image, and a right-eye gray-level histogram from the right-eye image;
when the maximum peak of the left-eye or right-eye gray-level histogram is smaller than a first critical value, the corresponding image is too dark and the exposure is increased; when the maximum peak of the left-eye or right-eye gray-level histogram is larger than a second critical value, the corresponding image is too bright and the exposure is reduced; wherein the second critical value is greater than the first critical value.
Preferably, the first critical value is 100 and the second critical value is 135.
Further, calculating the camera pose parameters in the sensor coordinate system from the feature matching pairs in the feature-pair set specifically comprises the following steps:
calculating, from the pixel coordinates in the left-eye and right-eye images of the FAST feature points of each feature matching pair in the feature-pair set, the spatial point coordinates in the sensor coordinate system, thereby obtaining a three-dimensional point cloud;
and calculating the camera pose parameters in the sensor coordinate system from the three-dimensional point cloud.
Further, the Z coordinate of the control points is determined by the minimum distance between the unmanned aerial vehicle and the bridge bottom, which is calculated as:
d = min(d_L, d_W);
d_L = f × S_L / SS_L, d_W = f × S_W / SS_W;
S_L = γ × a_L × (1 − ol), S_W = γ × a_W × (1 − ol);
where d is the minimum distance; f is the focal length of the high-definition camera; SS_L and SS_W are the length and width of the high-definition camera's photosensitive element; S_L and S_W are the shooting length and width of a single image; γ is the bridge detection precision; a_L and a_W are the resolution of the high-definition camera in the length and width directions; and ol is the image overlap rate.
Further, determining the detection area according to the control points, or according to the control points and bridge parameters, and planning the flight path according to the detection area and the control points specifically comprises:
for a T-beam or small box-beam bridge, the control points comprise a starting point; N flight line segments of length L and spacing D are determined from the number N of T beams, the T-beam length L and the spacing D between T beams; the two endpoints of the N flight line segments form the flight path point set, and the starting point is a point in that set. Alternatively, for a T-beam or small box-beam bridge, the control points comprise a starting point and an ending point; the length L and width W of the detection area are calculated from the coordinates of the starting and ending points, the number N of flight line segments is determined from the detection area width W, the image overlap rate ol and the shooting width S_W of a single image, the two endpoints of the N flight line segments form the flight path point set, and the starting and ending points are points in that set; the length of each flight line segment equals the length L of the detection area;
for a uniform-section bridge whose single-span length is smaller than the set bridge length, the control points comprise a starting point and an ending point; the length L and width W of the detection area are calculated from the coordinates of the starting and ending points, the number N of flight line segments is determined from the detection area width W, the image overlap rate ol and the shooting width S_W of a single image, the two endpoints of the N flight line segments form the flight path point set, and the starting and ending points are points in that set; the length of each flight line segment equals the length L of the detection area;
for a uniform-section, variable-section or continuous bridge, the control points comprise a starting point, an ending point and an intermediate point; the X coordinate of the intermediate point equals that of the starting point, and the Y coordinate of the intermediate point equals that of the ending point; the length L and width W of the detection area are calculated from the coordinates of the starting and ending points, the number N of flight line segments is determined from the detection area width W, the image overlap rate ol and the shooting width S_W of a single image, the two endpoints of the N flight line segments form the flight path point set, and the starting point, ending point and intermediate point are points in that set;
for a variable-section bridge, each flight line segment is an oblique line in the XZ plane, and its slope is calculated as follows: the Z-coordinate change and the X-coordinate change are calculated from the intermediate point and the ending point, and the slope is calculated from the Z-coordinate change and the X-coordinate change; the Z coordinate of each endpoint is then determined from the slope and the X coordinate of that endpoint;
for an upper-bearing arch bridge, the control points comprise a starting point, an ending point, a first intermediate point and a second intermediate point; the first and second intermediate points are located at the two ends of the arch crown, at a distance from the crown equal to the minimum distance; the length L and width W of the detection area are calculated from the coordinates of the starting and ending points, the number N of flight line segments is determined from the detection area width W, the image overlap rate ol and the shooting width S_W of a single image, the two endpoints of the N flight line segments form the flight path point set, and the starting and ending points are points in that set;
for an upper-bearing arch bridge, each flight line segment is a curve in the XZ plane, and its functional expression is determined as follows: a fitting function of the curve is constructed, and the coefficients of the fitting function are solved by the least-squares method from the coordinates of the control points, yielding the functional expression of the curve; the Z coordinate of each endpoint is then determined from the functional expression of the curve and the X coordinate of that endpoint.
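As an illustration of the least-squares fitting described above for the arch-bridge case, the following sketch fits a quadratic curve through four arch control points and evaluates endpoint Z coordinates. The quadratic model, the NumPy usage and all coordinates are illustrative assumptions; the invention does not fix a particular fitting function.

```python
# Hedged sketch: fit z = c0 + c1*x + c2*x^2 through the arch-bridge control
# points by least squares, then evaluate Z at path-point X coordinates.
# The quadratic model and all values below are illustrative assumptions.
import numpy as np

def fit_arch_curve(control_points_xz, degree=2):
    """control_points_xz: (x, z) of the starting, intermediate and ending points."""
    xs = np.array([p[0] for p in control_points_xz], dtype=float)
    zs = np.array([p[1] for p in control_points_xz], dtype=float)
    coeffs = np.polyfit(xs, zs, degree)   # least-squares solution of the coefficients
    return np.poly1d(coeffs)

# Usage with assumed coordinates for O1, O2, O3, O4 as in fig. 6
curve = fit_arch_curve([(0.0, 5.0), (12.0, 9.0), (28.0, 9.0), (40.0, 5.0)])
for x_endpoint in (0.0, 10.0, 20.0, 30.0, 40.0):
    print(f"x = {x_endpoint:5.1f} -> z = {curve(x_endpoint):.2f}")
```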
Further, during the flight, the shooting points of the bridge-bottom images comprise the endpoints of each flight line segment in the flight path and shooting points on the flight line segments determined by a time interval.
Further, the time interval is calculated as:
t_d = SS_L × d × (1 − ol) / (f × S);
where t_d is the time interval; SS_L is the length of the high-definition camera's photosensitive element; d is the minimum distance; f is the focal length of the high-definition camera; ol is the image overlap rate; and S is the flight speed.
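The reconstructed interval formula evaluates directly; the sketch below is a minimal reading of it, with the camera and flight values assumed for illustration.

```python
def shooting_interval(ss_l_mm, d_m, overlap, f_mm, speed_mps):
    """t_d = SS_L * d * (1 - ol) / (f * S): seconds between shots so that
    consecutive images along the flight direction overlap by `overlap`."""
    ground_length = ss_l_mm * d_m / f_mm       # footprint length of one image (m)
    advance = ground_length * (1.0 - overlap)  # new ground covered per shot (m)
    return advance / speed_mps                 # time between shots (s)

# Assumed example: 36 mm sensor length, 4 m stand-off distance,
# 70 % image overlap, 35 mm lens, 0.5 m/s cruise speed.
print(f"t_d = {shooting_interval(36.0, 4.0, 0.7, 35.0, 0.5):.2f} s")
```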
Further, the detection method further comprises: during the automatic flight, calculating the remaining distance from the real-time position of the unmanned aerial vehicle and the starting and ending points among the control points, and controlling the flight speed of the unmanned aerial vehicle according to the remaining distance.
Further, the remaining distance is calculated as:
Dr_x = D_x − C_x, Dr_y = D_y − C_y, Dr_z = D_z − C_z;
D_x = |x_2 − x_1|, D_y = |y_2 − y_1|, D_z = |z_2 − z_1|;
where Dr_x, Dr_y and Dr_z are the remaining distances along the X, Y and Z axes of the sensor coordinate system; D_x, D_y and D_z are the distances the unmanned aerial vehicle must move along the X, Y and Z axes in the sensor coordinate system; C_x, C_y and C_z are the real-time position of the unmanned aerial vehicle; x_1, y_1 and z_1 are the coordinates of the starting point in the sensor coordinate system; and x_2, y_2 and z_2 are the coordinates of the ending point in the sensor coordinate system.
Further, controlling the flight speed of the unmanned aerial vehicle according to the remaining distance covers two cases during the automatic flight: with and without altitude change.
For the case without altitude change:
when the remaining distance in the XY plane of the sensor coordinate system is greater than a critical value S_0, the unmanned aerial vehicle is controlled to fly at 0.5 m/s in the XY plane;
when the remaining distance in the XY plane is within the critical value S_0, the speed of the unmanned aerial vehicle in the sensor coordinate system is controlled to decay from 0.5 m/s to 0 m/s at a speed of 0.5 − 0.1β, and the unmanned aerial vehicle hovers; where β is the attenuation coefficient;
when the remaining distance reaches zero, the unmanned aerial vehicle is controlled to hover.
For the case with altitude change, while the unmanned aerial vehicle is controlled to fly at the corresponding speed in the XY plane of the sensor coordinate system, its flight speed S_z along the Z axis of the sensor coordinate system is controlled as:
S_z = 0.5 × D_z / D_x;
where Dr_x, Dr_y and Dr_z are the remaining distances along the X, Y and Z axes of the sensor coordinate system, and D_x, D_y and D_z are the distances the unmanned aerial vehicle must move along the X, Y and Z axes in the sensor coordinate system.
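A hedged sketch of the speed-control logic above, assuming the remaining XY-plane distance is the Euclidean norm of (Dr_x, Dr_y) and a simple linear decay inside the critical radius S_0; the decay law in the published text is ambiguous, so the one used here is a stand-in.

```python
import math

CRUISE = 0.5   # m/s, cruise speed from the text

def xy_speed(dr_x, dr_y, s0, beta=0.8):
    """Planar speed command from the remaining XY distance (assumed Euclidean)."""
    remaining = math.hypot(dr_x, dr_y)
    if remaining == 0.0:
        return 0.0                    # arrived: hover
    if remaining > s0:
        return CRUISE                 # far from target: cruise at 0.5 m/s
    # inside the critical radius: decay toward 0 (linear law is an assumption)
    return CRUISE * beta * remaining / s0

def z_speed(d_x, d_z):
    """S_z = 0.5 * D_z / D_x keeps the climb rate proportional to the slope."""
    return 0.5 * d_z / d_x if d_x else 0.0

print(xy_speed(3.0, 4.0, s0=2.0), xy_speed(0.6, 0.8, s0=2.0), z_speed(30.0, 1.5))
```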
Further, the detection method further comprises stitching all bridge-bottom images into an overall image of the bridge bottom, which specifically comprises the following steps:
extracting the real-time position of the unmanned aerial vehicle and the camera attitude parameters of the high-definition camera at the moment each bridge-bottom image was acquired;
performing SIFT feature extraction on each bridge-bottom image, determining the adjacency between bridge-bottom images from the real-time position of the unmanned aerial vehicle at the moment each image was acquired, and matching the feature points of adjacent bridge-bottom images to obtain feature point matching pairs;
taking all feature point matching pairs and the camera attitude parameters of the high-definition camera as input, performing sparse three-dimensional reconstruction with a structure-from-motion algorithm to obtain the intrinsic parameters, extrinsic parameters and bridge-bottom point cloud corresponding to each bridge-bottom image;
processing the bridge-bottom point cloud with a plane-fitting algorithm to determine the bridge-bottom point cloud plane;
determining the mapping from each bridge-bottom image to the bridge-bottom point cloud plane from the real-time position of the unmanned aerial vehicle at acquisition time, the intrinsic and extrinsic parameters corresponding to each image, and the bridge-bottom point cloud plane;
mapping each bridge-bottom image onto the bridge-bottom point cloud plane with an image perspective transformation algorithm, according to the mapping;
performing exposure compensation on the bridge-bottom images in the point cloud plane so that the brightness of all images is consistent;
and determining the seam between adjacent bridge-bottom images in their overlap area, and eliminating misalignment and artifacts between images at the seam with an image fusion algorithm, thereby achieving seamless stitching between adjacent images and obtaining the overall image of the bridge bottom.
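The stitching chain above maps onto standard computer-vision building blocks. The sketch below shows only the neighbour-matching and perspective-mapping steps using OpenCV SIFT and a RANSAC homography; the homography stands in for the full structure-from-motion and plane-fitting chain of the method, and all names and values are assumptions.

```python
# Hedged sketch of two steps of the stitching chain: SIFT matching between
# adjacent bridge-bottom images and perspective mapping into a common plane.
import cv2
import numpy as np

def match_and_warp(img_a, img_b):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    # Ratio-test matching of SIFT descriptors between the two images
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
             if m.distance < 0.75 * n.distance]
    src = np.float32([kp_a[m.queryIdx].pt for m in pairs]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in pairs]).reshape(-1, 1, 2)
    # RANSAC homography plays the role of the image-to-plane mapping here
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_b.shape[:2]
    # Warp image A into the frame of image B; canvas width doubled to hold both
    return cv2.warpPerspective(img_a, H, (2 * w, h))
```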
Based on the same conception, the invention also provides a bridge detection system based on an unmanned aerial vehicle fusing binocular vision and inertial navigation, comprising an unmanned aerial vehicle and a ground control end in communication connection; a gimbal, an inertial measurement device and a binocular vision sensor are arranged on the unmanned aerial vehicle, a high-definition camera is mounted on the gimbal, and the lenses of the binocular vision sensor and the high-definition camera point vertically upward; the unmanned aerial vehicle further comprises a data processing module and a flight control module;
The binocular vision sensor is used for acquiring a left eye image and a right eye image in the bridge detection process;
the inertial measurement device is used for acquiring pose parameters of the unmanned aerial vehicle in the bridge detection process;
the high-definition camera is used for collecting images of the bottom of the bridge in the automatic flight process of the unmanned aerial vehicle according to the flight path;
the data processing module is used for calculating the real-time position of the unmanned aerial vehicle according to the image acquired by the binocular vision sensor and the pose parameter output by the inertial measurement device; under the control of a ground control end, determining a control point based on the real-time position of the unmanned aerial vehicle; determining a detection area according to the control points or the control points and bridge parameters, and planning a flight path according to the detection area and the control points; acquiring a bridge bottom image acquired by a high-definition camera, and processing the bridge bottom image to realize bridge detection;
the flight control module is used for controlling the unmanned aerial vehicle to automatically fly according to the flight path under the flight instruction;
the ground control end is used for controlling the flight of the unmanned aerial vehicle so as to determine a control point based on the real-time position of the unmanned aerial vehicle; and issuing flight instructions and transmitting bridge parameters to the unmanned aerial vehicle.
Advantageous effects
Compared with the prior art, the invention has the advantages that:
by substituting the binocular vision sensor and the inertial measurement device for the unmanned aerial vehicle's GNSS, the invention realizes positioning and navigation of the unmanned aerial vehicle, improves its positioning accuracy and stability, and solves the problems of the traditional detection technology that weak GNSS signals at the bottom of a bridge prevent the unmanned aerial vehicle from completing path planning and automatic cruising, make manual operation difficult, and prevent high-quality images from being stitched;
when planning the flight path, different flight paths are planned according to the bottom characteristics of different bridges, and the unmanned aerial vehicle cruises automatically along the planned path, realizing automatic bridge detection; no manual operation is needed during cruising, which improves detection efficiency and reduces detection cost;
all bridge-bottom images are stitched seamlessly into high-definition image data of the bridge bottom, and bridge appearance defects are detected from these data; compared with manual detection this greatly improves detection efficiency, provides data support for building an electronic archive of bridge-bottom defects, and realizes high-precision, full-coverage detection of the bridge-bottom appearance.
Drawings
In order to illustrate the technical solutions of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of an unmanned aerial vehicle according to an embodiment of the present application;
Fig. 2 is a flow chart of the bridge detection method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the flight path of a T-beam or small box-beam bridge in an embodiment of the present application;
Fig. 4 is a schematic diagram of the flight path of a uniform-section bridge whose single-span length is less than the set bridge length in an embodiment of the present application;
Fig. 5 is a schematic diagram of the flight path layout of a uniform-section, variable-section or continuous bridge in an embodiment of the present application;
Fig. 6 is a schematic diagram of the flight path layout of an upper-bearing arch bridge in an embodiment of the present application.
Reference numerals: 100 - unmanned aerial vehicle; 101 - high-definition camera; 102 - binocular vision sensor; 103 - data processing module; 104 - power module.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings; obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without inventive effort fall within the protection scope of the present application.
The technical scheme of the application is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
As shown in the schematic structural diagram of fig. 1, a gimbal, an inertial measurement device and a binocular vision sensor 102 are arranged on the unmanned aerial vehicle 100, a high-definition camera 101 is mounted on the gimbal, and the lenses of the binocular vision sensor 102 and the high-definition camera 101 point vertically upward. The unmanned aerial vehicle 100 further comprises a data processing module 103, a flight control module, a power module 104 and a wireless transmission module. The unmanned aerial vehicle 100 communicates wirelessly with a ground control end, so its flight can be controlled through the ground control end, and it can cruise automatically along a flight path.
As shown in fig. 2, the bridge detection method based on the binocular vision and inertial navigation fusion unmanned aerial vehicle provided by the embodiment of the invention comprises the following steps:
step 1: and calculating the real-time position of the unmanned aerial vehicle by using the image acquired by the binocular vision sensor and the pose parameter output by the inertial measurement device.
In the traditional technology, the unmanned aerial vehicle uses GNSS for positioning and navigation. In the bridge detection field, however, GNSS signals at the bridge bottom are weak or absent, so the unmanned aerial vehicle cannot complete path planning and automatic cruising there; it can only be piloted manually from the ground control end, which is difficult, and the quality of the acquired images cannot be guaranteed. To solve these problems, the invention uses a binocular vision sensor and an inertial measurement device to realize real-time positioning and navigation of the unmanned aerial vehicle, eliminating drift in the positioning data and improving positioning and navigation accuracy.
Before the binocular vision sensor is used for positioning and navigation of the unmanned aerial vehicle, the intrinsic parameters of its left-eye and right-eye cameras are first calibrated: the distortion coefficients (c_1, c_2), the optical centers o_l(cx_l, cy_l) and o_r(cx_r, cy_r), the rotation matrix R and translation matrix T of the left-eye camera relative to the right-eye camera, and the focal length f of the left-eye and right-eye cameras. In this embodiment, calibration of the camera parameters uses the prior art.
The real-time position of the unmanned aerial vehicle is expressed in the sensor coordinate system, a left-handed coordinate system whose origin is the position of the binocular vision sensor at start-up and whose X axis is the length direction (or driving direction) of the bridge. When the unmanned aerial vehicle is to be used for bridge detection, it is manually flown via the ground control end to one side of the bridge, and the binocular vision sensor is started and made ready; the position of the binocular vision sensor (i.e., of the unmanned aerial vehicle) at that moment is the coordinate origin, as in the coordinate systems shown in figs. 3-6.
In this embodiment, the real-time position of the unmanned aerial vehicle is calculated by using the image collected by the binocular vision sensor and the pose parameter output by the inertial measurement device, and specifically includes the following steps:
step 1.1: acquiring a left eye image acquired by a left eye camera and a right eye image acquired by a right eye camera of a binocular vision sensor;
step 1.2: performing distortion correction on the left-eye and right-eye images with the camera intrinsic parameters of the binocular vision sensor, and extracting FAST feature points from the corrected images;
step 1.3: adding an orientation to each FAST feature point, and generating BRIEF descriptors of the feature points from the oriented FAST feature points;
step 1.4: based on the BRIEF descriptors of the feature points, computing Hamming distances by XOR, and matching the FAST feature points of the left-eye and right-eye images whose descriptors agree in the most bit positions, thereby obtaining feature matching pairs;
step 1.5: removing mismatched feature pairs with the RANSAC algorithm to obtain the feature-pair set F(p) formed by all remaining feature matching pairs, where p is the number of feature matching pairs in the set;
step 1.6: calculating the camera pose parameters in the sensor coordinate system from the feature matching pairs f_i in the feature-pair set F(p);
step 1.7: acquiring acceleration and angle information of the unmanned aerial vehicle with the inertial measurement device, and from these deriving the speed and position of the unmanned aerial vehicle;
step 1.8: correcting the positioning and flight path of the unmanned aerial vehicle with the camera pose parameters and the speed and position information of the unmanned aerial vehicle, and reconstructing the real-time position of the unmanned aerial vehicle in the sensor coordinate system.
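Steps 1.2-1.5 correspond closely to the standard ORB pipeline (oriented FAST corners with BRIEF descriptors, XOR/Hamming matching, RANSAC rejection). Below is a minimal OpenCV sketch under that assumption; the camera matrices and parameter values are illustrative, not taken from the patent.

```python
# Hedged sketch of steps 1.2-1.5: undistortion, oriented-FAST/BRIEF (ORB)
# extraction, Hamming-distance matching, and RANSAC-based mismatch rejection.
import cv2
import numpy as np

def stereo_feature_pairs(left_img, right_img, K_l, dist_l, K_r, dist_r):
    # Step 1.2: distortion correction with the calibrated intrinsics
    left = cv2.undistort(left_img, K_l, dist_l)
    right = cv2.undistort(right_img, K_r, dist_r)
    # Steps 1.2-1.3: FAST corners with orientation + BRIEF descriptors = ORB
    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(left, None)
    kp_r, des_r = orb.detectAndCompute(right, None)
    # Step 1.4: Hamming (XOR popcount) distance between binary descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)
    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches])
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches])
    # Step 1.5: RANSAC on the fundamental matrix rejects mismatched pairs
    _, inliers = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, 1.0, 0.99)
    keep = inliers.ravel() == 1
    return pts_l[keep], pts_r[keep]   # the feature-pair set F(p)
```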
In step 1.6, calculating the camera pose parameters in the sensor coordinate system from the feature matching pairs f_i in the feature-pair set F(p) specifically comprises the following steps:
step 1.61: calculating, from the pixel coordinates in the left-eye and right-eye images of the FAST feature points of each feature matching pair f_i in F(p), the spatial point coordinates in the sensor coordinate system, thereby obtaining a three-dimensional point cloud;
step 1.62: calculating the camera pose parameters in the sensor coordinate system from the three-dimensional point cloud with the iterative closest point algorithm.
Let the pixel coordinate in the left-eye image of the FAST feature point of a feature matching pair f_i be P_l(x_l, y_l), and its pixel coordinate in the right-eye image be P_r(x_r, y_r). With the baseline b given by the translation matrix T between the two cameras and the focal length f, the corresponding spatial point coordinates P(x, y, z) satisfy:
z = f × b / (x_l − x_r)  (1)
x = z × (x_l − cx_l) / f, y = z × (y_l − cy_l) / f  (2)
The spatial point coordinates P(x, y, z) are solved by combining equations (1) and (2).
In step 1.62, the camera pose parameters comprise the spatial position of the binocular vision sensor in the sensor coordinate system and the camera attitude angles, namely pitch, roll and yaw.
In step 1.7, the basic principle of inertial positioning is Newton's laws of mechanics: an inertial measurement unit (IMU) with a three-axis accelerometer measures the acceleration (a_x, a_y, a_z), from which the speed and position of the measured object are obtained by integration, while a gyroscope measures the angle information.
The acceleration (a_x, a_y, a_z) measured by the inertial measurement unit over the interval 0 to t is integrated to obtain the velocity (V_x, V_y, V_z), where V_0 is the initial velocity after IMU initialization:
V(t) = V_0 + ∫_0^t a(τ) dτ  (3)
The velocity (V_x, V_y, V_z) at time t is then integrated to obtain the offset or position information (T_x, T_y, T_z), where T_0 is the initial position after IMU initialization:
T(t) = T_0 + ∫_0^t V(τ) dτ  (4)
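Equations (3)-(4) amount to double integration of the accelerometer output. A minimal dead-reckoning sketch using rectangular integration; the sample rate and data are assumptions, and the drift it accumulates is exactly the weakness discussed next.

```python
def dead_reckon(accels, dt, v0=(0.0, 0.0, 0.0), p0=(0.0, 0.0, 0.0)):
    """Equations (3)-(4): velocity = v0 + integral of a; position = p0 +
    integral of v. Rectangular integration; error grows with time, which is
    why the method fuses the IMU with the binocular vision sensor."""
    v = list(v0)
    p = list(p0)
    for ax, ay, az in accels:
        for i, a in enumerate((ax, ay, az)):
            v[i] += a * dt        # eq. (3)
            p[i] += v[i] * dt     # eq. (4)
    return tuple(v), tuple(p)

# 1 s of constant 0.2 m/s^2 forward acceleration sampled at 100 Hz (assumed)
v, p = dead_reckon([(0.2, 0.0, 0.0)] * 100, dt=0.01)
print(v, p)   # roughly (0.2, 0, 0) m/s and (0.1, 0, 0) m
```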
Because the unmanned aerial vehicle may move too fast, scene illumination may be insufficient, or the view may be occluded, the images acquired by the binocular vision sensor may suffer motion blur; the overlap between two frames may then be too small and the effective feature points insufficient, so feature matching fails, the positioning error grows, and positioning may even be interrupted.
During short bursts of fast motion, the IMU can effectively estimate the motion and provide fairly reliable positioning data, maintaining good positioning performance even when the scene moves quickly and the surroundings change in complex ways. However, since IMU positioning data are obtained by integration over time, errors slowly accumulate and the position drifts, so the long-term positioning error is large.
Compared with the IMU, the binocular vision sensor provides rich image information and hardly drifts; when the camera moves slowly, the positioning data provided by the binocular vision sensor relieve and correct the drift of the IMU, while in fast-moving, complex scenes the fused IMU positioning data compensate for the weaknesses of binocular visual positioning. A positioning scheme fusing the IMU with the binocular vision sensor therefore improves the stability and accuracy of the positioning data and makes the whole positioning system more robust.
Because the light at the bridge bottom is complex and the difference between the illumination on the inner and outer sides is large, underexposure or overexposure can occur at the bottom plate and the bridge edges. When this happens, the data processing module may fail to detect target feature points in the images acquired by the binocular vision sensor, causing positioning errors. To eliminate such errors, the exposure of the binocular vision sensor is controlled automatically so that the histogram of the acquired images reaches an optimal distribution, improving the robustness of binocular visual positioning. Therefore, while distortion correction is performed on the left-eye and right-eye images with the camera intrinsic parameters of the binocular vision sensor, the detection method further comprises:
Calculating a left-eye gray histogram from the left-eye image, and calculating a right-eye gray histogram from the right-eye image;
when the maximum peak of the left-eye or right-eye gray-level histogram is smaller than the first critical value, the corresponding image is too dark and the exposure is increased; when the maximum peak is larger than the second critical value, the corresponding image is too bright and the exposure is reduced; the second critical value is greater than the first critical value.
In this embodiment, the first critical value is 100 and the second critical value is 135; that is, when the maximum peak of the gray-level histogram of the left-eye or right-eye image lies between 100 and 135, the histogram is considered optimally distributed, which improves positioning accuracy.
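A minimal sketch of this exposure rule, interpreting the "maximum peak" as the gray level at which the histogram peaks and using the critical values 100 and 135 of this embodiment; it returns an adjustment direction rather than driving a real camera API.

```python
import cv2
import numpy as np

LOW, HIGH = 100, 135   # first/second critical values from this embodiment

def exposure_adjustment(gray_img):
    """Return +1 (increase exposure), -1 (decrease) or 0 (keep), based on the
    gray level at which the image histogram has its maximum peak."""
    hist = cv2.calcHist([gray_img], [0], None, [256], [0, 256]).ravel()
    peak_gray = int(np.argmax(hist))   # gray level of the maximum peak
    if peak_gray < LOW:
        return +1                      # image too dark
    if peak_gray > HIGH:
        return -1                      # image too bright
    return 0                           # optimal distribution, no change

# Usage with a synthetic dark image (assumed data)
print(exposure_adjustment(np.full((480, 640), 60, dtype=np.uint8)))  # -> +1
```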
Step 2: a control point is determined.
Before the unmanned aerial vehicle cruises automatically along the flight path, the control points are determined, and the flight path for automatic cruising is planned from them. The control points are determined by manually piloting the unmanned aerial vehicle from the ground control end, and the control points to be determined differ for different types of bridges. When the number of control points is 1, the length and width of the bridge must be input; when the number of control points is 2 or more, the length and width of the bridge need not be input.
For a T-beam or small box-beam bridge, the unmanned aerial vehicle is flown via the ground control end to the lower-right corner of the bridge bottom, and its distance to the bridge bottom is adjusted to equal the minimum distance; a control point in the sensor coordinate system (e.g., point O1 in fig. 3) is determined from the real-time position of the unmanned aerial vehicle computed by the binocular vision sensor combined with the IMU, and the unmanned aerial vehicle hovers at this control point, which serves as the starting point. Since there is only one control point, the spacing D between T beams and the T-beam length L are transmitted to the unmanned aerial vehicle through the ground control end for flight path planning.
The data processing module on the unmanned aerial vehicle determines the detection area from the input spacing D between T beams and T-beam length L, and then plans the flight path from the detection area. To reduce power consumption, automatic cruising starts from the control point at which the unmanned aerial vehicle hovered after control point determination (e.g., point O1 in fig. 3).
In one embodiment of the invention, path planning for a T-beam or small box-beam bridge may also use two control points: the unmanned aerial vehicle is first flown via the ground control end to the lower-right corner of the bridge bottom, its distance to the bridge bottom is adjusted to equal the minimum distance, and one control point in the sensor coordinate system is determined from the real-time position computed by the binocular vision sensor combined with the IMU; the unmanned aerial vehicle then flies along the diagonal to the upper-left corner of the bridge bottom, the distance is again adjusted to the minimum distance, another control point in the sensor coordinate system is determined in the same way, and the unmanned aerial vehicle hovers at that control point.
The data processing module on the unmanned aerial vehicle determines the detection area from these two control points and plans the flight path accordingly. To reduce power consumption, automatic cruising starts from the control point at which the unmanned aerial vehicle hovered after control point determination, with the other control point as the ending point; that is, the control points comprise a starting point and an ending point.
In this embodiment, the lower-right and upper-left corners of the bridge bottom are defined facing the driving direction (the positive X direction), in the rectangular region obtained by projecting the bridge deck downward.
In order to guarantee the sharpness of the bridge-bottom images acquired by the high-definition camera, the minimum distance between the unmanned aerial vehicle and the bridge bottom is calculated from the focal length of the high-definition camera and the size of its photosensitive element, and the Z coordinate (i.e., height) of the control points is determined from this minimum distance:
d = min(d_L, d_W)  (5)
d_L = f × S_L / SS_L  (6)
d_W = f × S_W / SS_W  (7)
S_L = γ × a_L × (1 − ol)  (8)
S_W = γ × a_W × (1 − ol)  (9)
where d is the minimum distance; f is the focal length of the high-definition camera; SS_L and SS_W are the length and width of the high-definition camera's photosensitive element; S_L and S_W are the shooting length and width of a single image; γ is the bridge detection precision; a_L and a_W are the resolution of the high-definition camera in the length and width directions; and ol is the image overlap rate.
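Equations (5)-(9) evaluate directly; a short sketch follows, with the camera parameters (sensor size, resolution, precision and overlap below) assumed for illustration rather than taken from the patent.

```python
def minimum_distance(f_mm, ss_l_mm, ss_w_mm, res_l_px, res_w_px,
                     gamma_m_per_px, overlap):
    s_l = gamma_m_per_px * res_l_px * (1.0 - overlap)   # eq. (8)
    s_w = gamma_m_per_px * res_w_px * (1.0 - overlap)   # eq. (9)
    d_l = f_mm * s_l / ss_l_mm                          # eq. (6)
    d_w = f_mm * s_w / ss_w_mm                          # eq. (7)
    return min(d_l, d_w)                                # eq. (5)

# Assumed example: 35 mm lens, 36 x 24 mm sensor, 8192 x 5460 px,
# 0.2 mm/pixel detection precision, 30 % image overlap.
d = minimum_distance(35.0, 36.0, 24.0, 8192, 5460, 0.0002, 0.3)
print(f"d = {d:.2f} m")
```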
For a uniform-section bridge whose single-span length is smaller than the set bridge length (for example, 20 m), the unmanned aerial vehicle is flown via the ground control end to the lower-right corner of the bridge bottom, its distance to the bridge bottom is adjusted to equal the minimum distance (calculated by equations (5)-(9)), and one control point in the sensor coordinate system (e.g., point O1 in fig. 4) is determined from the real-time position computed by the binocular vision sensor combined with the IMU; the unmanned aerial vehicle then flies along the diagonal to the upper-left corner of the bridge bottom, the distance is again adjusted to the minimum distance (calculated by equations (5)-(9)), another control point in the sensor coordinate system (e.g., point O2 in fig. 4) is determined in the same way, and the unmanned aerial vehicle hovers at that control point.
The data processing module on the unmanned aerial vehicle determines the detection area from these two control points and plans the flight path accordingly. To reduce power consumption, automatic cruising starts from the control point at which the unmanned aerial vehicle hovered after control point determination (e.g., point O2 in fig. 4), with the other control point as the ending point (e.g., point O1 in fig. 4); that is, the control points comprise a starting point and an ending point.
For a uniform-section, variable-section or continuous bridge, the unmanned aerial vehicle is first flown via the ground control end to the lower-right corner of the bridge bottom, its distance to the bridge bottom is adjusted to equal the minimum distance (calculated by equations (5)-(9)), and one control point in the sensor coordinate system (e.g., point O1 in fig. 5) is determined from the real-time position computed by the binocular vision sensor combined with the IMU. The unmanned aerial vehicle is then flown along the driving direction (the bridge length direction) to the upper-right corner of the bridge bottom, the distance is again adjusted to the minimum distance (calculated by equations (5)-(9)), and an intermediate point in the sensor coordinate system (e.g., point O2 in fig. 5) is determined in the same way. The unmanned aerial vehicle is then translated to the upper-left corner of the bridge bottom, the distance is adjusted to the minimum distance (calculated by equations (5)-(9)), another control point in the sensor coordinate system (e.g., point O3 in fig. 5) is determined, and the unmanned aerial vehicle hovers at that control point.
The data processing module on the unmanned aerial vehicle determines the detection area from these three control points and plans the flight path accordingly. To reduce power consumption, automatic cruising starts from the control point at which the unmanned aerial vehicle hovered after control point determination (e.g., point O3 in fig. 5), with the other control point as the ending point (e.g., point O1 in fig. 5); the control points comprise a starting point, an ending point and an intermediate point.
For an upper-bearing arch bridge, the unmanned aerial vehicle is first flown via the ground control end to the lower-right corner of the bridge bottom, its distance to the bridge bottom is adjusted to equal the minimum distance, and one control point in the sensor coordinate system (e.g., point O1 in fig. 6) is determined from the real-time position computed by the binocular vision sensor combined with the IMU. The unmanned aerial vehicle is then flown to below the right side of the arch crown, the distance is adjusted to the minimum distance, and a first intermediate point in the sensor coordinate system (e.g., point O2 in fig. 6, the right crown point) is determined in the same way. Next, the unmanned aerial vehicle is translated to below the left side of the arch crown, the distance is adjusted to the minimum distance, and a second intermediate point in the sensor coordinate system (e.g., point O3 in fig. 6) is determined. Finally, the unmanned aerial vehicle is flown to the upper-left corner, the distance is adjusted to the minimum distance, another control point in the sensor coordinate system (e.g., point O4 in fig. 6) is determined, and the unmanned aerial vehicle hovers at that control point.
The data processing module on the unmanned aerial vehicle determines the detection area from these four control points and plans the flight path accordingly. To reduce power consumption, automatic cruising starts from the control point at which the unmanned aerial vehicle hovered after control point determination (e.g., point O4 in fig. 6), with the other control point as the ending point (e.g., point O1 in fig. 6); the control points comprise a starting point, an ending point, a first intermediate point and a second intermediate point.
Step 3: and determining a detection area according to the control points or the control points and the bridge parameters, and planning a flight path during automatic cruising according to the detection area and the control points.
The number of flight line segments in the flight path, and the segments themselves, differ for different bridge types. In this embodiment, flight paths are planned for four types of bridges:
(1) For a T-beam or small box-beam bridge, when the control points comprise only a starting point, N flight line segments of length L and spacing D are determined from the number N of T beams, the T-beam length L and the spacing D between T beams; the two endpoints of the N flight line segments form the flight path point set M(n), the starting point is a point in M(n) (e.g., point O1 in fig. 3), and n is the number of flight path points in M(n).
The number of flight line segments equals the number N of T beams, the length of each segment equals the T-beam length L, and the spacing between segments equals the spacing D between T beams. Each segment has two endpoints, and the endpoints of the N segments form the flight path point set M(n), so n = 2 × N. The coordinates of every path point in M(n) can then be determined. For example, let the first endpoint of the first flight line segment be the starting point O1 with coordinates (x_1, y_1, z_1); then the second endpoint of the first segment is (x_1 + L, y_1, z_1). Flying along the S-shaped path, the first endpoint of the second segment is (x_1 + L, y_1 − D, z_1), the second endpoint of the second segment is (x_1, y_1 − D, z_1), the first endpoint of the third segment is (x_1, y_1 − 2D, z_1), the second endpoint of the third segment is (x_1 + L, y_1 − 2D, z_1), and so on, giving the coordinates of every path point in M(n) and hence the flight path. To reduce power consumption the flight path is S-shaped. The unmanned aerial vehicle cruises automatically along the flight path and, on reaching the last path point in M(n), hovers and waits to return. The last path point in M(n) serves as the ending point.
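The endpoint coordinates listed above follow a regular serpentine pattern, so the flight path point set M(n) for the single-control-point T-beam case can be generated mechanically; a sketch follows, with the example values assumed.

```python
def t_beam_waypoints(start, beam_length, beam_spacing, n_beams):
    """S-shaped path: one flight line segment per T beam, alternating
    direction, stepping -D in Y between segments (as in fig. 3)."""
    x1, y1, z1 = start
    points = []
    for i in range(n_beams):
        y = y1 - i * beam_spacing
        ends = [(x1, y, z1), (x1 + beam_length, y, z1)]
        if i % 2 == 1:
            ends.reverse()            # serpentine: reverse every other pass
        points.extend(ends)
    return points                      # the set M(n), n = 2 * N

# Assumed example: start O1 at the origin, 30 m beams, 2.4 m spacing, 5 beams
for p in t_beam_waypoints((0.0, 0.0, 0.0), 30.0, 2.4, 5):
    print(p)
```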
For a T-beam or small box-beam bridge, when the control points comprise a starting point and an ending point, the length L and width W of the detection area are calculated from the coordinates of the starting and ending points, and the number N of flight line segments is determined from the detection area width W, the image overlap rate ol and the shooting width S_W of a single image; the two endpoints of the N flight line segments form the flight path point set, and the starting and ending points are points in that set; the length of each flight line segment equals the length L of the detection area.
Among the control points, the starting point is the control point at which the unmanned aerial vehicle hovered after control point determination, and the other control point is the ending point. As shown in fig. 3, this embodiment takes point O1 as the starting point; let the coordinates of the starting point O1 be (x_1, y_1, z_1) and those of the ending point be (x_2, y_2, z_2). The length and width of the detection area are then:
L = |x_2 − x_1|  (10)
W = |y_2 − y_1|  (11)
(2) For bridges with uniform cross section and single span length smaller than the set bridge length (for example, 20 meters), the length L and the width W of the detection area are calculated according to the coordinate value of the control point, and the width W of the detection area, the image overlapping rate ol and the shooting width S of the single image are calculated W And determining the number N of the flight line segments, and forming a flight path point set by two endpoints of the N flight line segments.
The control point comprises a starting point and an ending point, wherein the starting point is a control corresponding to hovering of the unmanned aerial vehicle after the control point is determinedThe control point is the end point. As shown in fig. 4, the present embodiment uses the point O2 as the start point, the point O1 as the end point, and the coordinates of the start point O2 are set as (x 2 ,y 2 ,z 2 ) The termination point O1 has a coordinate of (x 1 ,y 1 ,z 1 ) The length of the detection area is calculated according to equation (10) (i.e., the difference between the X coordinates of the start point and the end point), and the width of the detection area is calculated according to equation (11) (i.e., the difference between the Y coordinates of the start point and the end point). The calculation formula of the number N of the flight line segments is as follows:
When the result of equation (12) is not an integer, it is rounded up to give N, i.e., the detection area is divided into N equal parts. The length of each flight line segment is equal to the length L of the detection area, and the spacing between flight line segments is (1 - ol) × S_W. Each flight line segment has two endpoints, and the endpoints of the N flight line segments form a flight path point set M(n), where n = 2 × N. The coordinates of each path point in M(n) can then be determined. For example, the first endpoint of the first flight line segment is the starting point O2, and the second endpoint of the first flight line segment has coordinates (x_2 - L, y_2, z_2). Flying along the S-shaped path, the first endpoint of the second flight line segment has coordinates (x_2 - L, y_2 + (1 - ol) × S_W, z_2), the second endpoint of the second flight line segment has coordinates (x_2, y_2 + (1 - ol) × S_W, z_2), the first endpoint of the third flight line segment has coordinates (x_2, y_2 + 2(1 - ol) × S_W, z_2), and the second endpoint of the third flight line segment has coordinates (x_2 - L, y_2 + 2(1 - ol) × S_W, z_2). Continuing in this way, the coordinates of every path point in M(n) are obtained, and hence the flight path. The unmanned aerial vehicle cruises automatically along the flight path; on reaching the last path point in M(n) (i.e., the termination point O1), it hovers and waits to return.
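The waypoint bookkeeping above is mechanical and can be expressed compactly in code. The following Python sketch is an illustration only; the function and variable names are our own, not from the embodiment. It generates the serpentine flight path point set M(n) of case (2); case (1) is covered by the same routine if the T-beam length is used for L and the lane spacing (1 - ol) × S_W is replaced by the T-beam spacing D:

import math

def serpentine_waypoints(start, L, W, ol, S_w):
    # start: (x, y, z) of the starting point, e.g. O2 in fig. 4
    # L, W : length and width of the detection area
    # ol   : image overlap rate; S_w: shooting width of a single image
    spacing = (1.0 - ol) * S_w              # lane spacing between segments
    N = math.ceil(W / spacing)              # formula (12), rounded up
    x0, y0, z0 = start
    waypoints = []
    for i in range(N):
        y = y0 + i * spacing
        near, far = (x0, y, z0), (x0 - L, y, z0)
        # reverse every other segment so the endpoints trace an S shape
        waypoints += [near, far] if i % 2 == 0 else [far, near]
    return waypoints                        # n = 2 * N path points

# illustrative values: O2 = (30, 0, 5), L = 30 m, W = 12 m, ol = 0.3, S_w = 2 m
print(serpentine_waypoints((30.0, 0.0, 5.0), 30.0, 12.0, 0.3, 2.0))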
(3) For a uniform cross-section bridge, a variable cross-section bridge, or a continuous bridge, the length L and the width W of the detection area are calculated from the coordinates of the starting and termination points, the number N of flight line segments is determined from the detection area width W, the image overlap rate ol, and the shooting width S_W of a single image, and the two endpoints of the N flight line segments form the flight path point set.
The control points comprise a starting point, a termination point, and an intermediate point; the X coordinate of the intermediate point is identical to the X coordinate of the starting point, and its Y coordinate is identical to the Y coordinate of the termination point. The starting point is the control point at which the unmanned aerial vehicle hovers after the control points are determined, and the other control point is the termination point. As shown in fig. 5, this embodiment takes point O3 as the starting point, with coordinates (x_3, y_3, z_3), point O1 as the termination point, with coordinates (x_1, y_1, z_1), and point O2 as the intermediate point, with coordinates (x_2, y_2, z_2), where x_2 = x_3 and y_2 = y_1. The length and width of the detection area are calculated according to equations (10) and (11), and the number N of flight line segments according to equation (12).
For a bridge with a uniform cross-section, the Z coordinate of every path point in the flight path point set M(n) is unchanged, i.e., z_1 = z_2 = z_3, and the coordinates of each path point are determined in the same manner as in case (2).
For a variable cross-section bridge, the Z coordinate of the path points changes: in the XZ plane, each flight line segment is an oblique line (as shown in fig. 5), so the slope of that line must be calculated, and the Z coordinate of each path point is then determined from the slope. The slope is calculated as follows: a fitted straight line is obtained from the intermediate point O2 and the termination point O1; the Z coordinate change |z_2 - z_1| and the X coordinate change |x_2 - x_1| are calculated from their coordinates; and the ratio |z_2 - z_1| / |x_2 - x_1| is the slope k. The Z coordinate of each endpoint (or path point) is then determined from the slope k and the X coordinate of that endpoint (or path point).
For a variable cross-section bridge, the X and Y coordinates of each path point in the flight path point set M(n) are determined in the same manner as in case (2) (as shown in fig. 5). Once the X and Y coordinates of a path point are determined, its Z coordinate z_i equals k × x_i, where x_i is the X coordinate of the i-th path point. The coordinates of every path point in M(n) can thus be determined, and the flight path obtained.
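As a worked instance of this rule (with illustrative values, not taken from the embodiment): with intermediate point O2 = (0, 10, 0) and termination point O1 = (30, 10, 3) in the sensor coordinate system, k = |3 - 0| / |30 - 0| = 0.1, so a path point with X coordinate x_i = 12 receives Z coordinate z_i = k × x_i = 1.2.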
Starting from the upper left corner (e.g., point O3, i.e., the n-th path point), the unmanned aerial vehicle follows the corresponding flight line segment (oblique line) of the flight path to the (n-1)-th path point, translates to the (n-2)-th path point, follows the corresponding flight line segment (oblique line) to the (n-3)-th path point, translates to the (n-4)-th path point, and so on, until it reaches the 1st path point (the lower right corner, i.e., point O1) and returns, as shown in fig. 5.
(4) For an upper-bearing arch bridge, the length L and the width W of the detection area are calculated from the coordinates of the starting and termination points, the number N of flight line segments is determined from the detection area width W, the image overlap rate ol, and the shooting width S_W of a single image, and the two endpoints of the N flight line segments form the flight path point set.
The control points comprise a starting point, a termination point, a first intermediate point, and a second intermediate point; the first and second intermediate points lie at the two ends of the arch crown, at a distance from the crown equal to the minimum distance. The starting point is the control point at which the unmanned aerial vehicle hovers after the control points are determined, and the other control point is the termination point. As shown in fig. 6, this embodiment takes point O4 as the starting point, with coordinates (x_4, y_4, z_4), point O1 as the termination point, with coordinates (x_1, y_1, z_1), point O3 as the second intermediate point, with coordinates (x_3, y_3, z_3), and point O2 as the first intermediate point, with coordinates (x_2, y_2, z_2). The length and width of the detection area are calculated according to equations (10) and (11), respectively, and the number N of flight line segments according to equation (12).
As shown in fig. 6, in the XZ plane each flight line segment is a curve, so to determine the flight path the function expression of each curve must be determined. It is determined as follows: a fitting function for the curve is constructed, and its coefficients are solved by the least-squares method from the coordinates of the control points, yielding the function expression of the curve; the Z coordinate of each endpoint (or path point) is then determined from this expression and the X coordinate of that endpoint (or path point).
The expression of the fitting function is:
z = a_0 + a_1·x + a_2·x²   (13)
Substituting the coordinates of the starting point, the termination point, and one of the intermediate points (the first or the second) into equation (13) and solving by the least-squares method gives the coefficients a_0, a_1, a_2 of the fitting function, and hence the function expression of the curve, F(x) = z = a_0 + a_1·x + a_2·x².
For the upper-bearing arch bridge, the X and Y coordinates of each path point in the flight path point set M(n) are determined in the same manner as in case (2) (as shown in fig. 6). Once the X and Y coordinates of a path point are determined, its Z coordinate z_i equals F(x_i), where x_i is the X coordinate of the i-th path point. The coordinates of every path point in M(n) can thus be determined, and the flight path obtained.
Starting from the upper left arch corner (e.g., point O4, i.e., the n-th path point), the unmanned aerial vehicle follows the corresponding flight line segment (curve) of the flight path to the (n-1)-th path point, translates to the (n-2)-th path point, follows the corresponding flight line segment (curve) to the (n-3)-th path point, translates to the (n-4)-th path point, and so on, until it reaches the 1st path point (the lower right arch corner, i.e., point O1) and returns, as shown in fig. 6.
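The least-squares step of formula (13) is a standard quadratic fit. A minimal Python sketch (NumPy assumed; function and variable names are illustrative, not from the embodiment), taking the control points in the XZ plane:

import numpy as np

def fit_arch_profile(points_xz):
    # points_xz: (x, z) pairs of the control points used in formula (13),
    # e.g. the starting point, termination point and one intermediate point
    xs = np.array([p[0] for p in points_xz], dtype=float)
    zs = np.array([p[1] for p in points_xz], dtype=float)
    A = np.vstack([np.ones_like(xs), xs, xs ** 2]).T   # columns 1, x, x^2
    (a0, a1, a2), *_ = np.linalg.lstsq(A, zs, rcond=None)
    return lambda x: a0 + a1 * x + a2 * x ** 2         # F(x) = z

# illustrative control points O4, O3, O1 in the XZ plane
F = fit_arch_profile([(0.0, 5.0), (5.0, 8.0), (40.0, 5.0)])
z_i = F(20.0)   # Z coordinate of the path point whose X coordinate is 20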
Step 4: the unmanned aerial vehicle is controlled to reach the starting point and then to fly automatically along the flight path, acquiring bridge bottom images with the high-definition camera during the automatic flight.
After the control points are determined, the unmanned aerial vehicle hovers at a control point. If that hovering control point is taken as the starting point, the vehicle need not be flown to the starting point; if it is not, the vehicle must first be controlled to reach the starting point.
Step 5: during the automatic flight, the shooting points of the high-definition camera are determined.
When the unmanned aerial vehicle cruises automatically along the flight path, the high-definition camera must acquire bridge bottom images during the cruise to enable bridge appearance detection. In this embodiment, the shooting points of the high-definition camera comprise the endpoints of each flight line segment and shooting points on the flight line segment determined by a time interval, calculated as:

t_d = (SS_L × d × (1 - ol)) / (f × S)   (14)

where t_d is the time interval, SS_L is the length of the high-definition camera's photosensitive sensor, d is the minimum distance, f is the focal length of the high-definition camera, ol is the image overlap rate, and S is the flight speed.
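As a worked instance of formula (14) (values illustrative only, not taken from the embodiment): with SS_L = 24 mm, d = 2 m, f = 35 mm, ol = 0.3 and S = 0.5 m/s, the along-track ground footprint of one image is SS_L × d / f ≈ 1.37 m, the advance between consecutive shots is 1.37 × (1 - 0.3) ≈ 0.96 m, and t_d ≈ 0.96 / 0.5 ≈ 1.92 s.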
The black dots in figs. 3 to 6 are the shooting points on the flight paths of the different bridge types.
Step 6: during the automatic flight, the remaining distance is calculated from the real-time position of the unmanned aerial vehicle and the starting and termination points, and the flight speed of the unmanned aerial vehicle is controlled according to the remaining distance.
During automatic cruising, in addition to flying along the flight path, the flight speed of the unmanned aerial vehicle must also be controlled. The flight speed is determined by the remaining distance, calculated as follows:
D_x = |x_2 - x_1|   (15)
D_y = |y_2 - y_1|   (16)
D_z = |z_2 - z_1|   (17)
Dr_x = D_x - C_x   (18)
Dr_y = D_y - C_y   (19)
Dr_z = D_z - C_z   (20)
where Dr_x, Dr_y, Dr_z are the remaining distances along the X, Y, and Z axes of the sensor coordinate system; D_x, D_y, D_z are the distances the unmanned aerial vehicle must travel along the X, Y, and Z axes of the sensor coordinate system; C_x, C_y, C_z is the real-time position of the unmanned aerial vehicle; x_1, y_1, z_1 are the coordinates of the starting point in the sensor coordinate system; and x_2, y_2, z_2 are the coordinates of the termination point in the sensor coordinate system.
The flight speed of the unmanned aerial vehicle is controlled according to the remaining distance, distinguishing two cases: flight with a height change and flight without one. For the case without a height change:
when √(Dr_x² + Dr_y²) > S_0, the unmanned aerial vehicle is controlled to fly at 0.5 m/s in the XY plane of the sensor coordinate system;

when 0 < √(Dr_x² + Dr_y²) ≤ S_0, the unmanned aerial vehicle is controlled to attenuate its speed to 0 m/s according to 0.5 - 0.1β and hover, where β is an attenuation coefficient;

when √(Dr_x² + Dr_y²) = 0, the unmanned aerial vehicle has reached the termination point and is controlled to hover.
When √(Dr_x² + Dr_y²) ≤ S_0, the flight speed of the unmanned aerial vehicle attenuates from 0.5 m/s to 0 according to 0.5 - 0.1β, so that the termination point is reached smoothly; this avoids excessive shaking of the unmanned aerial vehicle and improves its stability.
The critical value S_0 takes values in the range 0.1 to 0.5; in this embodiment, S_0 = 0.5.
For the case with a height change (e.g., a variable cross-section bridge or an upper-bearing arch bridge), while the unmanned aerial vehicle flies at the corresponding speed in the XY plane of the sensor coordinate system, its flight speed S_z along the Z axis of the sensor coordinate system is also controlled, calculated as:
S_z = 0.5 × D_z / D_x   (21)
where Dr_x, Dr_y, Dr_z are the remaining distances along the X, Y, and Z axes of the sensor coordinate system, and D_x, D_y, D_z are the distances the unmanned aerial vehicle must travel along the X, Y, and Z axes of the sensor coordinate system.
For the case with a height change, when √(Dr_x² + Dr_y²) = 0 and Dr_z = 0, the unmanned aerial vehicle has reached the termination point and is controlled to hover.
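The speed rule of this step can be summarized in code. A minimal Python sketch under the embodiment's values (S_0 = 0.5, cruise speed 0.5 m/s); the function and parameter names are our own, not from the embodiment:

import math

def speed_command(dr_x, dr_y, dist_x, dist_z, s0=0.5, beta=1.0):
    # dr_x, dr_y: remaining distances Dr_x, Dr_y in the sensor frame
    # dist_x, dist_z: total distances D_x, D_z (formulas (15), (17))
    r = math.hypot(dr_x, dr_y)
    if r == 0.0:
        v_xy = 0.0                            # termination point reached: hover
    elif r > s0:
        v_xy = 0.5                            # cruise at 0.5 m/s in the XY plane
    else:
        v_xy = max(0.0, 0.5 - 0.1 * beta)     # attenuate toward 0 near the end
    # with a height change, the Z-axis speed follows formula (21)
    v_z = 0.5 * dist_z / dist_x if dist_x else 0.0
    return v_xy, v_z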
Step 7: after the unmanned aerial vehicle has automatically cruised along the flight path, all bridge bottom images are stitched into an overall image of the bridge bottom, realizing bridge detection.
During automatic cruising, the high-definition camera acquires a bridge bottom image at each shooting point; when each image is stored, the real-time position of the unmanned aerial vehicle and the camera pose of the high-definition camera are stored with it. In this embodiment, stitching all bridge bottom images into the overall bridge bottom image specifically comprises the following steps:
step 7.1: and extracting the real-time position of the unmanned aerial vehicle and the camera attitude parameters of the high-definition camera when each bridge bottom image is acquired.
In this embodiment, the camera attitude parameters include a pitch angle, a roll angle, and a yaw angle.
Step 7.2: performing SIFT feature extraction on each bridge bottom image, determining the adjacency relation between bridge bottom images according to the real-time position of the unmanned aerial vehicle when each image was acquired, and matching the feature points of two adjacent bridge bottom images to obtain feature point matching pairs.
If the real-time position of the unmanned aerial vehicle when one frame of bridge bottom image was acquired is P1, and P2 is the acquisition position closest to P1, then the image corresponding to P1 and the image corresponding to P2 are adjacent images. From this adjacency, the sequentially adjacent first, second, third, ... frames are obtained; feature point matching is then performed between the first and second frames, between the second and third frames, and so on, yielding all feature point matching pairs.
When matching feature points between the first and second frames, the associated camera attitude parameters are those recorded for the first and second frames; when matching between the second and third frames, they are those recorded for the second and third frames; and so on, giving all feature point matching pairs and their corresponding camera attitude parameters.
In step 7.2, the RANSAC algorithm is also adopted to remove the feature point matching pairs which are mismatched.
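Step 7.2 and the RANSAC rejection can be prototyped with OpenCV. A sketch under the assumption that OpenCV's SIFT and a homography-based RANSAC are acceptable stand-ins for the embodiment's implementation (parameter values illustrative):

import cv2
import numpy as np

def match_adjacent(img1, img2, ratio=0.75, ransac_thresh=3.0):
    # SIFT keypoints and descriptors for two adjacent bridge-bottom images
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    # Lowe ratio test on 2-nearest-neighbour matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(d1, d2, k=2)
    good = [m for m, n in raw if m.distance < ratio * n.distance]
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])
    # RANSAC homography; the inlier mask removes mismatched pairs
    H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, ransac_thresh)
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers]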
Step 7.3: taking all feature point matching pairs and the camera attitude parameters of the high-definition camera as inputs, sparse three-dimensional reconstruction is performed with a structure-from-motion algorithm to obtain the intrinsic parameters K, extrinsic parameters Ec, and bridge bottom point cloud Bc corresponding to each bridge bottom image.
Sparse three-dimensional reconstruction with a structure-from-motion (SfM) algorithm is known in the art; see Schönberger J L, Frahm J M. Structure-from-Motion Revisited [C]// IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016: 4104-4113. DOI: 10.1109/CVPR.2016.445.
The bridge bottom point cloud restores the three-dimensional structure of the scene from the images, with the matched feature points recovered into that three-dimensional structure.
Step 7.4: the bridge bottom point cloud Bc is processed with a plane-fitting algorithm to determine the bridge bottom point cloud plane P (ax + by + cz + d = 0).
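A plane ax + by + cz + d = 0 can be fitted to a point cloud by least squares via an SVD; the following sketch is one conventional way to realize step 7.4 (names our own; RANSAC outlier handling omitted for brevity):

import numpy as np

def fit_plane(points):
    # points: (n, 3) array of bridge-bottom point cloud coordinates
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # the plane normal is the right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b, c = vt[-1]
    d = -np.dot(vt[-1], centroid)
    return a, b, c, d    # coefficients of ax + by + cz + d = 0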
The bridge bottom point cloud plane is the projection plane; image pixels and plane points are related by the pinhole projection

s·(x, y, 1)^T = K·[R | t]·(X, Y, Z, 1)^T, with K = [[f, 0, c_x], [0, f, c_y], [0, 0, 1]], R = [a_11 ... a_33], t = (t_1, t_2, t_3)^T,

where f denotes the focal length of the high-definition camera, (c_x, c_y) the optical center of the high-definition camera, a_11, ..., a_33 the rotation matrix calculated from the camera pose, and t_1, ..., t_3 the camera translation.
Step 7.5: the mapping relation E from each bridge bottom image to the bridge bottom point cloud plane is determined according to the real-time position of the unmanned aerial vehicle when each bridge bottom image was acquired, the intrinsic parameters K and extrinsic parameters Ec corresponding to each image, and the bridge bottom point cloud plane P (ax + by + cz + d = 0). Concretely, E chains distortion correction with projection onto the plane:

I(x, y, 0) → I_k(x, y, 0) → P(x, y, z),

where I(x, y, 0) is the pixel position of a pixel point in the bridge bottom image, I_k(x, y, 0) is the pixel position of that point after distortion correction of the image, and P(x, y, z) is the location on the bridge bottom point cloud plane to which I_k(x, y, 0) maps.
Step 7.6: according to the mapping relation E, each bridge bottom image is mapped onto the bridge bottom point cloud plane by an image perspective transformation algorithm; the projection takes each pixel point (x, y) of a bridge bottom image to its position (X, Y, Z) in the projection plane.
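Step 7.6 amounts to a perspective warp once the mapping E has been reduced to a 3x3 matrix. A minimal sketch, assuming the reduction of E to the matrix H was already done in steps 7.3 to 7.5 (names illustrative):

import cv2

def project_to_plane(image, H, out_size):
    # image: an undistorted bridge-bottom image
    # H: 3x3 matrix assumed distilled from K, Ec and the plane P
    # out_size: (width, height) of the plane mosaic canvas
    return cv2.warpPerspective(image, H, out_size)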
Step 7.7: exposure compensation is performed on the bridge bottom images on the bridge bottom point cloud plane so that the brightness of all bridge bottom images is consistent.
Because exposure is not fixed, bridge bottom images shot at different moments differ in overall brightness, and direct stitching would show obvious brightness changes; exposure compensation is therefore applied before stitching so that the overall brightness of the different bridge bottom images is consistent.
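One simple form of exposure compensation is per-image gain adjustment so that overlapping regions agree in mean brightness. A crude Python sketch; the pairwise scheme and all names are our own, not the embodiment's exact method (grayscale images and non-empty, non-dark overlaps assumed):

import numpy as np

def gain_compensate(images, overlaps):
    # images: list of numpy arrays already mapped onto the plane
    # overlaps: dict mapping (i, j) to boolean masks (mask_i, mask_j)
    #           of the region the two images share
    gains = np.ones(len(images))
    for (i, j), (mi, mj) in overlaps.items():
        mean_i = images[i][mi].mean()
        mean_j = images[j][mj].mean()
        # nudge the darker image up and the brighter one down
        g = np.sqrt(mean_j / mean_i)
        gains[i] *= g
        gains[j] /= g
    return [np.clip(img * g, 0, 255).astype(np.uint8)
            for img, g in zip(images, gains)]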
Step 7.8: the stitching seam between two adjacent bridge bottom images is determined in their overlapping area with the GraphCut algorithm, and a Laplacian multiband fusion algorithm is applied to the pixels around the seam to eliminate misalignment and artifacts between images, achieving seamless stitching of adjacent images and yielding the overall image of the bridge bottom.
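Laplacian multiband fusion itself is standard. The following sketch blends two plane-mapped images across a seam mask (float32 inputs assumed; the GraphCut seam finding that produces the mask is not shown, and all names are illustrative):

import cv2
import numpy as np

def multiband_blend(img1, img2, mask, levels=4):
    # mask is 1.0 where img1 should dominate, 0.0 for img2;
    # img1, img2, mask are float32 arrays of the same size
    gp1, gp2, gpm = [img1], [img2], [mask]
    for _ in range(levels):                       # Gaussian pyramids
        gp1.append(cv2.pyrDown(gp1[-1]))
        gp2.append(cv2.pyrDown(gp2[-1]))
        gpm.append(cv2.pyrDown(gpm[-1]))
    blended = None
    for l in range(levels, -1, -1):               # coarse to fine
        if l == levels:
            lap1, lap2 = gp1[l], gp2[l]           # coarsest: Gaussian residue
        else:
            size = (gp1[l].shape[1], gp1[l].shape[0])
            lap1 = gp1[l] - cv2.pyrUp(gp1[l + 1], dstsize=size)
            lap2 = gp2[l] - cv2.pyrUp(gp2[l + 1], dstsize=size)
        m = gpm[l][..., None] if lap1.ndim == 3 and gpm[l].ndim == 2 else gpm[l]
        band = lap1 * m + lap2 * (1.0 - m)        # blend this frequency band
        if blended is None:
            blended = band
        else:
            size = (band.shape[1], band.shape[0])
            blended = cv2.pyrUp(blended, dstsize=size) + band
    return np.clip(blended, 0, 255)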
As shown in fig. 1, the embodiment of the invention also provides a bridge detection system based on a binocular vision and inertial navigation fusion unmanned aerial vehicle, which comprises an unmanned aerial vehicle 100 and a ground control end which are in communication connection, wherein a cradle head, an inertial measurement device and a binocular vision sensor 102 are arranged on the unmanned aerial vehicle 100, a high-definition camera 101 is arranged on the cradle head, and lenses of the binocular vision sensor 102 and the high-definition camera 101 are vertically upwards; the unmanned aerial vehicle 100 further comprises a data processing module 103, a flight control module, a power module 104 and a wireless transmission module; the high-definition camera 101, the inertial measurement device and the binocular vision sensor 102 are respectively connected with the data processing module 103, and the data processing module 103 is connected with the flight control module.
Binocular vision sensor 102 for acquiring left and right eye images during bridge inspection.
And the inertial measurement device is used for acquiring pose parameters of the unmanned aerial vehicle in the bridge detection process.
The high-definition camera 101 is used for acquiring the bridge bottom image in the automatic flight process of the unmanned aerial vehicle 100 according to the flight path.
The data processing module 103 is configured to calculate a real-time position of the unmanned aerial vehicle 100 according to the image acquired by the binocular vision sensor 102 and the pose parameter output by the inertial measurement device; under the control of the ground control end, determining a control point based on the real-time position of the unmanned aerial vehicle 100; determining a detection area according to the control points or the control points and bridge parameters, and planning a flight path according to the detection area and the control points; and acquiring the bridge bottom image acquired by the high-definition camera 101, and processing the bridge bottom image to realize bridge detection.
And the flight control module is used for controlling the unmanned aerial vehicle 100 to automatically fly according to the flight path under the flight instruction.
The ground control end is used for controlling the flight of the unmanned aerial vehicle 100 so as to determine a control point based on the real-time position of the unmanned aerial vehicle; and issuing flight instructions and transmitting bridge parameters to the unmanned aerial vehicle 100.
The foregoing disclosure is merely illustrative of specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art will readily recognize that changes and modifications are possible within the scope of the present invention.

Claims (13)

1. The bridge detection method based on the binocular vision and inertial navigation fusion unmanned aerial vehicle is characterized in that a cradle head, an inertial measurement device and a binocular vision sensor are arranged on the unmanned aerial vehicle, a high-definition camera is arranged on the cradle head, lenses of the binocular vision sensor and the high-definition camera are vertically upwards, and the detection method comprises the following steps:
calculating the real-time position of the unmanned aerial vehicle by using the image acquired by the binocular vision sensor and the pose parameter output by the inertial measurement device; the real-time position is based on a sensor coordinate system, wherein the sensor coordinate system is a left-hand coordinate system taking the position of a binocular vision sensor when started as an origin and taking the length direction of a bridge as an X axis;
determining a control point based on the real-time position of the unmanned aerial vehicle;
determining a detection area according to the control points or the control points and bridge parameters, and planning a flight path according to the detection area and the control points;
and controlling the unmanned aerial vehicle to reach a control point, controlling the unmanned aerial vehicle to fly automatically along the flight path, and acquiring bridge bottom images with a high-definition camera during the automatic flight, so as to realize image acquisition for bridge appearance detection.
2. The bridge inspection method according to claim 1, wherein the real-time position of the unmanned aerial vehicle is calculated by using the image acquired by the binocular vision sensor and the pose parameter output by the inertial measurement device, and specifically comprising the following steps:
acquiring a left eye image acquired by a left eye camera and a right eye image acquired by a right eye camera of a binocular vision sensor;
respectively carrying out distortion correction on the left eye image and the right eye image by utilizing camera internal parameters of the binocular vision sensor, and respectively extracting FAST feature points from the images after distortion correction;
adding an orientation to the FAST feature points, and generating BRIEF descriptors of the feature points from the FAST feature points with added orientation;

matching the FAST feature points in the left-eye image and the right-eye image based on the BRIEF descriptors of the feature points to obtain feature matching pairs;
eliminating the feature matching pairs which are mismatched to obtain a feature pair set formed by all the feature matching pairs;
calculating camera pose parameters under a sensor coordinate system according to the feature matching pairs in the feature pair set;
acquiring acceleration and angle information of the unmanned aerial vehicle by using the inertia measurement device, and further acquiring speed and position information of the unmanned aerial vehicle;
And positioning and correcting the flight path of the unmanned aerial vehicle by using the camera pose parameters and the speed and position information of the unmanned aerial vehicle, and reconstructing to obtain the real-time position of the unmanned aerial vehicle under the sensor coordinate system.
3. The bridge inspection method according to claim 2, wherein the inspection method further comprises, while distortion correction is performed on the left-eye image and the right-eye image respectively using camera internal parameters of a binocular vision sensor:
calculating a left-eye gray level histogram according to the left-eye image, and calculating a right-eye gray level histogram according to the right-eye image;
when the maximum peak value of the left-eye gray level histogram or the right-eye gray level histogram is smaller than a first critical value, indicating that the corresponding image is dark, and increasing exposure; when the maximum peak value of the left-eye gray level histogram or the right-eye gray level histogram is larger than a second critical value, the corresponding image is indicated to be brighter, and the exposure is reduced; wherein the second threshold is greater than the first threshold.
4. The bridge inspection method according to claim 2, wherein the step of calculating the camera pose parameters under the sensor coordinate system according to the feature matching pairs in the feature pair set specifically comprises the following steps:
Calculating space point coordinates under a sensor coordinate system according to pixel coordinates of FAST feature points of each feature matching pair in the feature pair set in a left eye image and a right eye image, so as to obtain a three-dimensional point cloud;
and calculating the pose parameters of the camera under the sensor coordinate system according to the three-dimensional point cloud.
5. The bridge inspection method according to claim 1, wherein the Z coordinate of the control point is determined by the minimum distance between the unmanned aerial vehicle and the bridge bottom, the minimum distance being calculated as:

d = min(d_L, d_W);

d_L = f × S_L / SS_L, d_W = f × S_W / SS_W;

S_L = γ × a_L × (1 - ol), S_W = γ × a_W × (1 - ol);

where d is the minimum distance, f is the focal length of the high-definition camera, SS_L and SS_W are respectively the length and width of the high-definition camera's photosensitive sensor, S_L is the shooting length of a single image, γ is the bridge detection accuracy, a_L is the resolution of the high-definition camera along its length, a_W is the resolution of the high-definition camera along its width, ol is the image overlap rate, and S_W is the shooting width of a single image.
6. The bridge inspection method according to claim 1, wherein the detecting area is determined according to the control point or the control point and the bridge parameter, and the flying path is planned according to the detecting area and the control point, specifically comprising:
for a T-beam or small box-beam bridge, the control points comprise a starting point; N flight line segments of length L and spacing D are determined from the number N of T-beams, the T-beam length L, and the spacing D between T-beams, the two endpoints of the N flight line segments form a flight path point set, and the starting point is a point in that set; or, for a T-beam or small box-beam bridge, the control points comprise a starting point and a termination point, the length L and width W of the detection area are calculated from the coordinates of the starting and termination points, the number N of flight line segments is determined from the detection area width W, the image overlap rate ol, and the shooting width S_W of a single image, the two endpoints of the N flight line segments form a flight path point set, and the starting point and termination point are points in that set, wherein the length of each flight line segment is equal to the length L of the detection area;

for a bridge with a uniform cross-section and a single span shorter than the set bridge length, the control points comprise a starting point and a termination point, the length L and width W of the detection area are calculated from the coordinates of the starting and termination points, the number N of flight line segments is determined from the detection area width W, the image overlap rate ol, and the shooting width S_W of a single image, the two endpoints of the N flight line segments form a flight path point set, and the starting point and termination point are points in that set, wherein the length of each flight line segment is equal to the length L of the detection area;

for a uniform cross-section bridge, a variable cross-section bridge, or a continuous bridge, the control points comprise a starting point, a termination point, and an intermediate point, the X coordinate of the intermediate point being identical to the X coordinate of the starting point and its Y coordinate identical to the Y coordinate of the termination point; the length L and width W of the detection area are calculated from the coordinates of the starting and termination points, the number N of flight line segments is determined from the detection area width W, the image overlap rate ol, and the shooting width S_W of a single image, the two endpoints of the N flight line segments form a flight path point set, and the starting point, termination point, and intermediate point are points in that set;

for a variable cross-section bridge, each flight line segment is an oblique line in the XZ plane, its slope being calculated as follows: the Z coordinate change and the X coordinate change are calculated from the intermediate point and the termination point, and the slope is calculated from the Z coordinate change and the X coordinate change; the Z coordinate of each corresponding endpoint is then determined from the slope and the X coordinate of that endpoint;

for an upper-bearing arch bridge, the control points comprise a starting point, a termination point, a first intermediate point, and a second intermediate point, the first and second intermediate points lying at the two ends of the arch crown at a distance from the crown equal to the minimum distance; the length L and width W of the detection area are calculated from the coordinates of the starting and termination points, the number N of flight line segments is determined from the detection area width W, the image overlap rate ol, and the shooting width S_W of a single image, the two endpoints of the N flight line segments form a flight path point set, and the starting point and termination point are points in that set;

for an upper-bearing arch bridge, each flight line segment is a curve in the XZ plane, its function expression being determined as follows: a fitting function of the curve is constructed, its coefficients are solved by the least-squares method from the coordinates of the control points to obtain the function expression of the curve, and the Z coordinate of each corresponding endpoint is determined from that expression and the X coordinate of the endpoint.
7. The bridge inspection method according to claim 1, wherein the shot points of the bridge bottom image include an end point of each flight line segment in the flight path and a shot point on the flight line segment determined by a time interval during the flight.
8. The bridge inspection method according to claim 7, wherein the time interval is calculated as:

t_d = (SS_L × d × (1 - ol)) / (f × S);

where t_d is the time interval, SS_L is the length of the high-definition camera's photosensitive sensor, d is the minimum distance, f is the focal length of the high-definition camera, ol is the image overlap rate, and S is the flight speed.
9. The bridge inspection method according to any one of claims 1 to 8, characterized in that the inspection method further comprises: during the automatic flight, calculating a remaining distance from the real-time position of the unmanned aerial vehicle and the starting and termination points among the control points, and controlling the flight speed of the unmanned aerial vehicle according to the remaining distance.
10. The bridge inspection method according to claim 9, wherein the calculation expression of the remaining distance is:
Dr_x = D_x - C_x, Dr_y = D_y - C_y, Dr_z = D_z - C_z;

D_x = |x_2 - x_1|, D_y = |y_2 - y_1|, D_z = |z_2 - z_1|;

where Dr_x, Dr_y, Dr_z are the remaining distances along the X, Y, and Z axes of the sensor coordinate system; D_x, D_y, D_z are the distances the unmanned aerial vehicle must travel along the X, Y, and Z axes of the sensor coordinate system; C_x, C_y, C_z is the real-time position of the unmanned aerial vehicle; x_1, y_1, z_1 are the coordinates of the starting point in the sensor coordinate system; and x_2, y_2, z_2 are the coordinates of the termination point in the sensor coordinate system.
11. The bridge inspection method of claim 9, wherein controlling the flight speed of the unmanned aerial vehicle according to the remaining distance comprises two cases during the automatic flight: flight with a height change and flight without one;
for the case where there is no height change:
when √(Dr_x² + Dr_y²) > S_0, the unmanned aerial vehicle is controlled to fly at 0.5 m/s in the XY plane of the sensor coordinate system, where S_0 is a critical value;

when 0 < √(Dr_x² + Dr_y²) ≤ S_0, the unmanned aerial vehicle is controlled to attenuate its speed to 0 m/s according to 0.5 - 0.1β and hover, where β is an attenuation coefficient;

when √(Dr_x² + Dr_y²) = 0, the unmanned aerial vehicle is controlled to hover;

for the case where there is a height change, while the unmanned aerial vehicle flies at the corresponding speed in the XY plane of the sensor coordinate system, its flight speed S_z along the Z axis of the sensor coordinate system is also controlled, calculated as:

S_z = 0.5 × D_z / D_x;

where Dr_x, Dr_y, Dr_z are the remaining distances along the X, Y, and Z axes of the sensor coordinate system, and D_x, D_y, D_z are the distances the unmanned aerial vehicle must travel along the X, Y, and Z axes of the sensor coordinate system.
12. The bridge inspection method according to claim 1, further comprising the step of stitching all bridge bottom images to obtain a bridge bottom overall image, specifically comprising:
extracting the real-time position of the unmanned aerial vehicle and the camera attitude parameters of the high-definition camera recorded when each bridge bottom image was acquired;
performing SIFT feature extraction on each bridge bottom image, determining an adjacent relation between the bridge bottom images according to the real-time position of the unmanned aerial vehicle when each bridge bottom image is acquired, and matching feature points of two adjacent bridge bottom images to obtain feature point matching pairs;
taking all the characteristic point matching pairs and camera attitude parameters of the high-definition camera as inputs, and performing sparse three-dimensional reconstruction by utilizing a three-dimensional reconstruction algorithm based on motion to obtain internal parameters, external parameters and bridge bottom point clouds corresponding to each bridge bottom image;
processing the point cloud at the bottom of the bridge by adopting a plane fitting algorithm to determine the plane of the point cloud at the bottom of the bridge;
determining the mapping relation from each bridge bottom image to the bridge bottom point cloud plane according to the real-time position of the unmanned aerial vehicle when each bridge bottom image is acquired, the internal parameters and the external parameters corresponding to each bridge bottom image and the bridge bottom point cloud plane;
According to the mapping relation, mapping each bridge bottom image to a bridge bottom point cloud plane by adopting an image perspective transformation algorithm;
performing exposure compensation on the bridge bottom images of the bridge bottom point cloud plane to ensure that the brightness of all the bridge bottom images is consistent;
and determining a splicing seam of two adjacent bridge bottom images in the overlapping area, and adopting an image fusion algorithm to eliminate dislocation and artifacts between the images at the splicing seam so as to realize seamless splicing between the adjacent images and obtain an integral image of the bridge bottom.
13. The bridge detection system based on the binocular vision and inertial navigation fusion unmanned aerial vehicle is characterized by comprising an unmanned aerial vehicle and a ground control end which are in communication connection, wherein a cradle head, an inertial measurement device and a binocular vision sensor are arranged on the unmanned aerial vehicle, a high-definition camera is arranged on the cradle head, and lenses of the binocular vision sensor and the high-definition camera are vertically upwards; the unmanned aerial vehicle further comprises a data processing module and a flight control module;
the binocular vision sensor is used for acquiring a left eye image and a right eye image in the bridge detection process;
the inertial measurement device is used for acquiring pose parameters of the unmanned aerial vehicle in the bridge detection process;
The high-definition camera is used for collecting images of the bottom of the bridge in the automatic flight process of the unmanned aerial vehicle according to the flight path;
the data processing module is used for calculating the real-time position of the unmanned aerial vehicle according to the image acquired by the binocular vision sensor and the pose parameter output by the inertial measurement device; under the control of a ground control end, determining a control point based on the real-time position of the unmanned aerial vehicle; determining a detection area according to the control points or the control points and bridge parameters, and planning a flight path according to the detection area and the control points; acquiring a bridge bottom image acquired by a high-definition camera, and processing the bridge bottom image to realize bridge detection;
the flight control module is used for controlling the unmanned aerial vehicle to automatically fly according to the flight path under the flight instruction;
the ground control end is used for controlling the flight of the unmanned aerial vehicle so as to determine a control point based on the real-time position of the unmanned aerial vehicle; and issuing flight instructions and transmitting bridge parameters to the unmanned aerial vehicle.
CN202310810319.XA 2023-07-04 Bridge detection method and system based on binocular vision and inertial navigation fusion unmanned aerial vehicle Active CN117032276B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310810319.XA CN117032276B (en) 2023-07-04 Bridge detection method and system based on binocular vision and inertial navigation fusion unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310810319.XA CN117032276B (en) 2023-07-04 Bridge detection method and system based on binocular vision and inertial navigation fusion unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN117032276A true CN117032276A (en) 2023-11-10
CN117032276B CN117032276B (en) 2024-06-25



Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570820A (en) * 2016-10-18 2017-04-19 浙江工业大学 Monocular visual 3D feature extraction method based on four-rotor unmanned aerial vehicle (UAV)
CN106645205A (en) * 2017-02-24 2017-05-10 武汉大学 Unmanned aerial vehicle bridge bottom surface crack detection method and system
CN107179322A (en) * 2017-06-15 2017-09-19 长安大学 A kind of bridge bottom crack detection method based on binocular vision
CN108171787A (en) * 2017-12-18 2018-06-15 桂林电子科技大学 A kind of three-dimensional rebuilding method based on the detection of ORB features
CN108921847A (en) * 2018-08-08 2018-11-30 长沙理工大学 Bridge floor detection method based on machine vision
US20200098103A1 (en) * 2018-09-21 2020-03-26 Chongqing Construction Engineering Group Corporation Limited High-precision Intelligent Detection Method For Bridge Diseases Based On Spatial Position
CN109357663A (en) * 2018-11-21 2019-02-19 陕西高速公路工程试验检测有限公司 Detection System for Bridge
CN109911188A (en) * 2019-03-18 2019-06-21 东南大学 The bridge machinery UAV system of non-satellite navigator fix environment
CN109990778A (en) * 2019-04-11 2019-07-09 株洲时代电子技术有限公司 A kind of bridge pedestal inspection flight course planning method
CN109990777A (en) * 2019-04-11 2019-07-09 株洲时代电子技术有限公司 A kind of bridge bottom surface inspection flight course planning method
US20200363202A1 (en) * 2019-05-17 2020-11-19 Hexagon Technology Center Gmbh Fully automatic position and alignment determination method for a terrestrial laser scanner and method for ascertaining the suitability of a position for a deployment for surveying
JP2021033177A (en) * 2019-08-28 2021-03-01 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd Adapter, imaging apparatus, support mechanism, and moving object
US20220130145A1 (en) * 2019-12-01 2022-04-28 Pointivo Inc. Systems and methods for generating of 3d information on a user display from processing of sensor data for objects, components or features of interest in a scene and user navigation thereon
CN112098326A (en) * 2020-08-20 2020-12-18 东南大学 Automatic detection method and system for bridge diseases
CN114092549A (en) * 2021-06-25 2022-02-25 上海航天控制技术研究所 Dynamic networking cooperative detection system and method
CN113821044A (en) * 2021-07-06 2021-12-21 西北工业大学 Bridge detection unmanned aerial vehicle autonomous navigation and stability control method based on reinforcement learning
CN113971660A (en) * 2021-09-30 2022-01-25 哈尔滨工业大学 Computer vision method for bridge health diagnosis and intelligent camera system
JP2023072355A (en) * 2021-11-12 2023-05-24 株式会社Soken Flying body photographing place determination device, flying body photographing place determination method, and flying body photographing place determination program
CN115345945A (en) * 2022-08-10 2022-11-15 上海托旺数据科技有限公司 Automatic inspection method and system for reconstructing expressway by using multi-view vision of unmanned aerial vehicle
CN115908276A (en) * 2022-10-27 2023-04-04 浙江大学 Bridge apparent damage binocular vision intelligent detection method and system integrating deep learning
CN116188417A (en) * 2023-02-19 2023-05-30 南京理工大学 Slit detection and three-dimensional positioning method based on SLAM and image processing

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DANIEL REAGAN et al.: "Feasibility of using digital image correlation for unmanned aerial vehicle structural health monitoring of bridges", Structural Health Monitoring, 31 December 2018 (2018-12-31), pages 1-17 *
WENLAI MA et al.: "Nonlinear Robust Fault-Tolerant Tracking Control of a Tri-Rotor UAV Against Actuator's Abnormal Behavior", Actuators, 26 March 2023 (2023-03-26), pages 1-16 *
YU Jiayong et al.: "Intelligent crack identification for bridge structures based on UAV and Mask R-CNN" (基于无人机及Mask R-CNN的桥梁结构裂缝智能识别), China Journal of Highway and Transport (中国公路学报), vol. 34, no. 12, 31 December 2021 (2021-12-31) *
XUE Aixin et al.: "Research on the application of close-range photogrammetry to health monitoring of cable-stayed bridge towers" (近景摄影测量技术在斜拉桥桥塔健康监测中的应用研究), Journal of China & Foreign Highway (中外公路), vol. 37, 30 June 2017 (2017-06-30), pages 1-4 *
XIE Haibo: "A brief analysis of the application of UAV aerial photography to large-scale topographic mapping and the handling of common problems" (浅析无人机航摄***测绘大比例尺地形图应用及常见问题处理), Low Carbon World (低碳世界), 5 March 2016 (2016-03-05), pages 1-2 *

Similar Documents

Publication Publication Date Title
CN109238240B (en) Unmanned aerial vehicle oblique photography method considering terrain and photography system thereof
CN106529495B (en) Obstacle detection method and device for aircraft
CN110470226B (en) Bridge structure displacement measurement method based on unmanned aerial vehicle system
US20190387209A1 (en) Deep Virtual Stereo Odometry
CN112734765B (en) Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors
CN112556719B (en) Visual inertial odometer implementation method based on CNN-EKF
CN106289250A (en) A kind of course information acquisition system
CN111127540B (en) Automatic distance measurement method and system for three-dimensional virtual space
CN116222543B (en) Multi-sensor fusion map construction method and system for robot environment perception
CN112819711B (en) Monocular vision-based vehicle reverse positioning method utilizing road lane line
CN107941167B (en) Space scanning system based on unmanned aerial vehicle carrier and structured light scanning technology and working method thereof
CN117036300A (en) Road surface crack identification method based on point cloud-RGB heterogeneous image multistage registration mapping
CN116619358A (en) Self-adaptive positioning optimization and mapping method for autonomous mining robot
CN113177918B (en) Intelligent and accurate inspection method and system for electric power tower by unmanned aerial vehicle
CN113701750A (en) Fusion positioning system of underground multi-sensor
CN117032276B (en) Bridge detection method and system based on binocular vision and inertial navigation fusion unmanned aerial vehicle
CN116952229A (en) Unmanned aerial vehicle positioning method, device, system and storage medium
CN117190875A (en) Bridge tower displacement measuring device and method based on computer intelligent vision
CN110136168B (en) Multi-rotor speed measuring method based on feature point matching and optical flow method
CN117032276A (en) Bridge detection method and system based on binocular vision and inertial navigation fusion unmanned aerial vehicle
CN116182855A (en) Combined navigation method of compound eye-simulated polarized vision unmanned aerial vehicle under weak light and strong environment
CN116824433A (en) Visual-inertial navigation-radar fusion self-positioning method based on self-supervision neural network
CN107423766B (en) Method for detecting tail end motion pose of series-parallel automobile electrophoretic coating conveying mechanism
CN113720323B (en) Monocular vision inertial navigation SLAM method and device based on point-line feature fusion
WO2022246851A1 (en) Aerial survey data-based testing method and system for autonomous driving perception system, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant