CN117406789A - Automatic planning method for multi-unmanned aerial vehicle bridge support inspection route based on image analysis - Google Patents
- Publication number
- CN117406789A (application number CN202311339317.3A)
- Authority
- CN
- China
- Prior art keywords
- unmanned aerial
- aerial vehicle
- obstacle
- support structure
- support
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses an automatic planning method for multi-unmanned-aerial-vehicle bridge support inspection routes based on image analysis, relating to the technical field of unmanned aerial vehicles.
Description
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, in particular to an automatic planning method for a multi-unmanned aerial vehicle bridge support inspection route based on image analysis.
Background
Existing aerial remote sensing methods cannot keep pace with the demands of economic development, and new remote sensing technology increasingly serves economic construction and cultural undertakings. The unmanned aerial vehicle, used as an aerial remote sensing platform, is a new application technology developed to meet these demands: it better satisfies current aerial remote sensing needs and keeps outdated geographic data up to date.
Unmanned aerial vehicle aerial photography uses an unmanned aerial vehicle as the airborne platform. It carries remote sensing equipment such as a high-resolution CCD digital camera, a lightweight optical camera, an infrared scanner, a laser scanner or a magnetometer to acquire information; a computer processes the image data and produces imagery to specified accuracy requirements. The system as a whole is distinguished by its design and the optimal combination of its components, and is a new application technology integrating high-altitude photography, remote control, telemetry, microwave video transmission and computer image processing.
Regarding the monitoring of bridge bottom support positions with existing techniques: because the area under a bridge is cluttered with weeds and gravel, the surrounding environment is complex and hard to observe effectively. Unmanned aerial vehicle monitoring arose to improve the non-manual inspection efficiency for bridge bottom supports. However, traditional camera-based inspection is usually performed by a single unmanned aerial vehicle, whose monitoring coverage is limited and which cannot fully capture key information about the bridge bottom support and its surrounding area. Deploying several unmanned aerial vehicles to photograph simultaneously can enlarge the covered area to some extent, but synchronized flight causes the routes to interleave, the captured monitoring content to overlap, and operating efficiency to drop. An unmanned aerial vehicle imaging method is therefore needed that targets bridge bottom support positions and can update routes in real time during flight.
Disclosure of Invention
(I) Technical problem to be solved
Addressing the shortcomings of the prior art, the invention provides an automatic planning method for multi-unmanned-aerial-vehicle bridge support inspection routes based on image analysis. It solves the prior-art problems that a single unmanned aerial vehicle captures key information about the bridge bottom support and its surrounding area inefficiently, and that synchronized flight of several unmanned aerial vehicles causes their routes to interleave so that the content of the monitored areas is captured repeatedly.
(II) Technical scheme
In order to achieve the above purpose, the invention is realized by the following technical scheme. The invention provides an automatic planning method for multi-unmanned-aerial-vehicle bridge support inspection routes based on image analysis, which comprises the following steps:
Step 1, multi-point unmanned aerial vehicle deployment: deploy the unmanned aerial vehicles at the bridge bottom support and in its vicinity (the origin positions), so that monitoring covers the bridge bottom area and the unmanned aerial vehicle route area around the bridge bottom;
Step 2, environment scene recognition: each unmanned aerial vehicle carries an energy monitoring system, an environment scene recognition module and an emergency alert module. The environment scene recognition module comprises a computer vision unit and a sensor system and monitors the surroundings of the multi-point coverage area of step 1 in real time. The monitored information comprises the support structure, obstacle types and positions, and temperature, humidity, wind speed and wind direction; the support structure and the obstacle types and positions are obtained from image and video analysis, while temperature, humidity, wind speed and wind direction are measured by the built-in sensors of the unmanned aerial vehicles;
Step 3, unmanned aerial vehicle network data sharing: based on the area information covered by each unmanned aerial vehicle, establish a communication network and share the environmental data collected by the unmanned aerial vehicles;
Step 4, path planning based on shared data: the data collected by each unmanned aerial vehicle is uploaded to a cloud service platform, which executes a path planning algorithm on the integrated dataset to determine the optimal detection paths covering the whole detection area;
Step 5, real-time analysis: before take-off, each unmanned aerial vehicle stays at its origin position for 3 to 10 minutes and analyzes the scene within its current range; during flight it continues to collect environmental data, which the cloud service platform continuously analyzes while monitoring the state of the support area.
The invention is further arranged so that the computer vision unit analyzes the support structure and the obstacle types, specifically as follows:
the high-resolution camera on each unmanned aerial vehicle collects image and video data, photographing the surroundings at intervals of 1 s and continuously recording video along the route;
the acquired images and videos are passed to the computer vision unit on the unmanned aerial vehicle, which applies deep learning and image processing techniques to detect the support structure and obstacles;
the analyzed environmental information (temperature, humidity, wind speed and wind direction) together with the support structure and the obstacle types and positions is transmitted to the cloud service platform in real time over the communication network;
The invention is further arranged so that the image and video analysis for support structure and obstacle detection specifically comprises:
acquiring images and video frames from the unmanned aerial vehicle camera for support structure and obstacle detection;
adjusting the brightness and contrast of the acquired images and removing noise with a Gaussian filter;
detecting the support structure and obstacles with the deep learning object detection model Faster R-CNN and outputting their bounding boxes and classes;
transmitting the detection and analysis results to the cloud service platform;
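As an illustration of the preprocessing step above, the following sketch performs the brightness/contrast adjustment and Gaussian denoising with plain NumPy; the parameter values (alpha, beta, kernel size, sigma) are illustrative assumptions, not values specified by the patent.

```python
import numpy as np

def adjust_brightness_contrast(img, alpha=1.2, beta=10.0):
    """Linear contrast (alpha) and brightness (beta) adjustment,
    clipped to the valid 8-bit range."""
    return np.clip(alpha * img.astype(np.float64) + beta, 0, 255).astype(np.uint8)

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_filter(img, size=5, sigma=1.0):
    """Denoise a single-channel image by direct convolution with a
    Gaussian kernel; edges are handled with reflection padding."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img.astype(np.float64), pad, mode="reflect")
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

In practice an on-board implementation would use an optimized library routine rather than the explicit double loop, but the arithmetic is the same.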
The invention is further arranged so that detecting the support structure and obstacles specifically comprises:
preparing a labelled dataset in advance, containing images of support structures and obstacles with their corresponding bounding boxes and class labels;
resizing and reformatting the acquired images and feeding them into the Faster R-CNN model;
obtaining predictions for the support structure and obstacles through forward propagation;
the model outputs each bounding box as B = (x, y, w, h), where x and y are the coordinates of the top-left corner and w and h are the width and height;
the model simultaneously outputs a class prediction C, namely support structure or obstacle;
outputting the bounding box B and class C of the support structure and obstacles as the detection result;
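The detection output format described above (bounding box B = (x, y, w, h) plus class C) can be sketched as a small post-processing step. The Detection record, the corner-format input and the 0.5 score threshold are assumptions for illustration, since detectors such as Faster R-CNN typically emit corner-format boxes with confidence scores.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detected object: bounding box B = (x, y, w, h) with (x, y)
    the top-left corner, plus a class label C and a confidence score."""
    x: float
    y: float
    w: float
    h: float
    label: str   # "support_structure" or "obstacle"
    score: float

def corners_to_xywh(x1, y1, x2, y2):
    """Convert the (x1, y1, x2, y2) corner format commonly emitted by
    detectors into the B = (x, y, w, h) format used in this document."""
    return (x1, y1, x2 - x1, y2 - y1)

def collect_detections(raw, score_threshold=0.5):
    """Keep only confident predictions and repackage them as Detection
    records ready for upload to the cloud platform."""
    out = []
    for (x1, y1, x2, y2), label, score in raw:
        if score >= score_threshold:
            x, y, w, h = corners_to_xywh(x1, y1, x2, y2)
            out.append(Detection(x, y, w, h, label, score))
    return out
```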
The invention is further arranged so that training the Faster R-CNN model for support structure and obstacle detection specifically comprises:
defining the total loss function L_total = L_loc + L_cls + L_reg,
where L_loc is the localization loss, L_cls is the classification loss and L_reg is the regularization loss;
defining the ground-truth coordinates of bounding box B as B_gt and the predicted coordinates as B_pd, and computing the localization loss with the smooth L1 loss: L_loc = SmoothL1(B_gt - B_pd), where SmoothL1 is the smooth L1 loss function;
defining the ground-truth label of class C as C_gt and the predicted probability as C_pd, and computing the classification loss with the cross-entropy loss: L_cls = -Σ C_gt log(C_pd);
adding a weight decay term;
computing the gradients of the loss function with respect to the model parameters by the back-propagation algorithm;
updating the model parameters with Adam to minimize the total loss function;
repeating the training iterations until the model converges;
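A minimal sketch of the loss computation described above, assuming the components combine by simple addition as in L_total = L_loc + L_cls + L_reg; the beta parameter of the smooth L1 loss and the weight decay coefficient are illustrative assumptions.

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 (Huber) loss used for box regression: quadratic for
    small residuals, linear for large ones."""
    diff = np.abs(pred - target)
    return np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta).sum()

def cross_entropy(probs, true_idx, eps=1e-12):
    """Cross-entropy classification loss for one sample, given the
    predicted probability distribution and the true class index."""
    return -np.log(probs[true_idx] + eps)

def total_loss(box_pred, box_gt, cls_probs, cls_gt, params, weight_decay=1e-4):
    """L_total = L_loc + L_cls + L_reg, matching the decomposition above;
    L_reg is an L2 weight-decay term over the model parameters."""
    l_loc = smooth_l1(box_pred, box_gt)
    l_cls = cross_entropy(cls_probs, cls_gt)
    l_reg = weight_decay * sum(np.sum(p ** 2) for p in params)
    return l_loc + l_cls + l_reg
```

In an actual Faster R-CNN training loop these terms are computed per region proposal and minimized with Adam, as the steps above state.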
The invention is further arranged so that the prediction of class C is expressed as a probability distribution, with the probability P(C_i) given by the softmax function: P(C_i) = exp(z_i) / Σ_j exp(z_j),
where C_i is the i-th class, z_i is the model's output score for class i, and the N values P(C_i) give the probabilities of the different classes;
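The softmax mapping from class scores z to probabilities P(C_i) can be sketched as follows; the helper that picks a named class is an illustrative addition.

```python
import numpy as np

def softmax(z):
    """P(C_i) = exp(z_i) / sum_j exp(z_j). Subtracting max(z) first is the
    standard numerical-stability trick and does not change the result."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def predict_class(z, classes=("support_structure", "obstacle")):
    """Pick the class with the highest softmax probability; the class
    names are those used in this document."""
    return classes[int(np.argmax(softmax(z)))]
```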
The invention is further arranged so that the optimal detection path is planned as follows:
each unmanned aerial vehicle uploads its collected environmental data to the cloud service platform, the environmental data comprising the position of the vehicle, sensor measurements and obstacle information;
the cloud service platform receives the data from the multiple unmanned aerial vehicles and integrates it into a comprehensive environment dataset;
path planning is performed with a deep reinforcement learning model to generate a detection path for each unmanned aerial vehicle;
the generated detection paths are converted into flight plans for the unmanned aerial vehicles, each flight plan comprising waypoints and a flight speed;
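A minimal sketch of the cloud-side data integration step, assuming each unmanned aerial vehicle uploads a simple report dictionary; the field names and the de-duplication by (type, position) are illustrative assumptions, not part of the patent.

```python
def integrate(uav_reports):
    """Merge per-UAV uploads into one comprehensive environment dataset:
    a shared obstacle list plus the latest position and sensor readings
    of every vehicle, keyed by UAV id."""
    dataset = {"uav_states": {}, "obstacles": []}
    seen = set()
    for report in uav_reports:
        dataset["uav_states"][report["uav_id"]] = {
            "position": report["position"],
            "sensors": report["sensors"],
        }
        for obs in report["obstacles"]:
            key = (obs["type"], obs["position"])
            if key not in seen:  # de-duplicate obstacles seen by several UAVs
                seen.add(key)
                dataset["obstacles"].append(obs)
    return dataset
```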
The invention is further arranged so that path planning with the deep reinforcement learning model specifically comprises:
defining a state space representing the state of each unmanned aerial vehicle, including its position, sensor measurements and environment information;
defining an action space representing the actions an unmanned aerial vehicle can take, including subsequent waypoints and paths;
defining the objective function J of path planning based on a deep Q-network: J = E[ Σ_{t=0}^{T} γ^t R_t ],
where T is the number of time steps, γ is the discount factor and R_t is the reward at each time step;
training the deep reinforcement learning model to learn to select the optimal action, namely the next waypoint, so as to maximize the objective function J;
at each time step, selecting the next waypoint of each unmanned aerial vehicle according to the deep reinforcement learning model to generate the detection path;
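Two pieces of the deep-Q-network planning loop above can be sketched directly: the discounted objective J = Σ γ^t R_t and the per-step waypoint choice. The epsilon-greedy selection and the concrete gamma/epsilon values are illustrative assumptions; in the patent's setting the Q-values would come from the trained network.

```python
import numpy as np

def discounted_return(rewards, gamma=0.95):
    """Objective J = sum over t of gamma^t * R_t that the planner maximizes."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def choose_waypoint(q_values, epsilon=0.1, rng=None):
    """Epsilon-greedy selection over candidate waypoints: with probability
    epsilon explore a random waypoint, otherwise take the highest Q-value."""
    rng = rng or np.random.default_rng(0)
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))
```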
The invention is further arranged so that, in step 5:
when the system detects an abnormal condition, or the energy monitoring system determines that the unmanned aerial vehicle's remaining energy cannot support flying the route, the emergency alert module notifies the operator in real time while the cloud service platform automatically re-plans the vehicle's route.
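A minimal sketch of the emergency-trigger decision just described; the energy units and the 20% safety reserve are illustrative assumptions, not values from the patent.

```python
def needs_intervention(battery_remaining_wh, route_energy_wh,
                       anomaly_detected, reserve_factor=1.2):
    """Decide whether the emergency module must alert the operator and the
    cloud platform must re-plan: either an anomaly was detected, or the
    remaining energy cannot cover the planned route plus a safety reserve
    (reserve_factor = 1.2 is an assumed 20% margin)."""
    low_energy = battery_remaining_wh < reserve_factor * route_energy_wh
    return anomaly_detected or low_energy
```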
(III) Beneficial effects
The invention provides an automatic planning method for multi-unmanned-aerial-vehicle bridge support inspection routes based on image analysis, with the following beneficial effects:
Several unmanned aerial vehicles are deployed from origin positions to cover the bridge bottom support and its surrounding area for all-round detection. Each vehicle carries an environment scene recognition module comprising a computer vision unit and a sensor system, which monitors the support structure, obstacles, temperature, humidity, wind speed and wind direction in real time. Deep learning and image processing provide advanced image analysis of the support structure and obstacles: a Faster R-CNN model performs object detection and accurately identifies their positions and classes.
Based on the positions and classes of the support structure and obstacles, the unmanned aerial vehicles share data over the network: the environmental data collected by each vehicle is transmitted to the cloud service platform and integrated, and path planning on the integrated dataset with a deep reinforcement learning model lets each vehicle autonomously select the best monitoring route, improving efficiency and coverage.
In the actual imaging stage, each unmanned aerial vehicle stays at its origin position for 3 to 10 minutes before take-off and analyzes the scene within its current range; scanning the currently monitored area yields a route plan within visual range. After take-off, scanning the real-time support structure and the positions and types of obstacles yields, after analysis, a scheme for subsequent route changes, ensuring that high-quality monitoring data is acquired.
An emergency alert module is also added: when the system detects an abnormal condition or an energy problem, the operator is notified in real time while the cloud service platform automatically re-plans the route, safeguarding flight safety.
This solves the prior-art problems that a single unmanned aerial vehicle captures key information about the bridge bottom support and its surrounding area inefficiently, and that synchronized flight of several vehicles interleaves their routes and duplicates the monitored content. The bridge bottom support, a part that is difficult to observe during bridge inspection because of its complex surroundings, is thus inspected by unmanned aerial vehicle, a more convenient means.
Drawings
FIG. 1 is a flow chart of the automatic planning method for multi-unmanned-aerial-vehicle bridge support inspection routes based on image analysis;
FIG. 2 is a flow chart of support structure and obstacle detection in the automatic planning method for multi-unmanned-aerial-vehicle bridge support inspection routes based on image analysis.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the protection scope of the invention.
Examples
Referring to FIGS. 1-2, the invention provides an automatic planning method for multi-unmanned-aerial-vehicle bridge support inspection routes based on image analysis, which comprises the following steps:
Step 1, multi-point unmanned aerial vehicle deployment: deploy the unmanned aerial vehicles at the bridge bottom support and in its vicinity (the origin positions), so that monitoring covers the bridge bottom area and the unmanned aerial vehicle route area around the bridge bottom;
Step 2, environment scene recognition: each unmanned aerial vehicle carries an energy monitoring system, an environment scene recognition module and an emergency alert module. The environment scene recognition module comprises a computer vision unit and a sensor system and monitors the surroundings of the multi-point coverage area of step 1 in real time. The monitored information comprises the support structure, obstacle types and positions, and temperature, humidity, wind speed and wind direction; the support structure and the obstacle types and positions are obtained from image and video analysis, while temperature, humidity, wind speed and wind direction are measured by the built-in sensors of the unmanned aerial vehicles;
The computer vision unit analyzes the support structure and the obstacle types, specifically as follows:
the high-resolution camera on each unmanned aerial vehicle collects image and video data, photographing the surroundings at intervals of 1 s and continuously recording video along the route;
the acquired images and videos are passed to the computer vision unit on the unmanned aerial vehicle, which applies deep learning and image processing techniques to detect the support structure and obstacles;
the analyzed environmental information (temperature, humidity, wind speed and wind direction) together with the support structure and the obstacle types and positions is transmitted to the cloud service platform in real time over the communication network;
The image and video analysis for support structure and obstacle detection specifically comprises:
acquiring images and video frames from the unmanned aerial vehicle camera for support structure and obstacle detection;
adjusting the brightness and contrast of the acquired images and removing noise with a Gaussian filter;
detecting the support structure and obstacles with the deep learning object detection model Faster R-CNN and outputting their bounding boxes and classes;
transmitting the detection and analysis results to the cloud service platform;
Detecting the support structure and obstacles specifically comprises:
preparing a labelled dataset in advance, containing images of support structures and obstacles with their corresponding bounding boxes and class labels;
resizing and reformatting the acquired images and feeding them into the Faster R-CNN model;
obtaining predictions for the support structure and obstacles through forward propagation;
the model outputs each bounding box as B = (x, y, w, h), where x and y are the coordinates of the top-left corner and w and h are the width and height;
the model simultaneously outputs a class prediction C, namely support structure or obstacle;
outputting the bounding box B and class C of the support structure and obstacles as the detection result;
Training the Faster R-CNN model for support structure and obstacle detection specifically comprises:
defining the total loss function L_total = L_loc + L_cls + L_reg,
where L_loc is the localization loss, L_cls is the classification loss and L_reg is the regularization loss;
defining the ground-truth coordinates of bounding box B as B_gt and the predicted coordinates as B_pd, and computing the localization loss with the smooth L1 loss: L_loc = SmoothL1(B_gt - B_pd), where SmoothL1 is the smooth L1 loss function;
defining the ground-truth label of class C as C_gt and the predicted probability as C_pd, and computing the classification loss with the cross-entropy loss: L_cls = -Σ C_gt log(C_pd);
adding a weight decay term;
computing the gradients of the loss function with respect to the model parameters by the back-propagation algorithm;
updating the model parameters with Adam to minimize the total loss function;
repeating the training iterations until the model converges;
The prediction of class C is expressed as a probability distribution, with the probability P(C_i) given by the softmax function: P(C_i) = exp(z_i) / Σ_j exp(z_j),
where C_i is the i-th class, z_i is the model's output score for class i, and the N values P(C_i) give the probabilities of the different classes;
Step 3, unmanned aerial vehicle network data sharing: based on the area information covered by each unmanned aerial vehicle, establish a communication network and share the environmental data collected by the unmanned aerial vehicles;
Step 4, path planning based on shared data: the data collected by each unmanned aerial vehicle is uploaded to a cloud service platform, which executes a path planning algorithm on the integrated dataset to determine the optimal detection paths covering the whole detection area;
The optimal monitoring path is planned as follows:
each unmanned aerial vehicle uploads its collected environmental data to the cloud service platform, the environmental data comprising the position of the vehicle, sensor measurements and obstacle information;
the cloud service platform receives the data from the multiple unmanned aerial vehicles and integrates it into a comprehensive environment dataset;
path planning is performed with a deep reinforcement learning model to generate a detection path for each unmanned aerial vehicle;
the generated detection paths are converted into flight plans for the unmanned aerial vehicles, each flight plan comprising waypoints and a flight speed;
Path planning with the deep reinforcement learning model specifically comprises:
defining a state space representing the state of each unmanned aerial vehicle, including its position, sensor measurements and environment information;
defining an action space representing the actions an unmanned aerial vehicle can take, including subsequent waypoints and paths;
defining the objective function J of path planning based on a deep Q-network: J = E[ Σ_{t=0}^{T} γ^t R_t ],
where T is the number of time steps, γ is the discount factor and R_t is the reward at each time step;
training the deep reinforcement learning model to learn to select the optimal action, namely the next waypoint, so as to maximize the objective function J;
at each time step, selecting the next waypoint of each unmanned aerial vehicle according to the deep reinforcement learning model to generate the monitoring path;
Step 5, real-time analysis: before take-off, each unmanned aerial vehicle stays at its origin position for 3 to 10 minutes and analyzes the scene within its current range; during flight it continues to collect environmental data, which the cloud service platform continuously analyzes while monitoring the state of the support area;
In step 5:
when the system detects an abnormal condition, or the energy monitoring system determines that the unmanned aerial vehicle's remaining energy cannot support flying the route, the emergency alert module notifies the operator in real time while the cloud service platform automatically re-plans the vehicle's route.
Example 1
S1, multi-point deployment of 5 unmanned aerial vehicles: deploy the vehicles at the bridge bottom support and in its vicinity (the origin positions), so that monitoring covers the bridge bottom area and the unmanned aerial vehicle route area around the bridge bottom;
S2, environment scene recognition: each unmanned aerial vehicle carries an energy monitoring system, an environment scene recognition module and an emergency alert module. The environment scene recognition module comprises a computer vision unit and a sensor system and monitors the surroundings of the multi-point coverage area of S1 in real time. The monitored information comprises the support structure, obstacle types and positions, and temperature, humidity, wind speed and wind direction; the support structure and the obstacle types and positions are obtained from image and video analysis, while temperature, humidity, wind speed and wind direction are measured by the built-in sensors of the unmanned aerial vehicles;
S3, unmanned aerial vehicle network data sharing: based on the area information covered by each unmanned aerial vehicle, establish a communication network and share the environmental data collected by the vehicles;
S4, path planning based on shared data: the data collected by each unmanned aerial vehicle is uploaded to a cloud service platform, which executes a path planning algorithm on the integrated dataset to determine the optimal monitoring paths covering the whole detection area. The path planning specifically comprises:
each unmanned aerial vehicle uploads its collected environmental data to the cloud service platform, the environmental data comprising the position of the vehicle, sensor measurements and obstacle information;
the cloud service platform receives the data from the multiple unmanned aerial vehicles and integrates it into a comprehensive environment dataset;
path planning is performed with a deep reinforcement learning model to generate a detection path for each unmanned aerial vehicle;
the generated detection paths are converted into flight plans for the unmanned aerial vehicles, each flight plan comprising waypoints and a flight speed;
the objective function J of path planning is defined based on a deep Q-network: J = E[ Σ_{t=0}^{T} γ^t R_t ], where T is the number of time steps, γ is the discount factor and R_t is the reward at each time step;
the deep reinforcement learning model is trained to learn to select the optimal action, namely the next waypoint, so as to maximize the objective function J;
at each time step, the next waypoint of each unmanned aerial vehicle is selected according to the deep reinforcement learning model to generate the monitoring path;
S5, before take-off, each unmanned aerial vehicle stays at its origin position for 3 to 10 minutes and analyzes the scene within its current range; during flight it continues to collect environmental data, which the cloud service platform continuously analyzes while monitoring the state of the support area.
Three different bridge bottom supports were inspected both with the method of Example 1 and with the unmanned aerial vehicle imaging method for bridge bottom support parts of application number CN113640830A. The results obtained with the method of Example 1 form experimental groups 1, 2 and 3, and the results obtained with the detection method of CN113640830A form control groups 1, 2 and 3.
In combination with the above, in the present application:
according to the multi-unmanned-plane bridge support inspection route automatic planning method based on image analysis, a plurality of unmanned planes are deployed on the basis of original points, a bridge base support and surrounding areas of the bridge base support are covered, and all-round monitoring is carried out, wherein each unmanned plane is provided with an environment scene recognition module, the environment scene recognition module comprises a computer vision unit and a sensor system, parameters of a support structure, obstacles, temperature, humidity, wind speed and wind direction are monitored in real time, advanced image analysis of the support structure and the obstacles is achieved through deep learning and image processing technology, target detection is carried out by adopting a Faster R-CNN model, and positions and categories of the support structure and the obstacles are accurately recognized.
Based on the positions and categories of the support structure and obstacles, the unmanned aerial vehicles share data over a network: the environmental data collected by each unmanned aerial vehicle are transmitted to the cloud service platform for data integration. Based on the integrated data set, a deep reinforcement learning model then performs path planning, so that each unmanned aerial vehicle can autonomously select the best monitoring path, improving efficiency and coverage.
In the actual shooting stage, the unmanned aerial vehicle stays at its original point for 3 to 10 minutes before taking off to analyze the scene within the current range, and a route plan within the visual range is obtained by scanning the currently monitored area. After take-off, a subsequent path-change scheme is derived from real-time scanning of the support structure and of obstacle positions and types, ensuring the acquisition of high-quality monitoring data.
Meanwhile, an emergency reminding module is added: when the system detects an abnormal condition or an energy problem, an operator is notified in real time, and the cloud service platform automatically re-plans the unmanned aerial vehicle route, guaranteeing flight safety.
It is to be understood that the above examples of the present invention are provided by way of illustration only and are not intended to limit the embodiments of the present invention. Other variations or modifications will be apparent to those of ordinary skill in the art in light of the above teachings; it is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention is intended to be protected by the following claims.
Claims (9)
1. The automatic planning method for the inspection route of the multi-unmanned aerial vehicle bridge support based on image analysis is characterized by comprising the following steps:
step 1, multi-point unmanned aerial vehicle deployment, namely deploying unmanned aerial vehicles at and near the bridge bottom support, namely at original point positions, to monitor and cover the bridge bottom area and the unmanned aerial vehicle route area around the bridge bottom;
step 2, environment scene recognition, wherein each unmanned aerial vehicle is provided with an energy monitoring system, an environment scene recognition module and an emergency reminding module; the environment scene recognition module comprises a computer vision unit and a sensor system and monitors in real time the surrounding environment of the multi-point coverage area of step 1; the monitoring information comprises the support structure, obstacle types and positions, and temperature, humidity, wind speed and wind direction parameters, wherein the support structure and the obstacle types and positions are obtained from image and video analysis, and temperature, humidity, wind speed and wind direction are obtained from the unmanned aerial vehicle's built-in sensors;
step 3, unmanned aerial vehicle network data sharing, based on the area information covered by each unmanned aerial vehicle, establishing a communication network, and sharing the environmental data collected by the unmanned aerial vehicle;
step 4, based on path planning of shared data, uploading the acquired data of each unmanned aerial vehicle to a cloud service platform, and based on the cloud service platform, executing a path planning algorithm by utilizing an integrated data set to determine an optimal monitoring path and covering the whole detection area;
and step 5, before taking off, the unmanned aerial vehicle stays at the original point for 3 to 10 minutes to analyze the scene within the current range; during the flight stage, environmental data are continuously acquired and continuously analyzed by the cloud service platform, and the state of the support area is monitored.
2. The automatic planning method for the inspection route of the multi-unmanned aerial vehicle bridge support based on the image analysis according to claim 1, wherein the method for analyzing the support structure and the obstacle type by the computer vision unit is specifically as follows:
collecting image and video data with the high-resolution camera mounted on each unmanned aerial vehicle, photographing peripheral images at 1 s intervals and continuously recording video around the route;
the acquired images and videos are transmitted to the computer vision unit on the unmanned aerial vehicle, which detects the support structure and obstacles using deep learning and image processing technology;
the analyzed environmental information such as temperature, humidity, wind speed and wind direction, together with the support structure and obstacle type and position information, is transmitted to the cloud service platform in real time through the communication network.
4. The automatic planning method for the inspection route of the multi-unmanned aerial vehicle bridge support based on the image analysis according to claim 2, wherein the image and video analysis step for support structure and obstacle detection specifically comprises the following steps:
acquiring images and video frames from the unmanned aerial vehicle camera for support structure and obstacle detection;
brightness and contrast adjustment is carried out on the acquired image, and a Gaussian filter is used for removing noise in the image;
detecting the support structure and obstacles using the deep learning target detection model Faster R-CNN, and outputting support structure and obstacle bounding boxes and categories;
and transmitting the detection and analysis results to the cloud service platform.
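As a non-claimed illustration of the preprocessing in this claim, the brightness/contrast adjustment and Gaussian denoising can be sketched in plain NumPy; the linear adjustment formula and the kernel size and sigma values are illustrative assumptions, not values from the claim:

```python
import numpy as np

def adjust_brightness_contrast(img, alpha=1.2, beta=10.0):
    # Linear adjustment out = alpha * in + beta, clipped to the 8-bit range
    return np.clip(alpha * img.astype(np.float32) + beta, 0, 255).astype(np.uint8)

def gaussian_kernel(ksize=5, sigma=1.0):
    # Normalized 2-D Gaussian kernel
    ax = np.arange(ksize) - ksize // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def denoise(img, ksize=5, sigma=1.0):
    # "Same"-size convolution with the Gaussian kernel (edge padding)
    k = gaussian_kernel(ksize, sigma)
    pad = ksize // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + ksize, j:j + ksize] * k)
    return out.astype(np.uint8)
```

In practice an optimized library routine would replace the explicit convolution loop; the sketch only makes the two preprocessing operations of the claim concrete.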
4. The automatic planning method for the inspection route of the multi-unmanned aerial vehicle bridge support based on the image analysis according to claim 2, wherein the step of detecting the support structure and the obstacle comprises the following steps:
preparing in advance an annotated dataset containing images of the support structure and obstacles together with their corresponding bounding boxes and class labels;
adjusting the size and format of the acquired image and inputting it into the Faster R-CNN model;
obtaining a prediction result of the support structure and the obstacle through forward propagation calculation;
the model outputs the bounding box of the support structure or obstacle, denoted B = (x, y, w, h), where x and y are the coordinates of the upper-left corner of the bounding box, and w and h are its width and height;
the model simultaneously outputs a category prediction result C for the support structure and obstacle, specifically "support structure" or "obstacle";
outputting the boundary box B and the class C of the support structure and the obstacle as detection results.
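The bounding box B = (x, y, w, h) and category C output by this claim can be illustrated with a small post-processing sketch; the corner-format input boxes, the label map and the score threshold are assumptions about a typical Faster R-CNN detection head, not values specified in the claim:

```python
import numpy as np

CLASSES = {1: "support_structure", 2: "obstacle"}  # illustrative label map

def format_detections(boxes_xyxy, labels, scores, score_thresh=0.5):
    """Convert (x1, y1, x2, y2) detector boxes into the (x, y, w, h)
    form B used in the claims, keeping only confident detections."""
    results = []
    for box, label, score in zip(boxes_xyxy, labels, scores):
        if score < score_thresh:
            continue
        x1, y1, x2, y2 = box
        # B = (x, y, w, h): upper-left corner plus width and height
        B = (float(x1), float(y1), float(x2 - x1), float(y2 - y1))
        C = CLASSES.get(int(label), "unknown")
        results.append({"B": B, "C": C, "score": float(score)})
    return results
```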
5. The automatic planning method for the inspection route of the multi-unmanned aerial vehicle bridge support based on the image analysis according to claim 4, wherein the step of training the Faster R-CNN model for detecting the support structure and obstacles is specifically as follows:
defining a total loss function: L_total = L_loc + L_cls + L_reg,
wherein L_loc is the localization loss, L_cls is the classification loss, and L_reg is the regularization loss;
defining the ground-truth coordinates of the bounding box B as B_gt and the predicted coordinates as B_pd, the localization loss is calculated using the smooth L1 loss: L_loc = smooth_L1(B_pd − B_gt),
wherein smooth_L1 is the smooth L1 loss function;
defining the true label of category C as C_gt and the predicted probability as C_pd, the classification loss is calculated using the cross-entropy loss: L_cls = −C_gt log(C_pd);
and adding a weight decay term as the regularization loss;
calculating the gradient of the loss function to the model parameters using a back propagation algorithm;
updating model parameters using Adam to minimize the total loss function;
the training iterations are repeated a number of times until the model converges.
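The three loss terms of the training step above can be sketched numerically; the weight-decay coefficient and the unit Huber threshold are illustrative assumptions, and a real implementation would compute these losses over mini-batches inside the Faster R-CNN training loop:

```python
import numpy as np

def smooth_l1(pred, target):
    # Smooth L1 (Huber) loss used as the localization loss L_loc
    d = np.abs(pred - target)
    return float(np.where(d < 1.0, 0.5 * d**2, d - 0.5).sum())

def cross_entropy(probs, true_idx):
    # Classification loss L_cls: negative log-likelihood of the true class
    return float(-np.log(probs[true_idx]))

def total_loss(pred_box, gt_box, probs, true_idx, weights, weight_decay=1e-4):
    # L_total = L_loc + L_cls + L_reg, with L_reg a weight-decay term
    L_loc = smooth_l1(pred_box, gt_box)
    L_cls = cross_entropy(probs, true_idx)
    L_reg = weight_decay * float(np.sum(weights**2))
    return L_loc + L_cls + L_reg
```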
6. The automatic planning method for the inspection route of the multi-unmanned aerial vehicle bridge support based on image analysis according to claim 2, wherein the prediction result of category C is represented as a probability distribution, the probability P(C_i) of each category being given by the softmax function: P(C_i) = exp(z_i) / Σ_{j=1}^{N} exp(z_j),
wherein C_i is the i-th category, z_j is the model output score for category j, and N is the number of categories.
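The softmax mapping of this claim, from the N model output scores z_j to the category probabilities P(C_i), can be written as a minimal numerically stable sketch:

```python
import numpy as np

def softmax(z):
    # Subtract the max score for numerical stability; the result is unchanged
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()  # P(C_i) = exp(z_i) / sum_j exp(z_j)
```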
7. The automatic planning method for the inspection route of the multi-unmanned aerial vehicle bridge support based on image analysis according to claim 1, wherein the optimal monitoring path planning method is specifically:
each unmanned aerial vehicle uploads the collected environmental data to a cloud service platform, wherein the environmental data comprises the positions of the unmanned aerial vehicles, sensor measured values and obstacle information;
the cloud service platform receives and integrates data from a plurality of unmanned aerial vehicles into a comprehensive environment data set;
performing path planning based on the deep reinforcement learning model to generate a detection path of each unmanned aerial vehicle;
and converting the generated detection path into a flight plan of the unmanned aerial vehicle, including waypoints and flight speeds.
8. The automatic planning method for the inspection route of the multi-unmanned aerial vehicle bridge support based on the image analysis according to claim 7, wherein the method for executing path planning based on the deep reinforcement learning model is specifically as follows:
defining a state space, representing the state of the unmanned aerial vehicle, including the position, the sensor measurement value and the environment information;
defining an action space representing the actions the unmanned aerial vehicle can take, including subsequent waypoints and paths;
defining the objective function J of path planning based on a deep Q network: J = E[ Σ_{t=0}^{T} γ^t · R_t ],
wherein T is the number of time steps, γ is the discount factor, and R_t is the reward at each time step;
training with the deep reinforcement learning model, learning to select the optimal action, namely the subsequent waypoint, and maximizing the objective function J;
and in each time step, selecting the next waypoint of each unmanned aerial vehicle according to the deep reinforcement learning model to generate a detection path.
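The waypoint-selection loop of this claim can be illustrated with a tabular Q-learning simplification; a real system would replace the Q table with the deep Q network of the claim, and the grid size, reward values and hyperparameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N_WAYPOINTS, N_ACTIONS = 5, 5          # states = current waypoint, actions = next waypoint
Q = np.zeros((N_WAYPOINTS, N_ACTIONS))
R = rng.uniform(0.0, 1.0, (N_WAYPOINTS, N_ACTIONS))  # stand-in coverage reward R_t
gamma, lr, eps = 0.9, 0.1, 0.1         # discount factor, learning rate, exploration rate

for episode in range(500):
    s = 0
    for t in range(20):                 # T time steps per flight
        # epsilon-greedy selection of the next waypoint
        a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        r, s_next = R[s, a], a
        # Q-learning update toward maximizing the discounted return J
        Q[s, a] += lr * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

def next_waypoint(s):
    # Greedy policy: the learned "optimal action" for the current state
    return int(np.argmax(Q[s]))
```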
9. The automatic planning method for the inspection route of the multi-unmanned aerial vehicle bridge support based on the image analysis according to claim 1, wherein in step 5:
when the system detects an abnormal condition, or the energy monitoring system detects that the current unmanned aerial vehicle cannot support the planned route, the emergency reminding module notifies an operator in real time, and the cloud service platform automatically re-plans the unmanned aerial vehicle route.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311339317.3A CN117406789A (en) | 2023-10-16 | 2023-10-16 | Automatic planning method for multi-unmanned aerial vehicle bridge support inspection route based on image analysis |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117406789A true CN117406789A (en) | 2024-01-16 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118075728A (en) * | 2024-04-24 | 2024-05-24 | 北京语言大学 | Unmanned aerial vehicle response decision-making method and device for emergency communication scene |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |