CN112363526B - System and method for automatically planning fine route for unmanned aerial vehicle inspection - Google Patents
- Publication number
- CN112363526B (application CN202010852227.4A)
- Authority
- CN
- China
- Prior art keywords
- aerial vehicle
- unmanned aerial
- detection
- route
- edge computing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Landscapes
- Engineering & Computer Science (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Navigation (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a system and method for automatically planning a fine route for unmanned aerial vehicle inspection. The system comprises an unmanned aerial vehicle together with an edge computing device and a controller mounted on the unmanned aerial vehicle, and finally obtains an accurate route along which the target detection objects can be reliably photographed in the same environment. The invention enables automatic fine-grained operation in complex environments, which helps free manpower and improve inspection efficiency.
Description
Technical Field
The invention relates to automatic planning methods, in particular to a system for automatically planning a fine route for unmanned aerial vehicle inspection, and further to a corresponding automatic planning method.
Background
Existing unmanned aerial vehicle inspection typically plans a route in advance on the ground with route-planning software and then carries out the inspection along that route. The drawback of this mode is that, during inspection and maintenance work in mountainous and hilly areas, signal occlusion shortens the effective working distance. Another existing mode is to operate the unmanned aerial vehicle manually for inspection; human factors then lead to non-uniform photo standards, uneven data quality, low inspection efficiency, and similar problems.
Disclosure of Invention
To solve these problems, the invention provides a system and method for automatically planning a fine route for unmanned aerial vehicle inspection. They adopt an automatic fine inspection scheme for the unmanned aerial vehicle, enable automatic fine-grained operation in complex environments, help free manpower, and improve inspection efficiency.
The technical scheme of the invention is as follows:
A system for automatically planning a fine route for unmanned aerial vehicle inspection comprises an unmanned aerial vehicle together with an edge computing device and a controller mounted on the unmanned aerial vehicle;
the unmanned aerial vehicle comprises a video acquisition unit and photographs the detection objects at different waypoints according to the settings in the route;
the edge computing device collects the output of the video acquisition unit at the different waypoints, coarsely filters the video stream, and then runs neural-network detection on each frame;
the controller collects the detection results from the edge computing device, controls the unmanned aerial vehicle and the video acquisition unit accordingly, photographs the detection objects at the different waypoints, and records each waypoint's GPS information, time, number of photographs, number of adjustments, stability, and sharpness; the final result is an accurate route along which the target detection objects can be reliably photographed in the same environment.
Furthermore, after takeoff, the unmanned aerial vehicle flies around the tower at a range of 3-10 meters and positions itself accurately using GPS; while hovering, the unmanned aerial vehicle sways within a range of 10-50 centimeters.
Further, the coarse filtering of the video stream proceeds as follows:
after a waypoint is reached, the video acquisition unit performs a wide-range scan and the edge computing device starts a tower-recognition neural network, which first locates the tower closest to the unmanned aerial vehicle and then identifies the tower's type, orientation, coordinates, and size from the video stream at a recognition speed of 10 frames per second; once the tower is recognized, it is cropped out of each frame, filtering out the clutter around it.
Further, the neural-network detection of each frame proceeds as follows:
each detected insulator in the frame is magnified and photographed, and the edge computing device switches the neural network to an object-recognition network; this network identifies the detection objects in the cropped tower pictures, including insulators, tower plates, tower top corners, and tower base supports, together with their types, orientations, coordinates, sizes, and sharpness; at this point the controller records the coordinates, inclination angle, and orientation of the unmanned aerial vehicle and the inclination angle and orientation of the camera as a first pose.
Further, the controller collects the detection results from the edge computing device and controls the unmanned aerial vehicle and the video acquisition unit as follows:
step (1): according to the position of the first detection object in the frame, the controller moves the camera so that the insulator is brought to the exact center of the image, with image recognition tracking the insulator at 10 frames per second;
step (2): zoom in and focus gradually so that the detected object grows in the frame; along the way, image recognition keeps tracking the insulator at 10 frames per second, while the camera's inclination angle is continuously adjusted to keep the insulator at the center of the image;
step (3): when the detected object fills 1/4 of the total picture area, i.e. its width and height are each half those of the picture, photograph the insulator;
step (4): restore the camera's focal length and inclination angle to the first pose and repeat from step (1) for the next detected object.
Further, in step (1), a linear matching algorithm is used when tracking the detection objects: the detection objects of the first frame and those of the second frame generate a loss matrix, which records the loss value between every pair of detection objects across the two frames;
the more likely two detection objects are to match, the smaller the loss value; conversely, the less likely they are to match, the larger the loss value; virtual detection objects are generated at the same time in case the detection objects cannot all be matched with each other; the loss value between detection objects in different frames is determined from their types, coordinates, aspect ratios, sharpness, and colors, combined with different weights.
Further, after all detection objects at a waypoint have been scanned, the waypoint's GPS information, time, number of photographs, number of adjustments, stability, and sharpness are recorded; the unmanned aerial vehicle then flies to the next waypoint and photographs the insulators again from different angles; the route recorded and produced once the inspection is finished is an accurate route along which the target detection objects can be reliably photographed in the same environment.
The method for automatically planning a fine route based on the above system proceeds as follows:
the unmanned aerial vehicle photographs the detection objects at different waypoints according to the settings in the route;
the edge computing device collects the output of the video acquisition unit at the different waypoints, coarsely filters the video stream, and then runs neural-network detection on each frame;
the controller collects the detection results from the edge computing device, controls the unmanned aerial vehicle and the video acquisition unit accordingly, photographs the detection objects at the different waypoints, and records each waypoint's GPS information, time, number of photographs, number of adjustments, stability, and sharpness; the final result is an accurate route along which the target detection objects can be reliably photographed in the same environment.
Compared with the prior art, the invention has the following beneficial effects:
the invention greatly saves the time for editing the production route by the route editing software in advance and removes the artificial links in the process. Adopt the automatic meticulous scheme of patrolling and examining of unmanned aerial vehicle, can realize automatic meticulous operation under the complex environment, be favorable to liberation productivity, promote and patrol and examine efficiency.
Drawings
FIG. 1 is a diagram showing the positional relationship of detection objects A and B in the example;
FIG. 2 is a flow chart of the matching calculation between two frames of detection objects at a certain time.
Detailed Description
The technical solutions in the embodiments will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the examples without making any creative effort, shall fall within the protection scope of the present invention.
Unless otherwise defined, technical or scientific terms used in the embodiments of the present application should have the ordinary meaning as understood by those having ordinary skill in the art. The use of "first," "second," and similar terms in the present embodiments does not denote any order, quantity, or importance, but rather the terms are used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. "mounted," "connected," and "coupled" are to be construed broadly and may, for example, be fixedly coupled, detachably coupled, or integrally coupled; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. "Upper," "lower," "left," "right," "lateral," "vertical," and the like are used solely in relation to the orientation of the components in the figures, and these directional terms are relative terms that are used for descriptive and clarity purposes and that can vary accordingly depending on the orientation in which the components in the figures are placed.
The system for automatically planning a fine route for unmanned aerial vehicle inspection comprises an unmanned aerial vehicle together with an edge computing device and a controller mounted on the unmanned aerial vehicle;
the unmanned aerial vehicle comprises a video acquisition unit and photographs the detection objects at different waypoints according to the settings in the route.
The edge computing device collects the output of the video acquisition unit at the different waypoints, coarsely filters the video stream, and then runs neural-network detection on each frame.
The controller collects the detection results from the edge computing device, controls the unmanned aerial vehicle and the video acquisition unit accordingly, photographs the detection objects at the different waypoints, and records each waypoint's GPS information, time, number of photographs, number of adjustments, stability, and sharpness; the final result is an accurate route along which the target detection objects can be reliably photographed in the same environment.
After takeoff, the unmanned aerial vehicle flies around the tower at a range of 3-10 meters and positions itself accurately using GPS; while hovering, the unmanned aerial vehicle sways within a range of 10-50 centimeters.
The coarse filtering of the video stream proceeds as follows:
after a waypoint is reached, the video acquisition unit performs a wide-range scan and the edge computing device starts a tower-recognition neural network, which first locates the tower closest to the unmanned aerial vehicle and then identifies the tower's type, orientation, coordinates, and size from the video stream at a recognition speed of 10 frames per second; once the tower is recognized, it is cropped out of each frame, filtering out the clutter around it.
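As a minimal sketch of this cropping step (the function name and the (x, y, w, h) bounding-box format are assumptions, not from the patent), cropping each frame to the recognized tower's bounding box discards everything else before per-object detection:

```python
import numpy as np

def crop_to_tower(frame: np.ndarray, bbox):
    """Crop a video frame to the detected tower bounding box.

    bbox is (x, y, w, h) in pixel coordinates, as a tower-recognition
    network might return it; everything outside the box is discarded,
    which filters clutter before per-object detection.
    """
    x, y, w, h = bbox
    h_img, w_img = frame.shape[:2]
    # Clamp the box to the frame so a slightly out-of-range detection
    # still yields a valid crop.
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(w_img, x + w), min(h_img, y + h)
    return frame[y0:y1, x0:x1]
```

The cropped array can then be handed directly to the object-recognition network described below.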
The neural-network detection of each frame proceeds as follows:
each detected insulator in the frame is magnified and photographed, and the edge computing device switches the neural network to the object-recognition network; the edge computing device in this embodiment is a "Miao Xiao 2".
This network identifies the detection objects in the cropped tower pictures, including insulators, tower plates, tower top corners, and tower base supports, together with their types, orientations, coordinates, sizes, and sharpness; at this point the controller records the coordinates, inclination angle, and orientation of the unmanned aerial vehicle and the inclination angle and orientation of the camera as a first pose.
The controller collects the detection results from the edge computing device and controls the unmanned aerial vehicle and the video acquisition unit as follows:
step (1): according to the position of the first insulator in the frame, the controller moves the camera so that the insulator is brought to the exact center of the image, with image recognition tracking the insulator at 10 frames per second;
step (2): zoom in and focus gradually so that the insulator grows in the frame; along the way, image recognition keeps tracking the insulator at 10 frames per second and checking whether it is sharp, while the camera's inclination angle is continuously adjusted to keep the insulator at the center of the image;
step (3): when the insulator fills 1/4 of the total picture area, i.e. its width and height are each half those of the picture, photograph the insulator;
step (4): restore the camera's focal length and inclination angle to the first pose and repeat from step (1) for the next insulator.
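Steps (1)-(4) above form a simple closed-loop controller. A hedged sketch follows, with `camera` and `detector` as hypothetical interfaces (none of these names or signatures come from the patent):

```python
def inspect_objects(camera, detector, first_pose, max_steps=200):
    """Sketch of the per-waypoint control loop described above.

    `detector.track` is assumed to return (cx, cy, w, h) for the tracked
    insulator in normalized image coordinates, and `camera` is assumed
    to accept pan/tilt, zoom, shoot, and pose-restore commands.
    """
    photos = []
    for obj_id in detector.list_objects():
        for _ in range(max_steps):
            cx, cy, w, h = detector.track(obj_id)      # ~10 fps tracking
            # Steps (1)-(2): steer the camera so the object stays centered.
            camera.pan_tilt(dx=0.5 - cx, dy=0.5 - cy)
            # Step (3): photograph once the object covers 1/4 of the
            # frame area, i.e. half the width and half the height.
            if w >= 0.5 and h >= 0.5:
                photos.append(camera.shoot())
                break
            camera.zoom_in()                           # step (2): enlarge
        camera.restore(first_pose)                     # step (4): reset pose
    return photos
```

The loop is bounded by `max_steps` so a lost track cannot stall the waypoint indefinitely (a defensive choice not stated in the patent).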
In step (1), a linear matching algorithm is used when tracking the detection objects: the detection objects of the first frame and those of the second frame generate a loss matrix, which records the loss value between every pair of detection objects across the two frames;
the more likely two detection objects are to match, the smaller the loss value; conversely, the less likely they are to match, the larger the loss value; virtual detection objects are generated at the same time in case the detection objects cannot all be matched with each other; the loss value between detection objects in different frames is determined from their types, coordinates, aspect ratios, sharpness, and colors, combined with different weights.
Suppose that between two frames, the first frame has M detection boxes and the second frame has N. Each side is padded with virtual detection boxes up to M + N: the first frame then has M + N detection boxes, of which M are real and N are virtual, and the second frame likewise has M + N detection boxes, of which N are real and M are virtual.
The best matching case is that M equals N and every detection box of the first frame is matched with a detection box of the second frame.
The worst case is that every real detection box of the first frame matches a virtual box of the second frame and every real detection box of the second frame matches a virtual box of the first frame, which amounts to no matches at all.
The objective is to minimize the loss CᵀX, subject to the constraints AX ≥ b and X ≥ 0.
Let T = M + N. Here C is a 1 × T² loss vector obtained by vectorizing the T × T loss matrix; it represents the losses between the T detection boxes of the two frames. Each loss is determined by the properties of the two detection boxes, such as type, coordinates, aspect ratio, sharpness, and color. In the matching, a detection box should match a real box of the other frame wherever possible and fall back to a virtual box only when that fails.
In the constraints, X is a T² × 1 matching vector whose elements can only be 0 or 1, indicating whether a pair of detection boxes match between the two frames: 0 means no match and 1 means a match.
A is a 0/1 matrix of size T × T², with A[i][j] = 1 when the j-th element of X belongs to the i-th group of T elements; b is a T × 1 vector whose entries are all 1.
The constraints AX ≥ b and X ≥ 0 force the sum of every group of T values in X to be 1; since X has T² entries, exactly T of them are 1. This is equivalent to constraining the matching to be one-to-one: every detection object must be matched one-to-one between the two frames. Under these conditions, the matching matrix X that minimizes the loss CᵀX can be found globally by linear programming.
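The same globally optimal one-to-one matching on the padded cost matrix can also be computed with a Hungarian-style solver instead of a general linear program. The sketch below uses SciPy's `linear_sum_assignment`; the fixed virtual-box penalty is an assumption, since the patent does not state how virtual-box losses are set:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections(cost, virtual_cost=1.0):
    """Globally match M first-frame boxes to N second-frame boxes.

    `cost` is an M x N loss matrix. Both sides are padded with virtual
    boxes (each costing `virtual_cost`, an assumed fixed penalty) to a
    square (M+N) x (M+N) matrix, so every real box either matches a real
    box or falls back to a virtual one, exactly one-to-one.
    """
    M, N = cost.shape
    T = M + N
    padded = np.full((T, T), virtual_cost)
    padded[:M, :N] = cost
    padded[M:, N:] = 0.0          # virtual-virtual pairs cost nothing
    rows, cols = linear_sum_assignment(padded)
    # Keep only the real-to-real matches.
    return [(i, j) for i, j in zip(rows, cols) if i < M and j < N]
```

A real pair whose loss exceeds the virtual penalty is deliberately left unmatched, which models a detection that disappeared between frames.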
For example, the matching calculation between two frames of detection objects at a certain time is shown in FIG. 2.
After all insulators at this waypoint have been scanned, the waypoint's GPS information, time, number of photographs, number of adjustments, stability, and sharpness are recorded; the unmanned aerial vehicle then flies to the next waypoint and photographs the insulators again from different angles. The route recorded and produced once the inspection is finished is an accurate route along which the target detection objects can be reliably photographed in the same environment.
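A minimal sketch of the per-waypoint record and the resulting replayable route (the field and class names are assumptions; the patent only lists the recorded quantities):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class WaypointRecord:
    """One waypoint's recorded data, as described above."""
    gps: Tuple[float, float, float]   # (latitude, longitude, altitude)
    timestamp: float
    photo_count: int
    adjustment_count: int
    stability: float
    sharpness: float

@dataclass
class RefinedRoute:
    """The route produced after inspection: one record per waypoint,
    replayable later in the same environment."""
    waypoints: List[WaypointRecord] = field(default_factory=list)

    def add(self, record: WaypointRecord) -> None:
        self.waypoints.append(record)
```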
The method for automatically planning a fine route with the system of this embodiment proceeds as follows:
the unmanned aerial vehicle photographs the detection objects at different waypoints according to the settings in the route;
the edge computing device collects the output of the video acquisition unit at the different waypoints, coarsely filters the video stream, and then runs neural-network detection on each frame;
the controller collects the detection results from the edge computing device, controls the unmanned aerial vehicle and the video acquisition unit accordingly, photographs the detection objects at the different waypoints, and records each waypoint's GPS information, time, number of photographs, number of adjustments, stability, and sharpness; the final result is an accurate route along which the target detection objects can be reliably photographed in the same environment.
Preferably, in the detection-object tracking algorithm, the detection object should in theory be detected in every frame of the video. In practice, insulators are hard to recognize and the recognition rate of the image-recognition algorithm is not perfect, so an object may go undetected for one or two frames, causing the tracker to lose it. In this situation, a common algorithm takes the positions of the detection objects in different frames and matches objects with similar positions across two frames, thereby tracking them. However, in pictures with many detection objects, such as a tower, tracking by position alone is easily confused. For example, the positions of detection objects A and B in the first frame are shown in FIG. 1 (cross points), and their positions in the second frame are shown in FIG. 1 (circle points). Under the nearest-distance principle, detection object B of the first frame would be matched with detection object A of the second frame, and detection object A of the first frame and detection object B of the second frame would both be lost.
To solve this problem, a linear matching algorithm is used. In the example above, detection objects A and B of the first frame and A and B of the second frame generate a loss matrix recording the loss values between the detection objects of the two frames. The more likely two detection objects are to match, the smaller the loss; conversely, the less likely they are to match, the larger the loss. Virtual detection objects are generated at the same time in case none of the detection objects match each other. The loss value between detection objects in different frames is determined from elements such as their types, coordinates, aspect ratios, sharpness, and colors, combined with different weights. The linear matching algorithm can then quickly compute the globally minimal loss under one-to-one matching, achieving the globally optimal match.
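The FIG. 1 scenario can be reproduced numerically; the coordinates and the color weight below are illustrative assumptions, not values from the patent. Nearest-distance matching swaps the crossing objects, while the weighted global matching keeps their identities:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_frames(pos1, pos2, color_mismatch, color_weight=5.0):
    """Match detections across two frames with a weighted loss.

    pos1/pos2 are (K, 2) arrays of box centers; color_mismatch[i][j] is
    1.0 when the colors differ. The weight 5.0 is an assumption -- the
    patent only says attributes are combined with different weights.
    """
    dist = np.linalg.norm(pos1[:, None] - pos2[None, :], axis=2)
    loss = dist + color_weight * color_mismatch
    rows, cols = linear_sum_assignment(loss)   # globally optimal 1-to-1
    return dist, list(cols)

# FIG. 1 scenario with illustrative coordinates: A and B move toward
# each other, so B's new position ends up nearest to A's old one.
pos1 = np.array([[0.0, 0.0], [4.0, 0.0]])        # frame 1: A, B
pos2 = np.array([[3.0, 0.0], [1.0, 0.0]])        # frame 2: A, B
mismatch = np.array([[0.0, 1.0], [1.0, 0.0]])    # A red, B blue (assumed)

dist, cols = match_frames(pos1, pos2, mismatch)
greedy = list(dist.argmin(axis=1))   # position-only nearest matching
```

Here `greedy` pairs each first-frame object with the wrong second-frame object, whereas the weighted assignment preserves identities because the color term outweighs the shorter crossed distances.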
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, improvements, and the like made within the spirit and principle of the present invention are intended to fall within its protection scope.
Claims (5)
1. The utility model provides an unmanned aerial vehicle patrols and examines meticulous airline automatic planning system which characterized in that: the system comprises an unmanned aerial vehicle, edge computing equipment arranged on the unmanned aerial vehicle and a controller;
the unmanned aerial vehicle comprises a video acquisition unit and photographs the detection objects at different waypoints according to the settings in the route;
the edge computing device collects the output of the video acquisition unit at the different waypoints, coarsely filters the video stream, and then runs neural-network detection on each frame;
the controller collects the detection results from the edge computing device, controls the unmanned aerial vehicle and the video acquisition unit accordingly, photographs the detection objects at the different waypoints, and records each waypoint's GPS information, time, number of photographs, number of adjustments, stability, and sharpness, finally obtaining an accurate route along which the target detection objects can be reliably photographed in the same environment;
the coarse filtering of the video stream proceeds as follows:
after a waypoint is reached, the video acquisition unit performs a wide-range scan and the edge computing device starts a tower-recognition neural network, which first locates the tower closest to the unmanned aerial vehicle and then identifies the tower's type, orientation, coordinates, and size from the video stream at a recognition speed of 10 frames per second; once the tower is recognized, it is cropped out of each frame, filtering out the clutter around it;
the neural-network detection of each frame proceeds as follows:
each detected insulator in the frame is magnified and photographed, and the edge computing device switches the neural network to an object-recognition network; this network identifies the detection objects in the cropped tower pictures, including insulators, tower plates, tower top corners, and tower base supports, together with their types, orientations, coordinates, sizes, and sharpness; at this point the controller records the coordinates, inclination angle, and orientation of the unmanned aerial vehicle and the inclination angle and orientation of the camera as a first pose;
the controller collects the detection results from the edge computing device and controls the unmanned aerial vehicle and the video acquisition unit as follows:
step (1): according to the position of the first detection object in the frame, the controller moves the camera so that the insulator is brought to the exact center of the image, with image recognition tracking the insulator at 10 frames per second;
step (2): zoom in and focus gradually so that the detected object grows in the frame, while image recognition keeps tracking the insulator at 10 frames per second and the camera's inclination angle is continuously adjusted to keep the insulator at the center of the image;
step (3): when the detected object fills 1/4 of the total picture area, i.e. its width and height are each half those of the picture, photograph the insulator;
step (4): restore the camera's focal length and inclination angle to the first pose and repeat from step (1) for the next detected object.
2. The system for automatically planning a fine route for unmanned aerial vehicle inspection of claim 1, wherein: after takeoff, the unmanned aerial vehicle flies around the tower at a range of 3-10 meters and positions itself accurately using GPS; while hovering, the unmanned aerial vehicle sways within a range of 10-50 centimeters.
3. The system for automatically planning a fine route for unmanned aerial vehicle inspection of claim 1, wherein: in step (1), a linear matching algorithm is used when tracking the detection objects: the detection objects of the first frame and those of the second frame generate a loss matrix, which records the loss value between every pair of detection objects across the two frames;
the more likely two detection objects are to match, the smaller the loss value; conversely, the less likely they are to match, the larger the loss value; virtual detection objects are generated at the same time in case the detection objects cannot all be matched with each other; the loss value between detection objects in different frames is determined from their types, coordinates, aspect ratios, sharpness, and colors, combined with different weights.
4. The system for automatically planning a fine route for unmanned aerial vehicle inspection of claim 1, wherein: after all detection objects at a waypoint have been scanned, the waypoint's GPS information, time, number of photographs, number of adjustments, stability, and sharpness are recorded; the unmanned aerial vehicle then flies to the next waypoint and photographs the insulators again from different angles; the route recorded and produced once the inspection is finished is an accurate route along which the target detection objects can be reliably photographed in the same environment.
5. A method for automatically planning a fine route based on the system of any one of claims 1 to 4, characterized in that the method proceeds as follows:
the unmanned aerial vehicle photographs the detection objects at different waypoints according to the settings in the route;
the edge computing device collects the output of the video acquisition unit at the different waypoints, coarsely filters the video stream, and then runs neural-network detection on each frame;
the controller collects the detection results from the edge computing device, controls the unmanned aerial vehicle and the video acquisition unit accordingly, photographs the detection objects at the different waypoints, and records each waypoint's GPS information, time, number of photographs, number of adjustments, stability, and sharpness, finally obtaining an accurate route along which the target detection objects can be reliably photographed in the same environment.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010852227.4A (CN112363526B) | 2020-08-21 | 2020-08-21 | System and method for automatically planning fine route for unmanned aerial vehicle inspection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112363526A CN112363526A (en) | 2021-02-12 |
CN112363526B true CN112363526B (en) | 2021-10-01 |
Family
ID=74516698
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010852227.4A Active CN112363526B (en) | 2020-08-21 | 2020-08-21 | System and method for automatically planning fine route for unmanned aerial vehicle inspection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112363526B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114035614B (en) * | 2022-01-10 | 2022-05-17 | 成都奥伦达科技有限公司 | Unmanned aerial vehicle autonomous inspection method and system based on prior information and storage medium |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10671066B2 (en) * | 2015-03-03 | 2020-06-02 | PreNav, Inc. | Scanning environments and tracking unmanned aerial vehicles |
CN109765930B (en) * | 2019-01-29 | 2021-11-30 | 理光软件研究所(北京)有限公司 | Unmanned aerial vehicle vision navigation |
CN110554704B (en) * | 2019-08-15 | 2022-04-29 | 成都优艾维智能科技有限责任公司 | Unmanned aerial vehicle-based fan blade autonomous inspection method |
CN110580717B (en) * | 2019-08-15 | 2022-10-21 | 成都优艾维智能科技有限责任公司 | Unmanned aerial vehicle autonomous inspection route generation method for electric power tower |
CN111044044B (en) * | 2019-12-06 | 2023-04-07 | 国网安徽省电力有限公司淮南供电公司 | Electric unmanned aerial vehicle routing inspection route planning method and device |
CN111294586B (en) * | 2020-02-10 | 2022-06-21 | Oppo广东移动通信有限公司 | Image display method and device, head-mounted display equipment and computer readable medium |
CN111401146A (en) * | 2020-02-26 | 2020-07-10 | 长江大学 | Unmanned aerial vehicle power inspection method, device and storage medium |
CN111552306A (en) * | 2020-04-10 | 2020-08-18 | 安徽继远软件有限公司 | Unmanned aerial vehicle path generation method and device supporting pole tower key component inspection |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111272148B (en) | Unmanned aerial vehicle autonomous inspection self-adaptive imaging quality optimization method for power transmission line | |
CN112164015B (en) | Monocular vision autonomous inspection image acquisition method and device and power inspection unmanned aerial vehicle | |
CN105979147A (en) | Intelligent shooting method of unmanned aerial vehicle | |
CN109765930A (en) | A kind of unmanned plane vision navigation system | |
CN107102647A (en) | Unmanned plane target tracking and controlling method based on image | |
CN112215860A (en) | Unmanned aerial vehicle positioning method based on image processing | |
CN112162565B (en) | Uninterrupted self-main-pole tower inspection method based on multi-machine collaborative operation | |
CN108549413A (en) | A kind of holder method of controlling rotation, device and unmanned vehicle | |
CN104883524B (en) | Moving target automatic tracking image pickup method and system in a kind of Online class | |
CN108377328A (en) | A kind of helicopter makes an inspection tour the target image pickup method and device of operation | |
CN108961276B (en) | Distribution line inspection data automatic acquisition method and system based on visual servo | |
CN112711267B (en) | Unmanned aerial vehicle autonomous inspection method based on RTK high-precision positioning and machine vision fusion | |
CN108898122A (en) | A kind of Intelligent human-face recognition methods | |
CN111123962A (en) | Rotor unmanned aerial vehicle repositioning photographing method for power tower inspection | |
CN112949478A (en) | Target detection method based on holder camera | |
CN112363526B (en) | System and method for automatically planning fine route for unmanned aerial vehicle inspection | |
CN110132060A (en) | A kind of method of the interception unmanned plane of view-based access control model navigation | |
CN110245592A (en) | A method of for promoting pedestrian's weight discrimination of monitoring scene | |
CN107911612A (en) | A kind of camera automatic focusing method and apparatus | |
CN114967731A (en) | Unmanned aerial vehicle-based automatic field personnel searching method | |
CN113271409B (en) | Combined camera, image acquisition method and aircraft | |
CN112733680B (en) | Model training method, extraction method, device and terminal equipment for generating high-quality face image based on monitoring video stream | |
CN113110446A (en) | Dynamic inspection method for autonomous mobile robot | |
CN112766033B (en) | Method for estimating common attention targets of downlinks in scene based on multi-view camera | |
CN111539919B (en) | Method and device for judging position and routing inspection of tower part |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||