CN116012728A - Track line identification tracking method and system applied to unmanned aerial vehicle inspection

Info

Publication number: CN116012728A
Application number: CN202310071009.0A
Authority: CN (China)
Prior art keywords: track, unmanned aerial vehicle, remote sensing, coordinates
Other languages: Chinese (zh)
Inventors: Wang Zhipeng (王志鹏), Jiang Yaopeng (姜耀鹏), Jia Limin (贾利民), Qin Yong (秦勇), Tong Lei (童磊), Jiang Dongzhu (蒋栋柱)
Current and original assignee: Beijing Jiaotong University
Application filed by Beijing Jiaotong University; priority to CN202310071009.0A
Legal status: Pending

Classifications

    • Y02T 10/40 — Engine management systems (under Y02T: climate change mitigation technologies related to transportation)
Abstract

The invention provides a track line identification and tracking method and system applied to unmanned aerial vehicle inspection, belonging to the technical field of computer-vision-based railway operation inspection and maintenance. The method acquires track line remote sensing image data; processes the acquired remote sensing image data with a pre-trained track recognition and tracking model to obtain the real track area in the remote sensing image; calculates the coordinates of the real track area; and, according to those coordinates, judges and automatically adjusts the state of the unmanned aerial vehicle gimbal attitude and camera payload parameters, realizing real-time tracking of the track line. The invention achieves high-precision, high-timeliness and high-efficiency acquisition of track fault information and ensures that the unmanned aerial vehicle completes track inspection tasks over railway track areas efficiently and to a high standard; it reduces the human-resource consumption of track inspection and the negative influence of personal negligence on inspection quality and efficiency, and improves the accuracy and uniformity of track area fault information acquisition.

Description

Track line identification tracking method and system applied to unmanned aerial vehicle inspection
Technical Field
The invention relates to the technical field of computer-vision-based railway operation inspection and maintenance, and in particular to a method and system for autonomous identification and real-time tracking of track lines based on a Largest Connected-ERFNet model, applied to unmanned aerial vehicle inspection.
Background
Traditional railway inspection methods mainly comprise manual inspection and rail inspection vehicle inspection. In manual inspection, skilled inspection workers detect disasters and hidden dangers in railway lines and auxiliary facilities by naked eye or with handheld equipment; in rail inspection vehicle inspection, special rail inspection vehicles carrying detection equipment travel along the rail. Both suffer from low efficiency, poor night-time inspection conditions, low inspection frequency, narrow inspection coverage and a low degree of automation, and both are constrained by the comprehensive maintenance window. Because rail inspection vehicles cannot inspect during train operating hours, only manual inspection is available at those times, which makes the problems of low inspection efficiency and insufficient inspection coverage even more pronounced.
In recent years, unmanned aerial vehicle technology has developed rapidly. Unmanned aerial vehicles offer high flight flexibility, low single-flight cost, wide flight coverage and operation unconstrained by train movements; using them for railway inspection can effectively overcome the drawbacks of existing methods and complement them. However, during flight an unmanned aerial vehicle is easily affected by strong wind and other factors, and the shooting projection area of the camera payload carried on its gimbal frequently deviates from the track line. If the real-time states of the gimbal and camera payload are not judged and adjusted, the acquired track area information will be incomplete, fault information will be missed or images will be non-compliant, and track inspection efficiency and reliability will ultimately fall. It is therefore necessary to autonomously identify and track the railway track line in real time, so as to guide the unmanned aerial vehicle to automatically collect fully compliant track information.
Judging the compliance of track area image information generally divides into judging the relative scale of the track area and judging its relative position. The relative scale of the track area is the proportion of the whole remote sensing image that the track area occupies; it determines the richness of the track area detail captured, and it is governed by the camera payload parameters. The relative position of the track area is its position and directional trend within the remote sensing image; it determines the uniformity and normalization of the track area in the acquired image, and it is governed by the gimbal attitude parameters.
In the traditional railway track line identification and tracking method, an operator controlling the unmanned aerial vehicle in the field uses a remote controller to view the real-time video stream from the camera payload, judges the current track line position by eye, and remotely adjusts the flight attitude and gimbal attitude parameters. In recent years unmanned aerial vehicle technology has continued to develop; for example, video streaming has become faster and remote control easier. However, the identification and tracking process still requires the operator as the controlling agent, which greatly increases human-resource cost, and operator negligence can seriously affect the reliability of the collected fault information, lowering the quality of the track inspection task. In addition, in scenes with harsh environments where the operator cannot easily move, the flight speed and range of the unmanned aerial vehicle are restricted, reducing fault information acquisition efficiency and delaying the overall progress of the track inspection task.
Disclosure of Invention
The invention aims to provide a method and system for autonomous track line identification and real-time tracking based on the Largest Connected-ERFNet model, so as to solve at least one of the technical problems in the background art.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
In one aspect, the invention provides a track line identification and tracking method applied to unmanned aerial vehicle inspection, comprising the following steps:
acquiring remote sensing image data of a track line;
processing the acquired remote sensing image data of the track line by using a pre-trained track recognition tracking model to obtain a real track area in the remote sensing image;
calculating coordinates of the real track area;
and judging and automatically adjusting the state of the unmanned aerial vehicle gimbal attitude and the camera payload parameters according to the coordinates of the real track area, thereby realizing real-time tracking of the track line.
Preferably, the pre-trained track recognition and tracking model comprises a segmentation network and a connected domain calibration network, wherein the segmentation network performs track region segmentation on the remote sensing images acquired by the unmanned aerial vehicle, and the connected domain calibration network extracts, from the track region segmentation result, the maximum connected domain with the richest information, which is the real track area.
Preferably, the connected domain calibration network extracting the maximum connected domain with the richest information from the track region segmentation result comprises:
traversing the pixels of the binary image obtained by segmentation and finding the first pixel point whose value changes, namely an edge pixel point of a connected domain; examining the pixel points in the neighborhood of that edge point and assigning points with the same pixel value to the same category, i.e. to the same connected domain; transferring the calibration subject to the adjacent points of the same category and examining their neighborhoods in turn, repeating until no further point can be calibrated, the calibrated points together forming one connected domain;
repeating the above steps until all pixels in the image have been processed, thereby obtaining all connected domains contained in the image; counting the number of pixel points in each connected domain, retaining the connected domain with the largest pixel count, identifying it as the effective connected domain representing the real track area, and setting the pixel values of the remaining, interfering connected domains to 0 to remove them.
Preferably, the state judgment and automatic adjustment of the unmanned aerial vehicle gimbal attitude and camera payload parameters comprises: calculating relevant parameters of the track line in the remote sensing image from the calculated track area coordinates; and using a gimbal attitude and payload parameter judging method to determine whether the track area position and scale information acquired by the unmanned aerial vehicle at the current moment meets the track inspection task requirements, and if not, adjusting the gimbal attitude and payload parameters of the unmanned aerial vehicle.
Preferably, the gimbal attitude and payload parameter judging method comprises the following steps: if the average relative coordinate difference between the start and end points of the two track lines at the current moment is smaller than a threshold, judging that the track area occupies too small a portion of the frame at the current moment, and adjusting the zoom focal length of the unmanned aerial vehicle camera payload;
calculating the relative coordinates of the center point of the current track area from the relative coordinates of the two track lines, and comparing the track area center coordinates with the coordinates of the center of the whole image; if the absolute difference between the two is larger than a threshold, judging that the track area at the current moment deviates from the image center, and adjusting the pitch angle of the unmanned aerial vehicle gimbal according to the direction of deviation.
Preferably, the gimbal attitude and payload parameter judging method further comprises: in the unmanned aerial vehicle track inspection task, the track line captured by the gimbal payload presents a transverse trend; the difference between the start and end points of the track line in the Y direction is calculated and divided by the image width, giving a trigonometric tangent; if the absolute value of this tangent is larger than a threshold, judging that the track area at the current moment is inclined, and adjusting the yaw angle of the unmanned aerial vehicle gimbal according to the direction of inclination.
Preferably, calculating the coordinates of the real track area comprises: representing the track area by the endpoint coordinates of the two track lines in the remote sensing image; first traversing the pixels at the image edges perpendicular to the track line trend to obtain the positions where the pixel value changes abruptly; then judging the order in which these positions appear, identifying the first and last pixel points with abrupt value changes as the endpoints of the two track lines, and calculating the relative coordinates of these points in the image, namely the coordinates of the real track area.
In a second aspect, the present invention provides a track line identification and tracking system applied to unmanned aerial vehicle inspection, comprising:
the acquisition module is used for acquiring the remote sensing image data of the track line;
the screening module is used for processing the acquired remote sensing image data of the track line by utilizing a pre-trained track recognition tracking model to obtain a real track area in the remote sensing image;
the calculation module is used for calculating the coordinates of the real track area;
and the adjusting module is used for judging and automatically adjusting the state of the unmanned aerial vehicle gimbal attitude and camera payload parameters according to the coordinates of the real track area, thereby realizing real-time tracking of the track line.
In a third aspect, the present invention provides a non-transitory computer readable storage medium for storing computer instructions which, when executed by a processor, implement a track line identification tracking method as described above for use in unmanned aerial vehicle inspection.
In a fourth aspect, the invention provides a computer program product comprising a computer program for implementing, when run on one or more processors, a track line identification tracking method applied to unmanned aerial vehicle inspection as described above.
In a fifth aspect, the present invention provides an electronic device, comprising: a processor, a memory, and a computer program; wherein the processor is connected to the memory, and the computer program is stored in the memory, and when the electronic device is running, the processor executes the computer program stored in the memory, so that the electronic device executes the instructions for implementing the track line identification tracking method applied to unmanned aerial vehicle inspection as described above.
The invention has the beneficial effects that: high-precision, high-timeliness and high-efficiency acquisition of unmanned aerial vehicle track fault information is realized, applicable to various complex railway scenes, and the unmanned aerial vehicle is guaranteed to complete track inspection tasks over railway track areas efficiently and to a high standard; human-resource consumption in the track inspection task is reduced, the negative influence of personal negligence on inspection quality and efficiency is avoided, and the accuracy and uniformity of track area fault information acquisition are greatly improved.
The advantages of additional aspects of the invention will be set forth in part in the description which follows, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of overall steps of a method for autonomous identification and real-time tracking of a track line according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a flight inspection method according to an embodiment of the invention.
Fig. 3 is a schematic diagram of the deep learning part of the Largest Connected-ERFNet model according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a track area division result according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a screening and extracting result of a track area division result according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of the unmanned aerial vehicle gimbal attitude and camera payload attitude angles according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of a method for acquiring a remote sensing image of a track line by an unmanned aerial vehicle according to an embodiment of the invention.
Fig. 8 is a diagram showing experimental results of autonomous identification and real-time tracking of a track line according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements throughout or elements having like or similar functionality. The embodiments described below by way of the drawings are exemplary only and should not be construed as limiting the invention.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, and/or groups thereof.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
In order that the invention may be readily understood, a further description of the invention will be rendered by reference to specific embodiments that are illustrated in the appended drawings and are not to be construed as limiting embodiments of the invention.
It will be appreciated by those skilled in the art that the drawings are merely schematic representations of examples and that the elements of the drawings are not necessarily required to practice the invention.
Example 1
In this embodiment 1, first, a track line identification and tracking system applied to unmanned aerial vehicle inspection is provided, including:
The acquisition module is used for acquiring the remote sensing image data of the track line;
the screening module is used for processing the acquired remote sensing image data of the track line by utilizing a pre-trained track recognition tracking model to obtain a real track area in the remote sensing image;
the calculation module is used for calculating the coordinates of the real track area;
and the adjusting module is used for judging and automatically adjusting the state of the unmanned aerial vehicle gimbal attitude and camera payload parameters according to the coordinates of the real track area, thereby realizing real-time tracking of the track line.
In this embodiment 1, the track line identification and tracking method applied to unmanned aerial vehicle inspection is implemented using the system described above, and comprises:
acquiring remote sensing image data of a track line;
processing the acquired remote sensing image data of the track line by using a pre-trained track recognition tracking model to obtain a real track area in the remote sensing image;
calculating coordinates of the real track area;
and judging and automatically adjusting the state of the unmanned aerial vehicle gimbal attitude and the camera payload parameters according to the coordinates of the real track area, thereby realizing real-time tracking of the track line.
The pre-trained track recognition and tracking model comprises a segmentation network and a connected domain calibration network: the segmentation network performs track region segmentation on the remote sensing images acquired by the unmanned aerial vehicle, and the connected domain calibration network extracts, from the track region segmentation result, the maximum connected domain with the richest information, which is the real track area.
The connected domain calibration network extracting the maximum connected domain with the richest information from the track region segmentation result comprises:
traversing the pixels of the binary image obtained by segmentation and finding the first pixel point whose value changes, namely an edge pixel point of a connected domain; examining the pixel points in the neighborhood of that edge point and assigning points with the same pixel value to the same category, i.e. to the same connected domain; transferring the calibration subject to the adjacent points of the same category and examining their neighborhoods in turn, repeating until no further point can be calibrated, the calibrated points together forming one connected domain;
repeating the above steps until all pixels in the image have been processed, thereby obtaining all connected domains contained in the image; counting the number of pixel points in each connected domain, retaining the connected domain with the largest pixel count, identifying it as the effective connected domain representing the real track area, and setting the pixel values of the remaining, interfering connected domains to 0 to remove them.
The state judgment and automatic adjustment of the unmanned aerial vehicle gimbal attitude and camera payload parameters comprises: calculating relevant parameters of the track line in the remote sensing image from the calculated track area coordinates; and using the gimbal attitude and payload parameter judging method to determine whether the track area position and scale information acquired by the unmanned aerial vehicle at the current moment meets the track inspection task requirements, and if not, adjusting the gimbal attitude and payload parameters of the unmanned aerial vehicle.
The gimbal attitude and payload parameter judging method comprises the following steps: if the average relative coordinate difference between the start and end points of the two track lines at the current moment is smaller than a threshold, judging that the track area occupies too small a portion of the frame at the current moment, and adjusting the zoom focal length of the unmanned aerial vehicle camera payload;
calculating the relative coordinates of the center point of the current track area from the relative coordinates of the two track lines, and comparing the track area center coordinates with the coordinates of the center of the whole image; if the absolute difference between the two is larger than a threshold, judging that the track area at the current moment deviates from the image center, and adjusting the pitch angle of the unmanned aerial vehicle gimbal according to the direction of deviation.
The gimbal attitude and payload parameter judging method further comprises: in the unmanned aerial vehicle track inspection task, the track line captured by the gimbal payload presents a transverse trend; the difference between the start and end points of the track line in the Y direction is calculated and divided by the image width, giving a trigonometric tangent; if the absolute value of this tangent is larger than a threshold, judging that the track area at the current moment is inclined, and adjusting the yaw angle of the unmanned aerial vehicle gimbal according to the direction of inclination.
Calculating the coordinates of the real track area comprises: representing the track area by the endpoint coordinates of the two track lines in the remote sensing image; first traversing the pixels at the image edges perpendicular to the track line trend to obtain the positions where the pixel value changes abruptly; then judging the order in which these positions appear, identifying the first and last pixel points with abrupt value changes as the endpoints of the two track lines, and calculating the relative coordinates of these points in the image, namely the coordinates of the real track area.
Example 2
In order to realize high-precision, high-timeliness and high-efficiency acquisition of unmanned aerial vehicle track fault information applicable to various complex railway scenes, this embodiment 2 provides an autonomous railway track line identification and real-time tracking method for unmanned aerial vehicle remote sensing images. An unmanned aerial vehicle carrying a visible light camera payload inspects along and above the side of the railway line, acquiring visible track remote sensing image data quickly, efficiently and over a wide area. After the original remote sensing image data are collected, an improved Largest Connected-ERFNet semantic segmentation model is proposed; once training is complete, the model is deployed on the onboard terminal of the unmanned aerial vehicle and extracts the real track area from remote sensing images collected in real time during flight. On this basis, information such as the relative coordinates of the two track lines is automatically calculated to judge the compliance of the track area image information at the current moment, completing autonomous track line identification for a single frame. Continuous autonomous identification over the real-time video stream collected by the unmanned aerial vehicle, together with corresponding adjustment schemes for abnormal identification results, realizes real-time tracking of the track line.
The method specifically comprises the following steps:
Step 1: use an unmanned aerial vehicle carrying a camera payload to inspect along and above the side of a railway line and obtain visible line remote sensing image data;
Step 2: preprocess the original remote sensing image data;
Step 3: set the hyperparameters of the Largest Connected-ERFNet model and train it;
Step 4: configure the trained Largest Connected-ERFNet model on the unmanned aerial vehicle onboard terminal;
Step 5: perform semantic segmentation of the track area in the remote sensing image and screen the real track area;
Step 6: calculate the coordinates of the real track area in the remote sensing image;
Step 7: judge and automatically adjust the state of the unmanned aerial vehicle gimbal attitude and camera payload parameters.
The scheme in step 1 for acquiring visible line remote sensing image data along and above the side of the railway line with an unmanned aerial vehicle carrying a visible light camera payload is as follows:
The inspection method of this embodiment uses an unmanned aerial vehicle carrying a visible light camera payload system to perform flight inspection along and above the side of the railway line. The flight track is parallel to the track direction, 80 m from the line edge and 60-80 m above the ground, at a flight speed of 3-5 m/s, flying back and forth on both sides of the line to obtain relatively complete original line remote sensing image data. During operation a ground base station must also be erected, with a control radius of 30 km.
The preprocessing scheme for the original remote sensing image data in step 2 is as follows:
Data expansion is performed on the original remote sensing images with data enhancement methods, all images are unified in size, and the track areas in the remote sensing images are calibrated to meet the dataset requirements of subsequent Largest Connected-ERFNet model training. To improve the robustness of the autonomous identification and real-time tracking method in complex and diverse scenes, so that the unmanned aerial vehicle can adapt smoothly to different scene conditions in actual flight inspection, noise-addition processing and scene-occlusion processing are applied to part of the original remote sensing images.
The hyperparameter setting and training scheme of the Largest Connected-ERFNet model in step 3 is as follows:
The remote sensing images obtained after preprocessing are used as the training set of the Largest Connected-ERFNet model; an appropriate learning rate, number of iterations, loss function and other relevant hyperparameters are set in the training program to guarantee the training rate and convergence of the model and to avoid overfitting.
The onboard terminal configuration scheme of the trained Largest Connected-ERFNet model in step 4 is as follows:
The execution language of the unmanned aerial vehicle onboard environment program is C++, while the model is written and trained in Python, so environment and language adaptation is required in actual use. The libtorch library provided by PyTorch can convert a deep learning model trained with PyTorch into a form that C++ code can call and run. First, the form conversion of the Largest Connected-ERFNet model is completed through a model conversion program; then the initialization configuration of the unmanned aerial vehicle control program is completed, the real-time video stream acquisition program for the gimbal payload is deployed, and the Largest Connected-ERFNet model is embedded into the single-frame track area segmentation and recognition program, completing the configuration and deployment of the model in the onboard environment.
The semantic segmentation of the track area in the remote sensing image and the screening scheme for the real track area in step 5 are as follows:
First, the ERFNet part of the Largest Connected-ERFNet model configured on the onboard terminal performs semantic segmentation on the remote sensing image acquired by the visible light camera payload, yielding an initially divided binarized image; then the maximum connected domain calibration part extracts the track area with the largest scale from the segmented result. This area is taken to be the track area that best matches the real situation; the rest are irrelevant noise areas.
The coordinate calculation scheme for the real track area in the remote sensing image in step 6 is as follows:
A railway line usually has at least two parallel rails, and the track area is represented by the endpoint coordinates of the two track lines in the remote sensing image. First, the pixels at the image edges perpendicular to the track line trend are traversed to obtain the positions where the pixel value changes abruptly. The order of appearance of these positions is then judged: the first and last pixel points with abrupt value changes are identified as the endpoints of the two track lines, and the relative coordinates of these points in the image are calculated.
The state judgment and automatic adjustment scheme for the unmanned aerial vehicle gimbal attitude and camera payload parameters in step 7 is as follows:
First, relevant parameters of the track line in the remote sensing image are calculated from the computed track area coordinates; then a gimbal attitude and payload parameter judging system, designed for remote sensing images acquired in unmanned aerial vehicle inspection, determines whether the track area position and scale information acquired at the current moment meets the track inspection task requirements, and corresponding preset schemes automatically adjust the gimbal attitude and payload parameters under different conditions.
In the remote sensing images acquired by the unmanned aerial vehicle, the abnormal conditions in which track area position and scale information fail to meet the inspection task requirements fall into three classes: the track area occupies too small a portion of the frame, the track area deviates from the image center, and the track area trend is inclined. For each of the three conditions, whether the gimbal attitude and payload parameters need adjustment is judged from different parameter values. The basic flow of the gimbal attitude and payload parameter judging system is as follows:
The start and end positions of the two track lines in the remote sensing image are taken as the first-stage judgment feature. If the average relative coordinate difference between the start and end points of the two track lines at the current moment is smaller than a threshold, the track area is judged to occupy too small a portion of the frame, and the zoom focal length of the camera payload is adjusted. Taking track lines with a horizontal trend in the remote sensing image as an example, the average relative coordinate difference between the start and end points of the two track lines is calculated as:
$$L_1 = \frac{\left| y_1^a - y_1^b \right| + \left| y_2^a - y_2^b \right|}{2}$$

where $L_1$ is the average relative coordinate difference between the start and end points of the two track lines, $(x_1^a, y_1^a)$ and $(x_2^a, y_2^a)$ are the relative coordinates of the start and end points of one track line, and $(x_1^b, y_1^b)$ and $(x_2^b, y_2^b)$ are the relative coordinates of the start and end points of the other track line.
The center point position of the track area in the remote sensing image is taken as the second-stage judgment feature. The relative coordinates of the center point of the current track area are calculated from the relative coordinates of the two track lines, and the track area center coordinates are compared with the coordinates of the center of the whole image. If the absolute difference between the two is larger than a threshold, the track area at the current moment is judged to deviate from the image center, and the pitch angle of the unmanned aerial vehicle gimbal is adjusted according to the direction of deviation. Taking track lines with a horizontal trend in the remote sensing image as an example, the relative coordinate of the track area center point is calculated as:

$$L_2 = \frac{y_1^a + y_2^a + y_1^b + y_2^b}{4}$$
The slope of the track line in the remote sensing image is taken as the third-stage judgment feature. In the unmanned aerial vehicle track inspection task, the track line captured by the gimbal payload presents a transverse trend, so the difference between the start and end points of the track line in the Y direction is calculated and divided by the image width, giving a trigonometric tangent. If the absolute value of this tangent is larger than a threshold, the track area at the current moment is judged to be inclined, and the yaw angle of the unmanned aerial vehicle gimbal is adjusted according to the direction of inclination. Taking track lines with a horizontal trend in the remote sensing image as an example, the track line slope $L_3$ is calculated as:

$$L_3 = \frac{y_2^a - y_1^a}{W}$$

where the numerator is the Y-direction difference between the start and end points of a track line (in pixels) and $W$ is the width of the image.
State judgment and automatic adjustment of the unmanned aerial vehicle gimbal attitude and camera payload parameters are thus completed by the gimbal attitude and payload parameter judging system.
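The three-stage judgment can be summarized in code. Below is a minimal sketch under stated assumptions: the endpoints come from the coordinate calculation of step 6 in pixel coordinates, the threshold values are illustrative, and the returned action labels stand in for the actual gimbal and zoom commands of the onboard control program.

```python
import math

def judge_gimbal_and_payload(rail_a, rail_b, W, H,
                             t_scale=0.15, t_center=0.10, t_tilt=0.05):
    """rail_a, rail_b: ((x1, y1), (x2, y2)) start/end pixel coordinates of the
    two track lines; W, H: image width/height; thresholds are illustrative."""
    (x1a, y1a), (x2a, y2a) = rail_a
    (x1b, y1b), (x2b, y2b) = rail_b
    actions = []

    # Stage 1: average start/end separation L1 of the two rails; a small
    # value means the track occupies too little of the frame -> zoom in.
    L1 = (abs(y1a - y1b) + abs(y2a - y2b)) / 2
    if L1 < t_scale * H:
        actions.append("increase zoom focal length")

    # Stage 2: track-area centre L2 compared with the image centre -> pitch.
    L2 = (y1a + y2a + y1b + y2b) / 4
    if abs(L2 - H / 2) > t_center * H:
        actions.append("pitch gimbal " + ("up" if L2 < H / 2 else "down"))

    # Stage 3: slope L3 = tan(inclination) over the image width -> yaw.
    L3 = (y2a - y1a) / W
    if abs(L3) > t_tilt:
        actions.append("yaw gimbal to counter a tilt of %.1f degrees"
                       % math.degrees(math.atan(L3)))
    return actions
```

Called once per frame on the endpoint coordinates from step 6, an empty result means the current frame already meets the inspection requirements.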
In summary, embodiment 2 realizes high-precision, high-timeliness and high-efficiency acquisition of unmanned aerial vehicle track fault information, applicable to various complex railway scenes, and guarantees that the unmanned aerial vehicle can complete track inspection tasks over railway track areas efficiently and to a high standard. Compared with manual track line identification and tracking, the onboard program replaces the operator: human-resource consumption in the inspection task is reduced, the operator's main task changes from controlling the unmanned aerial vehicle to supervising its flight, the same operator can sustain inspection flight tasks for long periods, the negative influence of personal negligence on inspection quality and efficiency is avoided, and the accuracy and uniformity of track area fault information acquisition are greatly improved. Compared with existing track line identification and tracking algorithms based on edge detection, the method adopts deep-learning image semantic segmentation and has strong adaptability and precision with respect to the surrounding environment: equipment around the track such as guardrails or the catenary does not greatly interfere with the autonomous identification result, and this environmental adaptability allows the method to be applied over a large part of the railway. The method is hardly affected by complex scenes and can work even when the track line is partially occluded, which is difficult for edge-detection-based target recognition, whose narrower range of feasible detection conditions limits the flexibility of inspection and maintenance scenarios. In addition, the method can run while trains are operating, without affecting train operation, which greatly releases line capacity and is important for high-speed and high-density railways.
Example 3
As shown in fig. 1, embodiment 3 provides a track line identification and tracking method for an inspection unmanned aerial vehicle, comprising the following steps:
step 1: and (3) using the unmanned aerial vehicle carrying the camera load to patrol and obtain visible line remote sensing image data along the side upper part of the railway line.
Step 2: and preprocessing original remote sensing image data.
Step 3: hyper-parameter setup and training of Largest Connected-ERFNet model.
Step 4: and training the unmanned aerial vehicle-mounted end configuration of the completed Largest Connected-ERFNet model.
Step 5: semantic segmentation of track areas in remote sensing images and real track area screening.
Step 6: and calculating coordinates of a real track area in the remote sensing image.
Step 7: and judging and automatically adjusting the states of the unmanned aerial vehicle holder posture and the camera load parameters.
As shown in fig. 2, this inspection method uses an unmanned aerial vehicle equipped with a visible light camera payload system to perform flight inspection along and above the side of the railway line. The flight track is parallel to the track direction, 80 m from the line edge and 60-80 m above the ground, at a flight speed of 3-5 m/s, flying back and forth on both sides of the line to obtain relatively complete original line remote sensing image data. During operation a ground base station must also be erected, with a control radius of 30 km.
Data expansion is performed on the original remote sensing images with data enhancement methods, all images are unified in size, and the track areas in the remote sensing images are calibrated to meet the dataset requirements of subsequent Largest Connected-ERFNet model training. To improve the robustness of the autonomous identification and real-time tracking method in complex and diverse scenes, so that the unmanned aerial vehicle can adapt smoothly to different scene conditions in actual flight inspection, noise-addition processing and scene-occlusion processing are applied to part of the original remote sensing images.
The remote sensing images obtained after preprocessing are used as the training set of the Largest Connected-ERFNet model, and an appropriate learning rate, number of iterations, loss function and other relevant hyperparameters are set in the training program to guarantee the training rate and convergence of the model and avoid overfitting. The Largest Connected-ERFNet model and the training program are written in Python, and deep learning training is completed using the PyTorch framework and an NVIDIA RTX 3090 graphics card.
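A hedged sketch of such a training setup is given below; `ERFNet` stands in for the segmentation part of the model and `train_loader` for the preprocessed remote-sensing dataset, and the learning rate, epoch count and loss function are typical choices rather than the values used in this embodiment.

```python
import torch
import torch.nn as nn

# `ERFNet` and `train_loader` are placeholders for the model and dataset of
# this embodiment; all hyperparameter values are illustrative assumptions.
model = ERFNet(num_classes=2).cuda()                       # track / background
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()                          # per-pixel segmentation loss

for epoch in range(150):                                   # iteration count: illustrative
    for images, masks in train_loader:                     # preprocessed training set
        optimizer.zero_grad()
        loss = criterion(model(images.cuda()), masks.cuda())
        loss.backward()
        optimizer.step()
```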
The Largest Connected-ERFNet model framework: the model is designed for the unmanned aerial vehicle railway track inspection scene and the characteristics of its remote sensing images, and consists of two parts, ERFNet and Largest Connected Component.
The ERFNet part is the deep learning content of the Largest Connected-ERFNet model and is responsible for track region segmentation of the remote sensing images acquired by the unmanned aerial vehicle, realizing a coarse division of the track region; the maximum connected domain calibration (Largest Connected Component) part is the result-optimization content of the model, aimed at the irrelevant background interference caused by small target objects in the remote sensing images.
As shown in fig. 3, the basic framework of the ERFNet deep learning part of the Largest Connected-ERFNet model can be summarized as follows:
and randomly extracting a corresponding number of remote sensing images from the original training data set, and firstly encoding the images of the input model. The coding part consists of a downsampling module and a residual module, and 16 layers are formed. The downsampling module is realized by adopting a 3x3 convolution operation and a 2x2 maximum pooling operation, is responsible for carrying out rough extraction on the characteristics of a sample image, and has 3 downsampling operations in total in the coding part;
The residual error module of the coding part does not adopt a conventional residual error module, but adopts a brand new residual error module called Non-bottleneck-1D, and in order to improve the accuracy of network extraction characteristic information, the last two blocks in the Non-bottleneck-1D are uniformly replaced by one-dimensional cavity convolution, so that the refined extraction of sample image characteristics is realized;
and after the characteristic extraction of the sample image is finished through the coding part, decoding the characteristic image. The decoding part consists of an up-sampling module and a residual module, and has 7 layers in total. The up-sampling part only has the function of adjusting the fineness and matching with the input, and does not adopt the cavity convolution or the maximum anti-pooling, but adopts the simple deconvolution with the step length of 2. The residual error module is consistent with the coding part and adopts Non-bottleneck-1D to refine the image characteristics of the up-sampling conversion;
through up-sampling and residual operation of the decoding part, the resolution of the generated output image is returned to the image level of the initial input, and different types of areas in the image are marked with different colors to be distinguished, so that the semantic segmentation purpose of different types of objects in the image is realized.
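For reference, a sketch of the Non-bottleneck-1D residual block follows, written after the published ERFNet design (factorized 3x1/1x3 convolutions, with the dilation carried by the second pair); the dropout rate is an illustrative value, not one fixed by this embodiment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonBottleneck1D(nn.Module):
    """Non-bottleneck-1D residual block as published for ERFNet; values here
    are a sketch, not the exact configuration of this embodiment."""
    def __init__(self, channels, dilation=1, dropout=0.1):
        super().__init__()
        self.conv3x1_1 = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))
        self.conv1x3_1 = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))
        self.bn1 = nn.BatchNorm2d(channels)
        # the second factorized pair carries the one-dimensional dilation
        self.conv3x1_2 = nn.Conv2d(channels, channels, (3, 1),
                                   padding=(dilation, 0), dilation=(dilation, 1))
        self.conv1x3_2 = nn.Conv2d(channels, channels, (1, 3),
                                   padding=(0, dilation), dilation=(1, dilation))
        self.bn2 = nn.BatchNorm2d(channels)
        self.drop = nn.Dropout2d(dropout)

    def forward(self, x):
        out = F.relu(self.conv3x1_1(x))
        out = F.relu(self.bn1(self.conv1x3_1(out)))
        out = F.relu(self.conv3x1_2(out))
        out = self.bn2(self.conv1x3_2(out))
        return F.relu(x + self.drop(out))      # residual connection
```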
After training, track area division is performed on the original remote sensing image; the result is shown in fig. 4. Fig. 4 (a) is the original remote sensing image, and fig. 4 (b) is the image after region division.
The unmanned aerial vehicle onboard environment is Ubuntu 18.04 and the program execution language is C++, while the model is written and trained in Python, so environment and language adaptation is required in actual use. The libtorch library provided by PyTorch can convert a deep learning model trained with PyTorch into a form that C++ code can call and run.
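A minimal sketch of this form conversion is shown below: the trained model is traced into a TorchScript file that the onboard C++ program can load through libtorch's `torch::jit::load`. The file paths and input resolution are assumptions for illustration.

```python
import torch

# Assumed checkpoint path; the file is assumed to hold the full module.
model = torch.load("largest_connected_erfnet.pth", map_location="cpu").eval()
example = torch.rand(1, 3, 360, 640)           # dummy input frame (assumed size)
traced = torch.jit.trace(model, example)       # convert to a TorchScript form
traced.save("largest_connected_erfnet.pt")     # loaded in C++ via torch::jit::load
```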
First, the form conversion of the Largest Connected-ERFNet model is completed through the model conversion program; then the initialization configuration of the unmanned aerial vehicle control program is completed, the real-time video stream acquisition program for the gimbal payload is deployed, and the Largest Connected-ERFNet model is embedded into the single-frame track area segmentation and recognition program, completing the configuration and deployment of the model in the onboard environment. The configuration information of the unmanned aerial vehicle equipment and the onboard environment is shown in Table 1.
Table 1: configuration of the unmanned aerial vehicle equipment and onboard environment (presented as an image in the original publication).
Step 5 comprises: first, the ERFNet part of the Largest Connected-ERFNet model coarsely divides the track area from the remote sensing image; then the Largest Connected Component part of the model extracts the maximum connected domain from the segmentation result, which is identified as the real track area, the remaining connected domains being noise interference. The basic flow of the Largest Connected Component extraction of the maximum connected domain is as follows:
1) The image is regarded as an overall area; adjacent pixels with the same pixel value within the area constitute small regions called connected domains. After track area segmentation the remote sensing image becomes a binarized image with pixel values of 0 or 255, and the pixel neighborhood relation may be four-adjacency or eight-adjacency;
2) the pixels of the binarized image are traversed to find the first pixel point whose value changes, namely an edge pixel point of a connected domain. The pixel points in the neighborhood of this point are examined, and points with the same pixel value are assigned to the same category, i.e. to the same connected domain. The calibration subject is then transferred to the adjacent points of the same category, whose neighborhoods are examined in turn; this repeats until no further point can be calibrated, and the calibrated points together form one small connected domain;
3) process 2) is repeated until all pixels in the image have been processed, yielding all connected domains contained in the image. The number of pixel points in each connected domain is counted, the connected domain with the largest pixel count is retained and identified as the effective connected domain representing the real track area, and the pixel values of the remaining, interfering connected domains are set to 0 to remove them.
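A compact equivalent of this flow is sketched below. The embodiment describes an explicit neighborhood-calibration traversal; the sketch reaches the same result on a 0/255 binary mask with OpenCV's connected-component labelling (eight-adjacency) rather than a hand-written traversal.

```python
import cv2
import numpy as np

def keep_largest_component(binary):
    """binary: uint8 mask, 255 = track pixels, 0 = background."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    if num <= 1:                              # no foreground component at all
        return binary
    areas = stats[1:, cv2.CC_STAT_AREA]       # skip label 0 (background)
    largest = 1 + int(np.argmax(areas))       # component with the most pixels
    cleaned = np.zeros_like(binary)
    cleaned[labels == largest] = 255          # keep the real track area
    return cleaned                            # interfering components set to 0
```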
In step 5, the Largest Connected Component part extracts the maximum connected domain, completing the screening extraction of the real track area in the remote sensing image after coarse division; the screening extraction result is shown in fig. 5.
Step 6 comprises: a railway line usually has at least two parallel rails; because the track line occupies a relatively large portion of the frames acquired during unmanned aerial vehicle inspection, its curvature is not obvious and it can be regarded as a straight track line. The endpoint coordinates of the two track lines of the track area are calculated as follows:
1) The pixels at the image edges perpendicular to the track line trend are traversed to obtain the positions where the pixel value changes abruptly. Taking a horizontally trending track line as an example, since the track lines run horizontally, the pixels at the two vertical edges of the remote sensing image are traversed;
2) because the region extracted by track segmentation is not necessarily a regular rectangle, traversal of an image edge may yield more than two pixel points with abrupt value changes; the first and the last such pixel points are taken as the endpoints, on that side, of the two different track lines, and the endpoints on the other side are found in the same way;
3) the relative coordinates of the endpoints of the two track lines in the image are calculated. Since the endpoints lie on the image edges, one coordinate value is fixed and the other must be computed. Taking a horizontally trending track line as an example, the two track line endpoints lie on the vertical edges of the remote sensing image, so the X value of their relative coordinates is 0 or W, where W is the image width; the Y value of the relative coordinates is calculated as a ratio:
$$Y = \frac{y}{H}$$

where y is the pixel coordinate of the endpoint obtained by traversal and H is the height of the remote sensing image.
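The traversal and ratio calculation can be sketched as follows for a horizontally trending track; the function name and return layout are illustrative, operating on the cleaned binary mask from step 5.

```python
import numpy as np

def edge_endpoints(mask):
    """mask: binary track mask (255 = track). Returns the relative Y values
    (y / H) of the first and last pixel-value transitions on the left and
    right image edges, i.e. the endpoints of the two rails on each side."""
    H, W = mask.shape
    result = {}
    for name, col in (("left", mask[:, 0]), ("right", mask[:, W - 1])):
        # rows where the 0/255 value changes along this vertical edge
        jumps = np.flatnonzero(np.diff(col.astype(np.int16)) != 0) + 1
        if len(jumps) >= 2:
            result[name] = (jumps[0] / H, jumps[-1] / H)   # Y = y / H
    return result
```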
Through the coordinate calculation of the real track area in the remote sensing image, the transformation of the track area from visual image information to numerical information is completed.
Step 7 comprises: first, relevant parameters of the track line in the remote sensing image are calculated from the computed track area coordinates; then the gimbal attitude and payload parameter judging system, designed for remote sensing images acquired in unmanned aerial vehicle inspection, determines whether the track area position and scale information acquired at the current moment meets the track inspection task requirements, and corresponding preset schemes automatically adjust the gimbal attitude and payload parameters under different conditions. The unmanned aerial vehicle gimbal attitude and camera payload attitude angles are shown in fig. 6.
In the remote sensing images acquired by the unmanned aerial vehicle, the abnormal conditions in which track area position and scale information fail to meet the inspection task requirements fall into three classes: the track area occupies too small a portion of the frame, the track area deviates from the image center, and the track area trend is inclined. For each of the three conditions, whether the gimbal attitude and payload parameters need adjustment is judged from different parameter values. The basic flow of the proposed judging system can be summarized as follows:
1) And taking the starting and ending positions of the two track lines in the remote sensing image as a first-stage judging characteristic. If the average relative coordinate difference value of the starting point and the ending point of the two track lines at the current moment is smaller than the threshold value, the fact that the drawing of the track area at the current moment is too small is judged, and the zoom focal length of the load of the unmanned aerial vehicle camera is adjusted.
2) And taking the central point position of the track area in the remote sensing image as a second-stage judging characteristic. And calculating the relative coordinates of the central point of the current track area according to the relative coordinates of the two track lines in the track area, and comparing the coordinates of the central point of the track area with the coordinates of the central point of the whole image. If the absolute value of the difference value between the coordinates of the central point of the track area and the coordinates of the central point of the whole image is larger than a threshold value, the track area at the current moment is judged to deviate from the center of the image, and the pitch angle value of the unmanned aerial vehicle holder is adjusted according to the deviation direction of the track area.
3) The slope of the track line in the remote sensing image is taken as the third-stage judging feature. In the unmanned aerial vehicle track inspection task, the track line captured by the cradle head load runs transversely, so the difference between the start and end points of the track line in the Y direction is computed and divided by the image width to obtain a tangent value. If the absolute value of this tangent exceeds the threshold value, the track area at the current moment is judged to be inclined, and the yaw angle of the unmanned aerial vehicle cradle head is adjusted according to the inclination direction of the track area.
The condition judgment and automatic adjustment of the unmanned aerial vehicle cradle head attitude and camera load parameters are thus completed by the cradle head attitude and load parameter judging system.
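A minimal sketch of this three-stage check, assuming normalized checks on pixel endpoint coordinates; the threshold values t1 to t3 and the returned action strings are illustrative placeholders, not values taken from the patent.

```python
def judge_adjustment(a_start, a_end, b_start, b_end,
                     W, H, t1=0.15, t2=0.10, t3=0.05):
    """Three-stage cradle head attitude / load parameter check.

    a_*, b_*: (x, y) endpoint pixel coordinates of the two track lines.
    Thresholds t1-t3 are illustrative placeholders.
    """
    actions = []

    # Stage 1: average rail separation at start and end -> zoom focal length.
    L1 = (abs(a_start[1] - b_start[1]) + abs(a_end[1] - b_end[1])) / 2
    if L1 / H < t1:
        actions.append("increase camera zoom focal length")

    # Stage 2: track-area center vs image center -> pitch angle.
    L2 = (a_start[1] + a_end[1] + b_start[1] + b_end[1]) / 4
    if abs(L2 - H / 2) / H > t2:
        actions.append("adjust pitch angle toward track area")

    # Stage 3: track-line slope (Y difference over image width) -> yaw angle.
    L3 = (a_end[1] - a_start[1]) / W
    if abs(L3) > t3:
        actions.append("adjust yaw angle against inclination")

    return actions
```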
In this embodiment, the method is applied to a section of an operating railway line to verify the proposed Largest Connected-ERFNet model-based autonomous track line identification and real-time tracking method. The specific steps are as follows:
step 1: and (3) using the unmanned aerial vehicle carrying the camera load to patrol and obtain visible line remote sensing image data along the side upper part of the railway line. The flight track is parallel to the track direction, is 80m away from the line edge, is 80m away from the ground, has the flight speed of 4m/s, and flies back and forth at two sides of the line to obtain line point cloud original data. When in operation, the ground base station is erected, and the control radius is 30km. In the method for acquiring the track line remote sensing image by the unmanned aerial vehicle, as shown in fig. 7, in the process of acquiring the track line remote sensing image by the unmanned aerial vehicle, the spatial positions of the unmanned aerial vehicle and the ground track are expressed by a northeast coordinate system (NED), and because the posture of a camera itself is inclined when the track line is shot by a camera carried by a tripod head of the unmanned aerial vehicle, the relative positions of two captured track lines are changed compared with the vertical shooting under the camera coordinate system taking a camera lens as an object. The method comprises the steps of setting a starting point of a ground track line captured by a camera as P1, setting a final point as P2, respectively setting the projection of two points on an imaging plane at a focal distance as P1 and P2, wherein a shadow S1 below the camera in the figure is the projection of the camera on the ground, and a shadow S2 on the right side is a ground track line area captured by the projection of the imaging plane of the camera.
Step 2: and (3) carrying out data expansion on the original remote sensing image by combining with a data enhancement method, carrying out unified processing on the sizes of all the images, and calibrating a track area in the remote sensing image so as to meet the data set requirement of subsequent Largest Connected-ERFNet model training. In order to improve the robustness of the autonomous track line identification and real-time tracking method in complex and diverse scenes, the unmanned aerial vehicle can smoothly adapt to different scene conditions in actual flight inspection, and noise addition processing and scene coverage processing are performed on part of original remote sensing images.
Step 3: the remote sensing image obtained after preprocessing is used as a training set of a Largest Connected-ERFNet model, proper learning rate, iteration times, loss function and other relevant super parameters are set in a training program, the Largest Connected-ERFNet model and the training program are written in Python language, and deep learning training is completed by using a Pytorch frame and an Nvidia3090 display card.
Step 4: the unmanned aerial vehicle loading end environment is Ubuntu18.04, the program execution language is C++, and because the model is written and trained by adopting Python language, a libtorch dependency library provided by Pytorch is used for converting a deep learning model obtained by Pytorch training into a form that the C++ language can be called for operation, then the initialization configuration of an unmanned aerial vehicle control program is completed, a real-time video stream acquisition program of the unmanned aerial vehicle cloud deck load is deployed, and a Largest Connected-ERFNet model is embedded into a single-frame image track region division recognition program, so that the configuration deployment of the model in the unmanned aerial vehicle loading end environment is completed.
Step 5: firstly, semantic segmentation is carried out on a remote sensing image acquired by a visible light camera load by utilizing an ERFNet part of a Largest Connected-ERFNet model configured on an unmanned aerial vehicle carrying end, an initially divided binarized image is obtained, then a maximum connected domain calibration part extracts a track area with the maximum gauge from the divided track area, the area is considered to be the track area most conforming to the real situation, and the rest is an irrelevant noise area.
Step 6: firstly, traversing pixels at the edge of an image perpendicular to the track line trend to obtain a position where the pixel value is suddenly changed. And judging the appearance sequence of the positions of the pixel points, identifying the pixel points with the abrupt change of the pixel values at the first time and the last time as the end points of two track lines, and calculating the relative coordinates of the two points in the image.
Step 7: firstly, calculating relevant parameters of a track line in a remote sensing image by using the calculated track area coordinates, and then, using a holder posture and load parameter judging system designed for the remote sensing image acquired by unmanned aerial vehicle inspection to find out whether track area position and scale information acquired by the unmanned aerial vehicle at the current moment meet the track inspection task requirements or not, and adopting corresponding preset schemes to automatically adjust holder posture and load parameters of the unmanned aerial vehicle according to different conditions.
The start and end positions of the two track lines in the remote sensing image are taken as the first-stage judging feature. Taking a horizontally trending track line in the remote sensing image as an example, the average relative coordinate difference between the start and end points of the two track lines is calculated as:

$$L_1 = \frac{\left|y_1^{a} - y_1^{b}\right| + \left|y_2^{a} - y_2^{b}\right|}{2}$$

where $L_1$ is the average relative coordinate difference between the start and end points of the two track lines, $(x_1^{a}, y_1^{a})$ and $(x_2^{a}, y_2^{a})$ are the relative coordinates of the start and end points of one track line, and $(x_1^{b}, y_1^{b})$ and $(x_2^{b}, y_2^{b})$ are the relative coordinates of the start and end points of the other track line.
The center point position of the track area in the remote sensing image is taken as the second-stage judging feature. Taking a horizontally trending track line in the remote sensing image as an example, the relative coordinate $L_2$ of the center point of the track area is calculated as:

$$L_2 = \frac{y_1^{a} + y_2^{a} + y_1^{b} + y_2^{b}}{4}$$
taking the slope of a track line in the remote sensing image as a third-level judging characteristic, taking a track line in the horizontal trend in the remote sensing image as an example, and taking the slope L of the track line as an example 3 The calculation formula is as follows:
Figure BDA0004064759990000222
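As a worked illustration of these three features (the coordinate values are invented for the example): for end points $(0, 200)$ and $(W, 260)$ on one rail and $(0, 330)$ and $(W, 350)$ on the other, with $W = 1024$, we obtain $L_1 = (|200-330| + |260-350|)/2 = 110$ pixels, $L_2 = (200+260+330+350)/4 = 285$, and $L_3 = (260-200)/1024 \approx 0.059$; each value is then compared against its stage's threshold to decide whether zoom, pitch, or yaw needs adjustment.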
the partial experimental results of autonomous track line identification and real-time tracking in the method are shown in fig. 8, wherein the left side images in fig. 8 (a), 8 (b) and 8 (c) are real remote sensing images, and the right side images are track line images for tracking identification. The results of the evaluation and comparison of the operation indexes of the conventional track line identification tracking algorithm based on edge detection are shown in table 2.
TABLE 2 (operation index comparison between the proposed method and the edge-detection-based algorithm; the table appears only as an image and its values are not reproduced here)
Experimental results show that the Largest Connected-ERFNet model-based autonomous track line identification and real-time tracking method proposed in embodiment 3 outperforms the existing edge-detection-based track line identification tracking algorithm in identification precision and smoothness, and has stronger adaptability and accuracy with respect to the surrounding environment: for example, equipment around the track such as guardrails or the catenary does not significantly interfere with the autonomous track line identification result, and this environmental adaptability allows the method to be applied over a wide range of railway settings. From a practical standpoint, the method therefore meets actual inspection requirements; the feasibility and effectiveness of the Largest Connected-ERFNet model-based autonomous track line identification and real-time tracking algorithm are verified under the low computing power of the unmanned aerial vehicle onboard end, with no algorithm stalling observed.

The method acquires track fault information with high precision, timeliness and efficiency, can be applied in a variety of complex railway scenes, and ensures that the unmanned aerial vehicle completes the track inspection task for the railway track area efficiently and to a high standard. It is hardly affected by complex environmental scenes and can work even when the track line is partially occluded, which is difficult to achieve with target recognition techniques based on edge detection, whose range of feasible detection conditions is far more restricted; this increases the flexibility of applying the method across track inspection and maintenance work scenes. In addition, the method can operate while trains are running, with no influence on train operation, which can greatly release line capacity and is of particular significance for high-speed and high-density railways.
Example 4
Embodiment 4 provides a non-transitory computer readable storage medium for storing computer instructions that, when executed by a processor, implement the track line identification tracking method applied to unmanned aerial vehicle inspection as described above, the method comprising:
acquiring remote sensing image data of a track line;
processing the acquired remote sensing image data of the track line by using a pre-trained track recognition tracking model to obtain a real track area in the remote sensing image;
calculating coordinates of the real track area;
and judging and automatically adjusting the states of the unmanned aerial vehicle cradle head attitude and the camera load parameters according to the coordinates of the real track area, so as to realize real-time tracking of the track line.
Example 5
Embodiment 5 provides a computer program product comprising a computer program which, when run on one or more processors, implements the track line identification tracking method applied to unmanned aerial vehicle inspection as described above, the method comprising:
acquiring remote sensing image data of a track line;
processing the acquired remote sensing image data of the track line by using a pre-trained track recognition tracking model to obtain a real track area in the remote sensing image;
calculating coordinates of the real track area;
and judging and automatically adjusting the states of the unmanned aerial vehicle cradle head attitude and the camera load parameters according to the coordinates of the real track area, so as to realize real-time tracking of the track line.
Example 6
Embodiment 6 provides an electronic device including: a processor, a memory, and a computer program; wherein the processor is connected to the memory, and the computer program is stored in the memory, and when the electronic device is running, the processor executes the computer program stored in the memory, so that the electronic device executes the instructions for implementing the track line identification tracking method applied to unmanned aerial vehicle inspection as described above, and the method comprises:
acquiring remote sensing image data of a track line;
processing the acquired remote sensing image data of the track line by using a pre-trained track recognition tracking model to obtain a real track area in the remote sensing image;
calculating coordinates of the real track area;
and judging and automatically adjusting the states of the unmanned aerial vehicle cradle head attitude and the camera load parameters according to the coordinates of the real track area, so as to realize real-time tracking of the track line.
While the foregoing describes specific embodiments of the present invention in conjunction with the drawings, this description does not limit the scope of protection of the invention. Those skilled in the art should understand that various modifications and variations that can be made on the basis of the technical solution of the present invention without inventive effort still fall within the scope of protection of the invention.

Claims (10)

1. A track line identification tracking method applied to unmanned aerial vehicle inspection, characterized by comprising the following steps:
acquiring remote sensing image data of a track line;
processing the acquired track line remote sensing image data by utilizing a pre-trained track identification tracking model to obtain a real track area in the remote sensing image, and calculating coordinates of the real track area;
and judging and automatically adjusting the states of the unmanned aerial vehicle cradle head attitude and the camera load parameters according to the coordinates of the real track area, so as to realize real-time tracking of the track line.
2. The track line identification tracking method applied to unmanned aerial vehicle inspection according to claim 1, wherein the pre-trained track identification tracking model comprises: the system comprises a segmentation network and a connected domain calibration network, wherein the segmentation network is used for carrying out track region segmentation on remote sensing images acquired by the unmanned aerial vehicle, and the connected domain calibration network is used for extracting a real track region from a track region segmentation result.
3. The track line identification tracking method applied to unmanned aerial vehicle inspection according to claim 2, wherein using the connected domain calibration network to extract the real track area from the track region segmentation result comprises:
traversing the pixels of the binarized image obtained by segmentation, and finding the first pixel point whose pixel value changes, namely an edge pixel point of a connected domain; judging the pixel points having a neighborhood relation with the edge pixel point, and assigning the points with the same pixel value to the same category, namely the same connected domain; transferring the calibration subject to the adjacent pixel points of the same category and judging their neighborhood pixel points again, repeating until no pixel point remains to be calibrated, the calibrated points together forming one connected domain;
repeating the above steps until all pixels in the image have been processed, so as to obtain all connected domains contained in the image; counting the number of pixel points contained in each connected domain, retaining the connected domain with the largest pixel count and identifying it as the effective connected domain representing the real track area, and setting the pixel values of the points contained in the remaining interfering connected domains to 0 to remove them.
4. The track line identification tracking method applied to unmanned aerial vehicle inspection according to claim 3, wherein the condition judgment and automatic adjustment of the unmanned aerial vehicle cradle head attitude and camera load parameters comprises: calculating relevant parameters of the track line in the remote sensing image by using the calculated track area coordinates; and utilizing a cradle head attitude and load parameter judging method to determine whether the track area position and scale information acquired by the unmanned aerial vehicle at the current moment meets the track inspection task requirements, and if not, adjusting the cradle head attitude and load parameters of the unmanned aerial vehicle.
5. The track line identification tracking method applied to unmanned aerial vehicle inspection according to claim 4, wherein the cradle head attitude and load parameter judging method comprises:
if the average relative coordinate difference between the start and end points of the two track lines at the current moment is smaller than a threshold value, judging that the track area at the current moment occupies too small a proportion of the frame, and adjusting the zoom focal length of the unmanned aerial vehicle camera load;
calculating the relative coordinates of the center point of the current track area according to the relative coordinates of the two track lines in the track area, and comparing the coordinates of the center point of the track area with the coordinates of the center point of the whole image; if the absolute value of the difference between the two is larger than a threshold value, judging that the track area at the current moment deviates from the image center, and adjusting the pitch angle of the unmanned aerial vehicle cradle head according to the deviation direction of the track area;
in the unmanned aerial vehicle track inspection task, the track line captured by the cradle head load presents a transverse trend; calculating the difference between the start and end points of the track line in the Y direction and performing a trigonometric tangent operation with the image width; if the absolute value of the tangent value is larger than a threshold value, judging that the track area at the current moment is inclined, and adjusting the yaw angle of the unmanned aerial vehicle cradle head according to the inclination direction of the track area.
6. The track line identification tracking method applied to unmanned aerial vehicle inspection according to claim 5, wherein calculating coordinates of the real track area comprises: representing the track area by the endpoint coordinates of the two track lines in the remote sensing image; firstly traversing the pixels on the image edges perpendicular to the track line trend to obtain the positions where the pixel value changes abruptly; judging the order of appearance of these pixel positions, identifying the first and last pixel points where the pixel value changes as the end points of the two track lines, and calculating the relative coordinates of these points in the image, namely the coordinates of the real track area.
7. A track line identification tracking system applied to unmanned aerial vehicle inspection, characterized by comprising:
the acquisition module is used for acquiring the remote sensing image data of the track line;
the screening module is used for processing the acquired remote sensing image data of the track line by utilizing a pre-trained track recognition tracking model to obtain a real track area in the remote sensing image;
the calculation module is used for calculating the coordinates of the real track area;
and the adjusting module is used for judging and automatically adjusting the states of the unmanned aerial vehicle cradle head attitude and the camera load parameters according to the coordinates of the real track area, so as to realize real-time tracking of the track line.
8. A non-transitory computer readable storage medium storing computer instructions which, when executed by a processor, implement the track line identification tracking method applied to unmanned aerial vehicle inspection according to any one of claims 1-6.
9. A computer program product comprising a computer program which, when run on one or more processors, implements the track line identification tracking method applied to unmanned aerial vehicle inspection according to any one of claims 1-6.
10. An electronic device, comprising: a processor, a memory, and a computer program; wherein the processor is connected to the memory, and the computer program is stored in the memory, and when the electronic device is running, the processor executes the computer program stored in the memory, so that the electronic device executes the instructions for implementing the track line identification tracking method applied to unmanned aerial vehicle inspection according to any one of claims 1 to 6.
CN202310071009.0A 2023-02-07 2023-02-07 Track line identification tracking method and system applied to unmanned aerial vehicle inspection Pending CN116012728A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310071009.0A CN116012728A (en) 2023-02-07 2023-02-07 Track line identification tracking method and system applied to unmanned aerial vehicle inspection


Publications (1)

Publication Number Publication Date
CN116012728A true CN116012728A (en) 2023-04-25

Family

ID=86037200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310071009.0A Pending CN116012728A (en) 2023-02-07 2023-02-07 Track line identification tracking method and system applied to unmanned aerial vehicle inspection

Country Status (1)

Country Link
CN (1) CN116012728A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509290A (en) * 2011-10-25 2012-06-20 西安电子科技大学 Saliency-based synthetic aperture radar (SAR) image airfield runway edge detection method
CN104881861A (en) * 2015-03-11 2015-09-02 西南交通大学 High-speed rail contact net suspension device failure state detecting method based on primitive classification
CN106940784A (en) * 2016-12-26 2017-07-11 无锡高新兴智能交通技术有限公司 A kind of bus detection and recognition methods and system based on video
CN108919838A (en) * 2018-08-27 2018-11-30 河海大学常州校区 A kind of unmanned plane transmission line of electricity automatic tracking method based on binocular vision
CN109753081A (en) * 2018-12-14 2019-05-14 中国矿业大学 A kind of patrol unmanned machine system in tunnel based on machine vision and air navigation aid
CN109978913A (en) * 2019-03-01 2019-07-05 北京航空航天大学 A kind of unmanned machine head automatically tracks the method and system in railway region
CN112640422A (en) * 2020-04-24 2021-04-09 深圳市大疆创新科技有限公司 Photographing method, movable platform, control device, and storage medium
CN113115008A (en) * 2021-05-17 2021-07-13 哈尔滨商业大学 Pipe gallery master-slave operation inspection system and method based on rapid tracking registration augmented reality technology
CN114373138A (en) * 2021-12-28 2022-04-19 北京交通大学 Full-automatic unmanned aerial vehicle inspection method and system for high-speed railway
CN114035614A (en) * 2022-01-10 2022-02-11 成都奥伦达科技有限公司 Unmanned aerial vehicle autonomous inspection method and system based on prior information and storage medium
CN114332650A (en) * 2022-03-09 2022-04-12 苏州大学 Remote sensing image road identification method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG CHUNDI: "Machine Vision Based Unstructured Road Area Detection and Type Recognition", China Master's Theses Full-text Database, Engineering Science and Technology II, pages 1-64 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116922448A (en) * 2023-09-06 2023-10-24 湖南大学无锡智能控制研究院 Environment sensing method, device and system for high-speed railway body-in-white transfer robot
CN116922448B (en) * 2023-09-06 2024-01-02 湖南大学无锡智能控制研究院 Environment sensing method, device and system for high-speed railway body-in-white transfer robot
CN117789066A (en) * 2024-02-26 2024-03-29 中国铁路设计集团有限公司 Unmanned aerial vehicle multi-view image steel rail line automatic extraction method based on deep learning
CN117789066B (en) * 2024-02-26 2024-05-28 中国铁路设计集团有限公司 Unmanned aerial vehicle multi-view image steel rail line automatic extraction method based on deep learning

Similar Documents

Publication Publication Date Title
CN116012728A (en) Track line identification tracking method and system applied to unmanned aerial vehicle inspection
CN107253485B (en) Foreign matter invades detection method and foreign matter invades detection device
CN109753929B (en) High-speed rail insulator inspection image recognition method
CN106934803B (en) method and device for detecting surface defects of electronic device
EP2713310A2 (en) System and method for detection and tracking of moving objects
CN104951775B (en) Railway highway level crossing signal region security intelligent identification Method based on video technique
CN111401150B (en) Multi-lane line detection method based on example segmentation and self-adaptive transformation algorithm
CN111951212A (en) Method for identifying defects of contact network image of railway
CN112650298B (en) Unmanned aerial vehicle tracking landing method and system
CN105373135A (en) Method and system for guiding airplane docking and identifying airplane type based on machine vision
US20130094759A1 (en) Moving object detection device
CN113313703B (en) Unmanned aerial vehicle power transmission line inspection method based on deep learning image recognition
US20050190975A1 (en) Traffic event detection in compressed videos
CN106851229B (en) Security and protection intelligent decision method and system based on image recognition
CN116719339A (en) Unmanned aerial vehicle-based power line inspection control method and system
CN111079724B (en) Unmanned aerial vehicle-based sea floating garbage identification method
CN109544635B (en) Camera automatic calibration method based on enumeration heuristic
CN114038193B (en) Intelligent traffic flow data statistics method and system based on unmanned aerial vehicle and multi-target tracking
Liu et al. Extended faster R-CNN for long distance human detection: Finding pedestrians in UAV images
CN109993089A (en) A kind of video object removal and background recovery method based on deep learning
CN113066050A (en) Method for resolving course attitude of airdrop cargo bed based on vision
CN115841633A (en) Power tower and power line associated correction power tower and power line detection method
CN113313107A (en) Intelligent detection and identification method for multiple types of diseases on cable surface of cable-stayed bridge
CN115965934A (en) Parking space detection method and device
CN115409789A (en) Power transmission line engineering defect detection method based on image semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230425