CN111950456A - Intelligent FOD detection method and system based on unmanned aerial vehicle - Google Patents

Intelligent FOD detection method and system based on unmanned aerial vehicle Download PDF

Info

Publication number
CN111950456A
CN111950456A (application CN202010807397.0A)
Authority
CN
China
Prior art keywords
fod
image
unmanned aerial vehicle
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010807397.0A
Other languages
Chinese (zh)
Inventor
阳巍
王宏吉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Chengshe Aviation Technology Co ltd
Original Assignee
Chengdu Chengshe Aviation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Chengshe Aviation Technology Co ltd filed Critical Chengdu Chengshe Aviation Technology Co ltd
Priority to CN202010807397.0A priority Critical patent/CN111950456A/en
Publication of CN111950456A publication Critical patent/CN111950456A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G — does not apply; G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/257 Belief theory, e.g. Dempster-Shafer

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of airport runway safety detection, and discloses an intelligent FOD detection method based on an unmanned aerial vehicle, which comprises the following steps: 1) data acquisition and image processing (sharpening, filtering and the like); 2) performing feature fusion on the images using D-S evidence theory to remove background speckle noise, and performing energy accumulation on the images with an image-stitching technique to eliminate random noise, thereby screening out the FOD image; 3) comparing the screened image with the FOD models in a model library to determine the FOD type. The detection system comprises an unmanned aerial vehicle carrying a camera, an FOD model storage unit and an image processing system. The detection system is simple in structure, and the detection method can quickly, accurately and efficiently detect FOD on the runway regardless of its size, thereby ensuring flight safety.

Description

Intelligent FOD detection method and system based on unmanned aerial vehicle
Technical Field
The invention belongs to the technical field of airport runway safety detection, and particularly relates to an intelligent FOD detection method and system based on an unmanned aerial vehicle.
Background
Runway surveillance is very important for airport operations. Runways are constantly subject to damage, such as potholes, caused by wear from the aircraft and other vehicles using them. Debris or foreign objects may also appear on the runway, owing to jet blast, aircraft takeoff and landing, natural causes and the like. On an active runway involving aircraft movement, the presence of foreign objects, debris or damage (FOD) can lead to the loss of an aircraft and the resulting safety hazards.
Some airports use automated systems that employ radar to detect damage, debris and other hazards on airport runways and their nearby areas. In systems using radar, microwave signals are typically transmitted over the runway and reflected signals from any foreign objects are detected and analyzed. Since the microwave signal is pulsed or structured, the time taken for the signal to reach the receiver is measured, from which the distance to the foreign object is derived. By using radar sensors with smaller wavelengths and higher pulse-repetition frequencies, higher range resolution can be achieved, which in turn reduces background clutter; however, radar-based surveillance systems lack "intelligence" and cannot provide visual images of objects for verification and characterization by the operator.
Some airports utilize infrared or thermal imaging systems to detect objects, crack voids, etc. on the runway. However, systems employing infrared or thermal imaging systems can only sense infrared radiation (emitted from objects) that exceeds the thermal equilibrium of the environment, i.e., infrared or thermal imaging systems can only detect objects with sufficient thermal contrast (e.g., a piece of hot metal debris on a cool runway). Small objects with poor thermal contrast can present a significant challenge to infrared/thermal imaging systems. In addition, the performance of such systems is unpredictable in adverse weather (e.g., cold weather) conditions. In addition, infrared/thermal imaging systems also lack the resolution required to detect, characterize, and classify objects.
More recently, surveillance using one or more video cameras placed near the runway has been proposed. An operator visually monitors the video signal obtained from the cameras at the console of the airport control room. However, some small dangerous foreign objects (such as nuts and rivets) are difficult to capture clearly owing to limited camera resolution, so they are often overlooked, leading to accidents. How to solve the above technical problems has therefore become a focus of research by those skilled in the art.
Disclosure of Invention
The invention aims to provide an intelligent FOD detection method and system based on an unmanned aerial vehicle, which overcome the defects in the prior art.
The embodiment of the invention is realized by the following steps:
an intelligent FOD detection method based on an unmanned aerial vehicle comprises the following steps:
1) acquiring data and carrying out image processing on the acquired image data;
2) performing feature fusion on the image by using a D-S evidence theory, removing background speckle noise, performing energy accumulation on the image by matching with an image splicing technology, and eliminating random noise, thereby screening an FOD image;
3) comparing the screened FOD image with the FOD models in the model library to determine the FOD type.
Further, performing feature fusion on the image by using D-S evidence theory in step 2) comprises constructing a basic probability assignment and performing Dempster fusion.
Further, the construction of the basic probability distribution comprises the following steps:
1) learning the sample set with an SVM and constructing a probability model Mi according to formula (1), where formula (1) is:

Mi(x) = 1 / (1 + exp(A·f(x) + B))    (1)

In formula (1), f(x) is the standard output value of the SVM for sample x; A and B are obtained from the maximum-likelihood problem of formula (2), where formula (2) is:

min(A,B)  −Σi [ ti·ln(pi) + (1 − ti)·ln(1 − pi) ],  pi = 1 / (1 + exp(A·f(xi) + B))    (2)

In formula (2),

ti = (N+ + 1)/(N+ + 2) for positive samples and ti = 1/(N− + 2) for negative samples,

i = 1, 2, …, N (N+ and N− are the numbers of samples in the positive and negative classes, respectively);

2) after the probability model Mi is obtained, testing SVMi, with its learning samples as the test samples, to obtain the recognition accuracy ri;

3) constructing the basic probability assignment function:

mi(Aj) = ri·Mi(Aj),  j = 1, 2, …, K;  mi(ω) = 1 − ri

where K is the number of elements in the recognition frame ω.
Further, Dempster fusion comprises the following steps:
1) fusing the gray feature and the edge feature to determine the basic probability distribution of the target object;
2) fusing the result of the gray-edge fusion with the brightness feature to obtain the final fusion result.
Further, the image stitching technology in the step 2) includes image registration and image fusion.
Further, the image registration comprises the steps of:
the method comprises the following steps of I, extracting common characteristics of an original image and an acquired image of a runway;
II, matching the characteristic structures of the two images to be registered based on the similarity measurement;
III, obtaining related parameters based on the characteristic structure matching in the step II;
and IV, carrying out coordinate transformation according to the parameters in the III to finish matching.
Further, the intelligent FOD detection method based on the unmanned aerial vehicle further comprises the following steps:
1-1) arranging a plurality of waypoints along the airport runway in advance;
2-1) calling the original image of the airport runway, fusing the edge and gray-scale features of the runway, and removing the interference of the runway marker lines;
3-1) manually flying the unmanned aerial vehicle to the camera calibration plate, where the camera calibration processing module automatically completes calibration of the camera's internal parameters;
4-1) manually flying the unmanned aerial vehicle to the first of the waypoints and then switching to autonomous flight mode, in which the unmanned aerial vehicle flies along the preset waypoints, acquires images of the airport runway through the camera and transmits them back to the data processing system in real time.
Further, the method for fusing the edge features and the gray-scale features of the airport runway in the step 2-1) comprises the following steps:
2.1) dividing the original runway image into a plurality of areas by using a Hough transformation method;
2.2) in each area range, calculating the average gray value of the pixels in the area;
2.3) comparing the average gray values obtained in each region, marking the maximum value as Mean1, and setting the region range where Mean1 is located as white;
2.4) comparing the average gray values of other regions except the Mean1, and marking the maximum value as Mean 2;
2.5) setting a threshold value T and comparing it with the difference between Mean1 and Mean2; if the difference is within the threshold T, setting that region to white and returning to step 2.4); if not, continuing to step 2.6);
2.6) set the remaining non-marked areas to black.
A detection system for realizing the intelligent FOD detection method based on the unmanned aerial vehicle, characterized by comprising:
an unmanned aerial vehicle carrying one or more cameras for capturing images of a runway;
an FOD model storage unit for storing different FOD image models;
the data processing system is connected with the camera through a wireless network to realize information interaction between the camera and the data processing system, and comprises a camera calibration processing module, an image processing module, an FOD detection module and an FOD automatic identification module;
the camera calibration processing module is used for calibrating the internal parameters of the camera and comprises a camera calibration plate;
the image processing module is used for processing the image collected by the camera;
the FOD detection module is used for fusing image characteristics based on a D-S evidence theory and realizing FOD detection based on an image splicing technology;
the FOD automatic identification module is used for comparing the detected FOD with the FOD model stored in the FOD model storage unit so as to detect the FOD type.
Furthermore, the data processing system also comprises a data management module, wherein the data management module is used for realizing query, storage, deletion and log recording of the whole original data, the processing process data and the processing result.
The invention has the beneficial effects that:
the detection system disclosed by the invention is simple in structure, the unmanned aerial vehicle carries a high-definition camera to acquire image data, the image is subjected to characteristic fusion through a D-S evidence theory, background speckle noise is removed, energy accumulation is carried out on the image by matching with an image splicing technology, random noise is eliminated, the FOD (form of the shot) on the runway regardless of the size can be detected quickly, accurately and efficiently, and the flight safety is ensured.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic structural diagram of a detection system provided by the present invention.
Icon: the system comprises an unmanned aerial vehicle 1, a camera 2, a camera calibration processing module 3, an image processing module 4, an FOD detection module 5, an FOD automatic identification module 6, a data management module 7 and an FOD model storage unit 8.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings or the orientations or positional relationships that the products of the present invention are conventionally placed in use, and are only used for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the devices or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
Referring to fig. 1, the present embodiment provides a detection system for implementing an intelligent FOD detection method based on an unmanned aerial vehicle, including an unmanned aerial vehicle 1, a FOD model storage unit 8, and a data processing system.
One or more cameras 2 for acquiring runway images are mounted on the unmanned aerial vehicle 1. The unmanned aerial vehicle 1 may be a Matrice 600 Pro six-rotor unmanned aerial vehicle manufactured by DJI, and the camera 2 may be a Sony UMC-S3CA digital camera.
The FOD model storage unit 8 and the data processing system are both arranged in a monitoring vehicle, which is further provided with two DELL servers, a UPS power supply and two 4K liquid-crystal displays; the monitoring vehicle is used to present the images captured by the unmanned aerial vehicle in real time and to process and identify the FOD in those images. The camera 2 collects images and transmits the data over a wireless local area network to the data processing system, which comprises a camera calibration processing module 3, an image processing module 4, an FOD detection module 5, an FOD automatic identification module 6 and a data management module 7. The camera calibration processing module 3 is used for calibrating the internal parameters of the camera and comprises a camera calibration plate, which forms an included angle of 120 degrees with the roof of the monitoring vehicle; the image processing module 4 is used for processing the images collected by the camera; the FOD detection module 5 is used for fusing image features based on D-S evidence theory and realizing FOD detection based on an image-stitching technique; the FOD automatic identification module 6 is used for comparing the detected FOD with the FOD models stored in the FOD model storage unit to determine the FOD type; and the data management module 7 is used for realizing query, storage, deletion and log recording of all the original data, processing-process data and processing results.
Meanwhile, the detection system is also provided with a D-RTK GNSS positioning system matched with the Matrice 600 Pro six-rotor unmanned aerial vehicle, so that the position of the unmanned aerial vehicle can be determined with high precision and the FOD position can be located accurately.
The embodiment also provides an intelligent FOD detection method based on the unmanned aerial vehicle, which comprises the following steps of early preparation, camera internal parameter calibration, image acquisition, image processing, FOD detection and FOD identification in sequence.
The preliminary preparation comprises planning the flight route and removing the airport runway marker lines; these two tasks may be performed in either order.
The specific contents and operation method of the route planning are as follows: taking an airport runway 2500 m long and 60 m wide as an example, with a ground resolution of 0.5 cm (each pixel corresponds to 0.5 cm of runway length), each frame from the selected Sony camera covers an area 19 m long and 10 m wide, so the whole runway needs at least 790 pictures to be covered completely. During operation the unmanned aerial vehicle flies along the runway at 36 km/h (10 m/s), so one pass along the runway takes about 250 seconds. A waypoint is therefore placed every 19 m along the airport runway, giving at least 132 waypoints. The flying height of the unmanned aerial vehicle is set between 10 m and 31 m.
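The coverage arithmetic above can be checked in a few lines. The runway dimensions, frame footprint and flight speed are the figures given in the text; the count of parallel strips across the runway width is an inferred assumption used to reconcile the per-frame footprint with the total picture count:

```python
import math

# Figures taken from the route-planning example in the text.
runway_len, runway_wid = 2500, 60     # runway dimensions, metres
frame_len, frame_wid = 19, 10         # ground footprint of one frame, metres

waypoints = math.ceil(runway_len / frame_len)   # one waypoint every 19 m
strips = math.ceil(runway_wid / frame_wid)      # parallel passes across the width (assumption)
frames = waypoints * strips                     # total pictures for full coverage

speed_ms = 36 / 3.6                             # 36 km/h expressed in m/s
pass_time_s = runway_len / speed_ms             # duration of one pass along the runway
```

This reproduces the "at least 132 waypoints" figure and yields 792 frames, consistent with the "at least 790 pictures" estimate.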
Because the gray-scale, edge, texture and shape features of the marker lines affect FOD detection, the noise introduced by the marker lines must be removed during detection. The method is to fuse the edge and gray-scale features of the runway marker lines and segment the lines, with the following specific steps: (1) calling the original runway image and dividing it into a plurality of regions using a Hough-transform method; (2) within each region, calculating the average gray value of the pixels in the region; (3) comparing the average gray values of the regions, marking the maximum as Mean1, and setting the region of Mean1 to white; (4) comparing the average gray values of the remaining regions and marking the maximum as Mean2; (5) setting a threshold T (which depends on the camera resolution of the captured image) and comparing it with the difference between Mean1 and Mean2: if the difference is within the threshold T, setting that region to white and returning to step (4); otherwise continuing to step (6); (6) setting the remaining unmarked regions to black. The marked white areas are the runway marker lines, since the marker lines have the higher gray level; the lines are thus screened out of the image.
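A minimal sketch of the marker-line segmentation steps above. The Hough-based region division is assumed to have been done already, so the function takes a list of region masks as input; the function name and default threshold are illustrative choices, not from the patent:

```python
import numpy as np

def segment_marker_lines(gray, regions, T=30):
    """Simplified sketch of steps (1)-(6): given a gray-level image and a
    list of boolean pixel masks ('regions', e.g. produced by a Hough-based
    split of the runway image), set to white every region whose mean gray
    value is within T of the brightest region's mean, and set the rest to
    black.  The white regions approximate the runway marker lines, which
    are brighter than the surrounding pavement."""
    means = [gray[m].mean() for m in regions]
    order = np.argsort(means)[::-1]          # regions sorted brightest first
    mean1 = means[order[0]]
    out = np.zeros_like(gray)
    out[regions[order[0]]] = 255             # region of Mean1 -> white
    for idx in order[1:]:
        if mean1 - means[idx] <= T:          # within threshold of Mean1
            out[regions[idx]] = 255
        else:
            break                            # all dimmer regions stay black
    return out
```

With three regions of mean gray 200, 190 and 50 and T = 30, the first two regions are marked white (the marker lines) and the dark region stays black.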
II) The specific content and operation method of camera internal-parameter calibration are as follows: after the preliminary preparation is completed, the unmanned aerial vehicle is first manually flown to the camera calibration plate on top of the monitoring vehicle; the camera collects image data of the calibration plate and transmits it to the data processing system, and the camera calibration processing module 3 automatically sets the internal parameters of the camera to complete the calibration. The calibration process is prior art in the industry and is not described in detail.
III) The specific content and operation method of image acquisition are as follows: after calibration of the camera's internal parameters is completed, the unmanned aerial vehicle is manually flown to the first waypoint and then switched to autonomous flight mode, so that it flies along the set waypoints, collects airport runway images, and transmits them to the data processing system in real time over the wireless local area network.
IV) image processing: after the data processing system receives the picture, the image processing module 4 performs preprocessing, sharpening, enhancing and filtering on the picture.
V) FOD detection: after the images are processed, the data processing system performs feature fusion on the images by using a D-S evidence theory through the FOD detection module 5, removes background speckle noise, performs energy accumulation on the images by matching with an image splicing technology, and eliminates random noise, thereby screening the FOD images.
The image feature fusion comprises constructing a basic probability assignment and performing Dempster fusion, wherein constructing the basic probability assignment comprises the following steps:
1) learning the sample set with an SVM and constructing a probability model Mi according to formula (1), where formula (1) is:

Mi(x) = 1 / (1 + exp(A·f(x) + B))    (1)

In formula (1), f(x) is the standard output value of the SVM for sample x; A and B are obtained from the maximum-likelihood problem of formula (2), where formula (2) is:

min(A,B)  −Σi [ ti·ln(pi) + (1 − ti)·ln(1 − pi) ],  pi = 1 / (1 + exp(A·f(xi) + B))    (2)

In formula (2),

ti = (N+ + 1)/(N+ + 2) for positive samples and ti = 1/(N− + 2) for negative samples,

i = 1, 2, …, N (N+ and N− are the numbers of samples in the positive and negative classes, respectively);

2) after the probability model Mi is obtained, testing SVMi, with its learning samples as the test samples, to obtain the recognition accuracy ri;

3) constructing the basic probability assignment function:

mi(Aj) = ri·Mi(Aj),  j = 1, 2, …, K;  mi(ω) = 1 − ri

where K is the number of elements in the recognition frame ω.
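The construction above can be sketched as follows. The sigmoid form of Mi follows Platt's method implied by formulas (1)-(2); the rule that the unassigned mass 1 − ri goes to the whole frame ω is the common discounting convention and is an assumption here, as are all parameter values:

```python
import math

def platt_probability(f, A, B):
    """Formula (1): map an SVM decision value f(x) to a probability
    through the sigmoid with parameters A, B fitted by maximum likelihood."""
    return 1.0 / (1.0 + math.exp(A * f + B))

def basic_probability_assignment(class_probs, r):
    """Build a basic probability assignment from per-class probabilities
    Mi(Aj) and the classifier's recognition accuracy ri:
    m(Aj) = r * Mi(Aj) (normalised over the K classes), and the
    remaining mass 1 - r is assigned to the whole frame omega
    (total ignorance)."""
    total = sum(class_probs.values())
    m = {cls: r * p / total for cls, p in class_probs.items()}
    m["omega"] = 1.0 - r
    return m
```

For example, a classifier with accuracy r = 0.9 that outputs probabilities {A: 0.7, B: 0.3} yields m(A) = 0.63, m(B) = 0.27 and m(ω) = 0.10, so the masses always sum to one.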
During the detection process, the foreign-object target is represented by A, the noise by B, and the undetermined case by A,B. The recognition frame therefore comprises the three power-set elements {A}, {B} and {A,B}, and the calculated basic probability assignments of the gray, edge and brightness features are shown in Table 1-1 below.
TABLE 1-1 Basic probability assignment

| Feature    | A (foreign object) | B (noise)      | A,B (uncertain)  |
|------------|--------------------|----------------|------------------|
| Gray scale | m1(A) = 0.63       | m1(B) = 0.24   | m1(A,B) = 0.13   |
| Edge       | m2(A) = 0.55       | m2(B) = 0.27   | m2(A,B) = 0.18   |
| Brightness | m3(A) = 0.74       | m3(B) = 0.16   | m3(A,B) = 0.10   |
The specific steps of Dempster fusion are:

1) fusing the gray feature and the edge feature to determine the basic probability assignment of the target object, where the targets are A (foreign object), B (noise) and A,B (uncertain):

K* = m1(A)m2(B) + m1(B)m2(A) = 0.63 × 0.27 + 0.24 × 0.55 = 0.3021

m′(A) = [m1(A)m2(A) + m1(A)m2(A,B) + m1(A,B)m2(A)] / (1 − K*) = 0.5314 / 0.6979 ≈ 0.7614

m′(B) = [m1(B)m2(B) + m1(B)m2(A,B) + m1(A,B)m2(B)] / (1 − K*) = 0.1431 / 0.6979 ≈ 0.2050

m′(A,B) = m1(A,B)m2(A,B) / (1 − K*) = 0.0234 / 0.6979 ≈ 0.0336

2) fusing this result with the brightness feature to obtain the final fusion result:

κ = m′(A)m3(B) + m′(B)m3(A) = 0.7614 × 0.16 + 0.2050 × 0.74 = 0.2735

m(A) = [m′(A)m3(A) + m′(A)m3(A,B) + m′(A,B)m3(A)] / (1 − κ) ≈ 0.9146

m(B) = [m′(B)m3(B) + m′(B)m3(A,B) + m′(A,B)m3(B)] / (1 − κ) ≈ 0.0808

m(A,B) = m′(A,B)m3(A,B) / (1 − κ) ≈ 0.0046
The final results after fusion are shown in Table 1-2.

TABLE 1-2 Final results after fusion

| Feature fusion           | A (foreign object) | B (noise)      | A,B (uncertain)  |
|--------------------------|--------------------|----------------|------------------|
| Gray + edge              | m′(A) = 0.7614     | m′(B) = 0.2050 | m′(A,B) = 0.0336 |
| Gray + edge + brightness | m(A) = 0.9146      | m(B) = 0.0808  | m(A,B) = 0.0046  |
As can be seen from Table 1-2, after the feature fusion the background speckle noise is obviously reduced, which is more favorable for accurately detecting FOD.
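The two-stage fusion above can be reproduced with a generic implementation of Dempster's rule of combination over the frame {A, B} (focal sets {A}, {B} and {A,B}); the input masses are the Table 1-1 assignments given in the text:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: the combined mass of a focal set C is the sum of
    m1(X)*m2(Y) over all pairs with X ∩ Y = C, normalised by 1 - K,
    where K is the total mass of conflicting (disjoint) pairs."""
    combined, conflict = {}, 0.0
    for (X, mx), (Y, my) in product(m1.items(), m2.items()):
        inter = X & Y
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mx * my
        else:
            conflict += mx * my
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

A, B = frozenset("A"), frozenset("B")
AB = A | B  # the uncertain focal set {A, B}
m_gray = {A: 0.63, B: 0.24, AB: 0.13}
m_edge = {A: 0.55, B: 0.27, AB: 0.18}
m_bright = {A: 0.74, B: 0.16, AB: 0.10}

m12 = dempster_combine(m_gray, m_edge)       # gray + edge
m_final = dempster_combine(m12, m_bright)    # + brightness
```

The combined masses match Table 1-2: gray + edge gives m′(A) ≈ 0.7614, and adding brightness raises m(A) to ≈ 0.9146 while the uncertain mass m(A,B) drops to ≈ 0.0046.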
Further, since some smaller FODs occupy few pixels in an image and are easily submerged in large amounts of noise, the method uses image-stitching techniques to strengthen the target signal by averaging multiple frames, while random noise, which accumulates slowly or remains relatively weak, is suppressed. In the specific implementation, the image-stitching technique comprises image registration and image fusion.
Image registration can be understood as the spatial and gray-scale mapping between two images. After processing, let I1(x, y) and I2(x, y) denote the gray values of a pixel (x, y) in the two images; the mapping relationship is I1(x, y) = g(f(I2(x, y))), where f is a coordinate transformation and g is a gray-scale transformation. In general, only the spatial geometric transformation between the two matched images needs to be obtained, so the relationship simplifies to I1(x, y) = f(I2(x, y)). On this basis, the basic process of image registration comprises the following steps: I. extracting common features of the original runway image and the acquired image; II. matching the feature structures of the two images to be registered based on a similarity measure; III. obtaining the related transformation parameters from the feature matching in step II; IV. performing coordinate transformation according to the parameters in step III to complete registration.
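The patent does not specify the feature type or similarity measure, so the sketch below estimates a pure translation between two frames by phase correlation, one standard way to realize steps I-IV when the transformation f reduces to a shift (an assumption for illustration):

```python
import numpy as np

def register_translation(img1, img2):
    """Estimate the integer (dy, dx) shift that maps img2 onto img1 via
    phase correlation: the normalised cross-power spectrum of the two
    images has an inverse FFT that peaks at the relative displacement."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    h, w = img1.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Shifting a random test image by (3, 5) pixels and registering it against the original recovers exactly that displacement, which can then be used for the coordinate transformation of step IV.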
Image fusion means superimposing the images an appropriate number of times. Assume the useful signal is SP(n) and the interference (noise) signal is Snoise(n); the mixed signal Smix(n) is then:

Smix(n) = SP(n) + Snoise(n), where n = 0, 1, 2, …

For the interference signal Snoise(n), the expected value is ideally zero, i.e.:

E[Snoise(n)] = 0

After k superpositions and averaging, the signal Saverage(n) is:

Saverage(n) = SP(n) + (1/k) Σi=1..k Snoise,i(n) …… (2.18)

When k → ∞, the noise term of formula (2.18) vanishes, that is:

(1/k) Σi=1..k Snoise,i(n) → 0

thus:

Saverage(n) = SP(n).

In practice, however, the expected value of the noise is not zero, so let

E[Snoise(n)] = C(n).

The image signal after superposition averaging is then:

Saverage(n) = SP(n) + C(n) …… (2.19)

Formula (2.19) shows that an unwanted signal C(n) still remains in the averaged signal: if C(n) is a constant, it contributes a direct-current component; if it is a function of n, the signal contains other frequency components. In practical application, however, C(n) is small enough not to affect the analysis and judgment of Saverage(n).
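The suppression of zero-mean noise by superposition averaging, as in formula (2.18), can be illustrated numerically; the signal shape, noise level and frame count below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
signal = np.sin(np.linspace(0, 2 * np.pi, 256))   # S_P(n), the useful signal

# k noisy frames of the same scene: S_mix(n) = S_P(n) + S_noise(n)
k = 100
frames = signal + rng.normal(0.0, 0.5, size=(k, signal.size))

averaged = frames.mean(axis=0)                    # S_average(n)

err_single = np.std(frames[0] - signal)           # noise level in one frame
err_avg = np.std(averaged - signal)               # noise level after averaging
```

Averaging k independent frames reduces the noise standard deviation by roughly a factor of √k, so with k = 100 the residual noise is about a tenth of the single-frame level.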
VI) FOD identification: after FOD detection is finished, the data processing system controls the FOD automatic identification module 6 to compare the screened FOD image with the FOD models stored in the FOD model storage unit and determine the FOD type. The final determination by comparison belongs to prior-art means in the field and is not described in detail.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An intelligent FOD detection method based on an unmanned aerial vehicle is characterized by comprising the following steps:
1) acquiring data and carrying out image processing on the acquired image data;
2) performing feature fusion on the image by using a D-S evidence theory, removing background speckle noise, performing energy accumulation on the image by matching with an image splicing technology, and eliminating random noise, thereby screening an FOD image;
3) comparing the screened FOD image with the FOD models in the model library to determine the FOD type.
2. The intelligent FOD detection method based on an unmanned aerial vehicle of claim 1, wherein: performing feature fusion on the image by using D-S evidence theory in step 2) comprises constructing a basic probability assignment and performing Dempster fusion.
3. The intelligent FOD detection method based on unmanned aerial vehicle of claim 2, wherein: the construction of the basic probability distribution comprises the following steps:
1) learning the sample set by using an SVM, and constructing a probability model Mi by using formula (1), where formula (1) is:
Mi(X) = 1/(1 + exp(A·f(X) + B))          (1)
in formula (1), f(X) is the standard output value of the SVM for sample X, and A, B are obtained by solving the maximum likelihood problem of formula (2), where formula (2) is:
min F(A, B) = −Σi [ ti·ln(pi) + (1 − ti)·ln(1 − pi) ],  pi = 1/(1 + exp(A·f(xi) + B))          (2)
in formula (2),
ti = (N+ + 1)/(N+ + 2) when sample i is positive, and ti = 1/(N− + 2) when sample i is negative,
i = 1, 2, …, N+ + N− (N+ and N− are the numbers of samples in the positive and negative classes, respectively).
2) after the probability model Mi is obtained, testing the SVMi with its own learning samples as the test samples to obtain the identification accuracy rate ri;
3) constructing a basic probability distribution function:
mi(Aj) = ri·Mi(Aj) for j = 1, 2, …, K;  mi(ω) = 1 − ri          (3)
where j = 1, 2, …, K, and K is the number of elements in the recognition frame ω.
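The sigmoid calibration in claim 3 is the standard Platt-style probability output for an SVM, and it can be sketched as follows. This is an illustrative implementation, not the patent's code: the function names `fit_sigmoid` and `bpa_from_probs`, the learning rate, the iteration count, and the exact BPA construction (scaling class probabilities by the accuracy ri and assigning 1 − ri to the whole recognition frame) are assumptions for illustration:

```python
import numpy as np

# Fit A, B of Mi(X) = 1 / (1 + exp(A*f(X) + B)) by gradient descent on the
# negative log-likelihood with the smoothed targets ti from formula (2).
def fit_sigmoid(f, labels, lr=0.05, iters=4000):
    n_pos = np.sum(labels == 1)
    n_neg = np.sum(labels == -1)
    t = np.where(labels == 1, (n_pos + 1) / (n_pos + 2), 1.0 / (n_neg + 2))
    A = B = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(A * f + B))
        grad = p - t                      # derivative w.r.t. the logit -(A*f + B)
        A -= lr * np.mean(grad * -f)      # chain rule: d(-(A*f + B))/dA = -f
        B -= lr * np.mean(grad * -1.0)
    return A, B

def bpa_from_probs(probs, r):
    """One common BPA construction: mass r*Mij on each singleton class,
    and the remaining 1 - r on the whole recognition frame (uncertainty)."""
    probs = np.asarray(probs, float)
    return np.append(r * probs / probs.sum(), 1.0 - r)

# Toy separable decision values: positives at f = +2, negatives at f = -2.
f = np.concatenate([np.full(50, 2.0), np.full(50, -2.0)])
y = np.concatenate([np.ones(50), -np.ones(50)])
A, B = fit_sigmoid(f, y)
p_pos = 1.0 / (1.0 + np.exp(A * 2.0 + B))     # calibrated P(positive | f = 2)
assert p_pos > 0.8

m = bpa_from_probs([p_pos, 1.0 - p_pos], r=0.9)
assert abs(m.sum() - 1.0) < 1e-9              # a valid mass assignment
```

Since formula (3) in the original filing is an image, the BPA form above should be read as one common construction from the SVM-plus-evidence-theory literature rather than the patent's exact formula.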
4. The intelligent FOD detection method based on unmanned aerial vehicle of claim 2, wherein: dempster fusion comprises the following steps:
1) fusing the gray feature and the edge feature to determine the basic probability distribution of the target object;
2) fusing the result of step 1) with the brightness feature to obtain the final fusion result.
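The two fusion steps above both apply Dempster's rule of combination. A minimal sketch of that rule, for BPAs represented as a vector of K singleton-class masses plus a final "whole frame" (uncertainty) slot, is below; this is generic D-S machinery under that assumed representation, not code from the patent:

```python
import numpy as np

def dempster_combine(m1, m2):
    """Combine two basic probability assignments with Dempster's rule.
    Layout assumption: m[j] for j < K are singleton classes, m[K] = m(Omega)."""
    m1 = np.asarray(m1, float)
    m2 = np.asarray(m2, float)
    K = m1.size - 1
    fused = np.zeros(K + 1)
    for j in range(K):
        # product pairs agreeing on class j: (j, j), (j, Omega), (Omega, j)
        fused[j] = m1[j] * m2[j] + m1[j] * m2[K] + m1[K] * m2[j]
    fused[K] = m1[K] * m2[K]              # both sources fully uncertain
    return fused / fused.sum()            # renormalise away conflicting mass

# e.g. a gray-feature BPA combined with an edge-feature BPA (step 1);
# the fused result would then be combined with the brightness BPA (step 2).
fused = dempster_combine([0.6, 0.2, 0.2], [0.5, 0.3, 0.2])
assert abs(fused.sum() - 1.0) < 1e-9
```

The division by `fused.sum()` is the usual 1/(1 − k) conflict normalisation, since the non-conflicting mass sums to 1 − k.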
5. The intelligent FOD detection method based on unmanned aerial vehicle of claim 1, wherein: the image splicing technology in the step 2) comprises image registration and image fusion.
6. The intelligent FOD detection method based on unmanned aerial vehicle of claim 5, wherein: the image registration comprises the steps of:
I. extracting the common features of the original runway image and the acquired image;
II. matching the feature structures of the two images to be registered based on a similarity measure;
III. obtaining the transformation parameters from the feature-structure matching in step II;
IV. carrying out coordinate transformation according to the parameters from step III to complete the registration.
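Steps I to IV can be sketched for the simplest case, a pure translation between the two runway images, using FFT phase correlation as the similarity measure. This is a hedged toy illustration: the claim's feature-structure matching and general coordinate transform are not reproduced, and `register_translation` is a hypothetical helper name:

```python
import numpy as np

def register_translation(ref, moved):
    """Estimate the integer (dy, dx) that maps `ref` onto `moved` via
    phase correlation: the normalised cross-power spectrum peaks at the shift."""
    f = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap circular offsets into the signed range [-N/2, N/2)
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
ref = rng.random((64, 64))                          # stand-in "original image"
moved = np.roll(ref, shift=(5, -3), axis=(0, 1))    # acquired image, known shift
dy, dx = register_translation(ref, moved)           # steps II-III: parameters
aligned = np.roll(moved, shift=(-dy, -dx), axis=(0, 1))  # step IV: transform
assert (dy, dx) == (5, -3)
assert np.allclose(aligned, ref)
```

In practice runway registration would involve rotation and scale as well, so the recovered parameters would feed a more general coordinate transform than this circular shift.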
7. The intelligent FOD detection method based on unmanned aerial vehicle of claim 1, further comprising the steps of:
1-1) arranging a plurality of route points along an airport runway in advance;
2-1) retrieving the original image of the airport runway, fusing the edge features and gray features of the airport runway, and removing the interference of the runway marker lines;
3-1), manually operating an unmanned aerial vehicle to fly to a camera calibration plate, and automatically completing calibration of internal parameters of the camera by a camera calibration processing module;
4-1) manually operating the unmanned aerial vehicle to fly to the first of the waypoints and then switching to an autonomous flight mode, wherein the unmanned aerial vehicle flies along the preset waypoints, acquires images of the airport runway through the camera, and transmits them back to the data processing system in real time.
8. The intelligent FOD detection method based on unmanned aerial vehicle of claim 7, wherein: the method for fusing the edge features and the gray features of the airport runway in the step 2-1) comprises the following steps:
2.1) dividing the original runway image into a plurality of areas by using a Hough transformation method;
2.2) in each area range, calculating the average gray value of the pixels in the area;
2.3) comparing the average gray values obtained in each region, marking the maximum value as Mean1, and setting the region range where Mean1 is located as white;
2.4) comparing the average gray values of other regions except the Mean1, and marking the maximum value as Mean 2;
2.5) setting a threshold value T and comparing it with the difference between Mean1 and Mean2; if the difference is within the threshold T, setting the region where Mean2 is located to white and returning to step 2.4); otherwise continuing to step 2.6);
2.6) set the remaining non-marked areas to black.
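Steps 2.2) to 2.6) above amount to an iterative brightest-region-first labelling. A small sketch, under the assumption that step 2.1) has already segmented the image and each region is represented only by its id and average gray value (the function name `classify_regions` and the sample values are hypothetical):

```python
def classify_regions(region_means, T):
    """Label regions white (255) or black (0) by the Mean1/Mean2 rule:
    regions whose average gray value is within T of the brightest region
    (Mean1) are runway surface; the rest (markings, shadows) become black."""
    means = dict(region_means)                     # region id -> average gray
    ranked = sorted(means, key=means.get, reverse=True)
    mean1 = means[ranked[0]]
    white = {ranked[0]}                            # step 2.3: brightest region
    for rid in ranked[1:]:                         # steps 2.4-2.5: next maxima
        if mean1 - means[rid] <= T:
            white.add(rid)                         # within T of Mean1: white
        else:
            break                                  # step 2.6: rest stay black
    return {rid: (255 if rid in white else 0) for rid in means}

# Two bright surface regions, one marking line, one dark shadow.
labels = classify_regions({0: 200, 1: 195, 2: 120, 3: 40}, T=10)
assert labels == {0: 255, 1: 255, 2: 0, 3: 0}
```

The early `break` mirrors the claim's control flow: once a region falls outside the threshold of Mean1, all remaining (darker) regions are set to black without further comparison.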
9. A detection system for implementing the intelligent FOD detection method based on unmanned aerial vehicle of any one of claims 1 to 8, comprising:
an unmanned aerial vehicle carrying one or more cameras for capturing images of a runway;
an FOD model storage unit for storing different FOD image models;
the data processing system is connected with the camera through a wireless network to realize information interaction between the data processing system and the camera, and comprises a camera calibration processing module, an image processing module, an FOD detection module and an FOD automatic identification module;
the camera calibration processing module is used for calibrating the internal parameters of the camera and comprises a camera calibration plate;
the image processing module is used for processing the image collected by the camera;
the FOD detection module is used for fusing image characteristics based on a D-S evidence theory and realizing FOD detection based on an image splicing technology;
the FOD automatic identification module is used for comparing the detected FOD with the FOD models stored in the FOD model storage unit so as to determine the FOD type.
10. The detection system for realizing the intelligent FOD detection method based on the unmanned aerial vehicle according to claim 9, wherein: the data processing system also comprises a data management module, wherein the data management module is used for realizing query, storage, deletion and log recording of the whole original data, the processing process data and the processing result.
CN202010807397.0A 2020-08-12 2020-08-12 Intelligent FOD detection method and system based on unmanned aerial vehicle Pending CN111950456A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010807397.0A CN111950456A (en) 2020-08-12 2020-08-12 Intelligent FOD detection method and system based on unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
CN111950456A true CN111950456A (en) 2020-11-17

Family

ID=73332355

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010807397.0A Pending CN111950456A (en) 2020-08-12 2020-08-12 Intelligent FOD detection method and system based on unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN111950456A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686172A (en) * 2020-12-31 2021-04-20 上海微波技术研究所(中国电子科技集团公司第五十研究所) Method and device for detecting foreign matters on airport runway and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101315698A (en) * 2008-06-25 2008-12-03 中国人民解放军国防科学技术大学 Characteristic matching method based on straight line characteristic image registration
CN102928877A (en) * 2012-11-14 2013-02-13 中国石油集团川庆钻探工程有限公司地球物理勘探公司 Seismic property combination method based on Dempster/Shafe (D-S) evidence theory
CN103733234A (en) * 2011-02-21 2014-04-16 斯特拉特克***有限公司 A surveillance system and a method for detecting a foreign object, debris, or damage in an airfield
CN105930791A (en) * 2016-04-19 2016-09-07 重庆邮电大学 Road traffic sign identification method with multiple-camera integration based on DS evidence theory
CN107341455A (en) * 2017-06-21 2017-11-10 北京航空航天大学 A kind of detection method and detection means to the region multiple features of exotic on night airfield runway road surface
CN110866887A (en) * 2019-11-04 2020-03-06 深圳市唯特视科技有限公司 Target situation fusion sensing method and system based on multiple sensors
CN111060076A (en) * 2019-12-12 2020-04-24 南京航空航天大学 Method for planning routing of unmanned aerial vehicle inspection path and detecting foreign matters in airport flight area

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QUAN Wen et al.: "Multi-class classification method based on SVM probability output and evidence theory", Computer Engineering, vol. 28, no. 5, pages 167-169 *
NIU Ben: "Research on key technologies of video-based foreign object detection and recognition on airport runways", China Doctoral Dissertations Full-text Database (Engineering Science and Technology II), pages 031-70 *
LU Sanlan: "Combined data fusion algorithm based on D-S evidence theory", Microelectronics & Computer, vol. 28, no. 1, pages 95-98 *

Similar Documents

Publication Publication Date Title
CN109765930B (en) Unmanned aerial vehicle vision navigation
US10377485B2 (en) System and method for automatically inspecting surfaces
JP7030431B2 (en) Inspection support system and inspection support control program
CN106225787B (en) Unmanned aerial vehicle visual positioning method
Rudol et al. Human body detection and geolocalization for UAV search and rescue missions using color and thermal imagery
CN102253381B (en) System and method for automatically detecting foreign object debris (FOD) on airfield runways
US10891483B2 (en) Texture classification of digital images in aerial inspection
CN114373138A (en) Full-automatic unmanned aerial vehicle inspection method and system for high-speed railway
CN105373135A (en) Method and system for guiding airplane docking and identifying airplane type based on machine vision
CN112348034A (en) Crane defect detection system based on unmanned aerial vehicle image recognition and working method
Miranda et al. UAV-based Inspection of Airplane Exterior Screws with Computer Vision.
CN111709994B (en) Autonomous unmanned aerial vehicle visual detection and guidance system and method
CN110110112A (en) It is a kind of based on liftable trolley around machine check method and system
CN109946751A (en) A kind of automatic detection method of airfield runway FOD of unmanned plane
Vijayanandh et al. Numerical study on structural health monitoring for unmanned aerial vehicle
Rice et al. Automating the visual inspection of aircraft
EP4063279B1 (en) Automated assessment of aircraft structure damage
WO2022247597A1 (en) Papi flight inspection method and system based on unmanned aerial vehicle
CN113643254A (en) Efficient collection and processing method for laser point cloud of unmanned aerial vehicle
CN111950456A (en) Intelligent FOD detection method and system based on unmanned aerial vehicle
CN113378754B (en) Bare soil monitoring method for construction site
CN108225735B (en) Precision approach indicator flight verification method based on vision
CN105447431A (en) Docking airplane tracking and positioning method and system based on machine vision
CN113706496A (en) Aircraft structure crack detection method based on deep learning model
Majidi et al. Real time aerial natural image interpretation for autonomous ranger drone navigation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination