CN112686172A - Method and device for detecting foreign matters on airport runway and storage medium - Google Patents


Info

Publication number
CN112686172A
CN112686172A (application CN202011639474.2A)
Authority
CN
China
Prior art keywords
image
airport runway
detection
initial
foreign matter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011639474.2A
Other languages
Chinese (zh)
Other versions
CN112686172B (en)
Inventor
许学凡
陈晓林
姜官男
谭越
龚成
刘峰
黄高阳
Current Assignee
Shanghai Institute of Microwave Technology CETC 50 Research Institute
Original Assignee
Shanghai Institute of Microwave Technology CETC 50 Research Institute
Priority date
Filing date
Publication date
Application filed by Shanghai Institute of Microwave Technology CETC 50 Research Institute
Priority to CN202011639474.2A
Publication of CN112686172A
Application granted
Publication of CN112686172B
Legal status: Active
Anticipated expiration


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method and a device for detecting foreign matter on an airport runway, and a storage medium. The detection method comprises the following steps: acquiring a first image, wherein the first image is a non-first frame in an initial image stream; performing super-resolution processing on the first image to obtain a second image; and inputting the second image into a trained airport runway foreign matter detection model to obtain a first detection result. The method and device can effectively mitigate the difficulty of identifying FOD, the high FOD false-detection rate, and the poor real-time performance that image blurring causes in some cases, improving the speed and accuracy of FOD detection and effectively strengthening the ability to detect FOD targets in blurred image frames.

Description

Method and device for detecting foreign matters on airport runway and storage medium
Technical Field
The invention relates to the technical field of deep learning, in particular to a method and a device for detecting foreign matters on an airport runway and a storage medium.
Background
Airport runway Foreign Objects (FOD) generally refer to foreign objects that can damage an aircraft or its systems. Typical FOD targets include concrete and asphalt fragments, metal parts, rubber fragments, plastic products, and bird carcasses. In recent years, FOD has frequently caused take-off and landing accidents at airports, with serious casualties. The safety of aircraft take-off and landing has always been a core part of airport safety management, and rapid, accurate FOD detection has become an important and effective technology for guaranteeing flight safety.
Generally, FOD can be detected from images. However, the image acquisition environment on an airport runway can be complicated: images may be captured in strong wind, at night, or against backlight, so the acquired images may be blurred and FOD detection performance may suffer.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a method and a device for detecting foreign matters on an airport runway and a storage medium.
The method for detecting foreign matter on an airport runway provided by the embodiment of the invention comprises: acquiring a first image, wherein the first image is a non-first frame in an initial image stream;
performing super-resolution processing on the first image to obtain a second image;
and inputting the second image into a trained airport runway foreign matter detection model to obtain a first detection result.
The embodiment of the invention also provides an airport runway foreign matter detection device, which comprises:
the first acquisition module is used for acquiring a first image, wherein the first image is a non-first frame in the initial image stream;
the processing module is used for carrying out super-resolution processing on the first image to obtain a second image;
and the detection module is used for inputting the second image into the trained airport runway foreign matter detection model to obtain a first detection result.
The embodiment of the invention also provides a readable storage medium, wherein a program or an instruction is stored on the readable storage medium, and the program or the instruction is executed by a processor to realize the steps of the airport runway foreign matter detection method.
In the embodiment of the application, a first image is obtained, super-resolution processing is performed on the first image to obtain a second image, and the second image is input into a trained airport runway foreign matter detection model to obtain a first detection result. By combining super-resolution processing with the airport runway foreign matter detection model to detect FOD in the first image, the embodiment can effectively mitigate the difficulty of identifying FOD, the high FOD false-detection rate, and the poor real-time performance caused by image blurring in some cases, improving the speed and accuracy of FOD detection and effectively strengthening the ability to detect FOD targets in blurred image frames.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a schematic flow chart of a method for detecting foreign objects on an airport runway according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating super-resolution processing performed on a first image according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating a process of obtaining a trained airfield runway foreign object detection model according to an embodiment of the present invention;
FIG. 4 is a diagram of an exemplary model framework for an initial airport runway foreign object detection model in accordance with an embodiment of the present invention;
fig. 5 is a schematic flow chart of a method for detecting foreign objects on an airport runway according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an airport runway foreign object detection device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit it in any way. It should be noted that persons skilled in the art can make variations and modifications without departing from the spirit of the invention, all of which fall within the scope of the present invention.
In the description of the present invention, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
As shown in fig. 1, the method for detecting a foreign object on an airport runway according to an embodiment of the present invention includes:
step 101, acquiring a first image, wherein the first image is a non-first frame in an initial image stream;
step 102, performing super-resolution processing on the first image to obtain a second image;
step 103, inputting the second image into the trained airport runway foreign matter detection model to obtain a first detection result.
The initial image stream may be video captured by optical video acquisition equipment such as a camera. In a practical application scenario, such equipment is usually installed outdoors within roughly 300 meters of the airport runway, and vibration caused by wind, aircraft take-off and landing, and the like makes the video shake and blur severely at the longest focal length. In other words, blurred frames often exist in the initial image stream captured for airport runway Foreign Object (FOD) detection.
In this embodiment, the first image may be an image of a non-first frame in the initial image stream, and considering that the first image may be blurred, the first image may be subjected to super resolution processing to obtain the second image. The super-resolution processing can be performed based on a deep learning model, and in general, a low-definition or blurred image can be reconstructed into a high-definition image through the super-resolution processing, so that the detection accuracy of FOD in a blurred image frame can be improved.
The detection of the FOD in the second image can be specifically realized by a trained airport runway foreign matter detection model. The airport runway foreign matter detection model may be a deep learning model, such as a neural network, a support vector machine, or a random forest, which is not limited here. For simplicity, the description mainly takes a neural network as the example deep learning model.
After the second image is input into the trained airport runway foreign matter detection model, the obtained first detection result may be the position and type of the FOD in the second image. For example, in a visualization, a prediction box framing the FOD may be displayed in the second image, together with a predicted type such as concrete asphalt fragment, metal part, or plastic product.
In the embodiment of the application, a first image is obtained, super-resolution processing is performed on the first image to obtain a second image, and the second image is input into a trained airport runway foreign matter detection model to obtain a first detection result. By combining super-resolution processing with the airport runway foreign matter detection model to detect FOD in the first image, the embodiment can effectively mitigate the difficulty of identifying FOD, the high FOD false-detection rate, and the poor real-time performance caused by image blurring in some cases, improving the speed and accuracy of FOD detection and effectively strengthening the ability to detect FOD targets in blurred image frames.
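As a schematic illustration only, steps 101 to 103 can be strung together as a tiny pipeline. The names `super_resolve` and `fod_model`, and the toy stand-ins below, are hypothetical placeholders for the trained networks, not interfaces defined by this patent:

```python
def detect_fod(frame, super_resolve, fod_model):
    """Steps 101-103 as a pipeline: a non-first frame in, a detection result out."""
    second_image = super_resolve(frame)   # step 102: reconstruct a sharper image
    return fod_model(second_image)        # step 103: predict FOD boxes and classes

# toy stand-ins: "SR" doubles the resolution, the "model" reports one box per image
sr = lambda img: [[p for p in row for _ in (0, 1)] for row in img for _ in (0, 1)]
model = lambda img: [{"label": "metal part", "box": (0, 0, len(img), len(img[0]))}]
out = detect_fod([[1, 2], [3, 4]], sr, model)
assert out[0]["box"] == (0, 0, 4, 4)
```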
Optionally, as shown in fig. 2, performing super-resolution processing on the first image to obtain a second image in step 102 includes:
step 201, acquiring Structure (Structure) information and detail (Details) information of the first image, and Structure information and Details information of a third image; the third image is a frame image before the first image in the initial image stream;
step 202, determining target Structure information and target Details information according to the Structure information and Details information of the first image and the Structure information and Details information of the third image;
step 203, splicing the target Structure information and the target Details information to obtain the second image.
Specifically, in step 201 the blurred video frame is decomposed into Structure information and Details information: the Structure part models the low-frequency content of the images and the motion between frames, while the Details part captures fine high-frequency content whose appearance changes only slightly.
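One simple way to realize such a split can be sketched as follows. The block-average low-pass used here is an assumption for illustration (the patent does not fix the decomposition operator), but it shows the key invariant: the two parts sum back to the original frame exactly.

```python
import numpy as np

def decompose(frame, factor=2):
    """Split a frame into low-frequency Structure and high-frequency Details.

    Structure: block-average downscale then nearest-neighbour upscale (a low-pass).
    Details:   the residual, so that structure + details == frame exactly.
    """
    h, w = frame.shape
    small = frame.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    structure = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    details = frame - structure
    return structure, details

frame = np.arange(16.0).reshape(4, 4)
s, d = decompose(frame)
assert np.allclose(s + d, frame)  # lossless split into the two branches
```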
In step 202, the Structure information and Details information may be processed and fused, outputting the corresponding high-resolution Structure and Details results, that is, the target Structure information and target Details information. For the Structure branch, the current Structure information, the Structure information estimated from the previous frame, and the hidden-state information estimated from the previous frame may be concatenated along the channel dimension, then passed through a 3x3 convolutional layer and several SD (Structure-Detail) modules for feature fusion; the output is further processed by another 3x3 convolution and an upsampling layer, finally yielding the high-resolution Structure result of the current frame. The Details branch follows the same procedure, except that its inputs are the Details information, the Details information estimated from the previous frame, and the hidden-state information estimated from the previous frame, and its output is the high-resolution Details result of the current frame.
In step 203, the Structure and Details high-resolution results may be combined to output the final super-resolution result. For example, based on step 202, the two results may be summed and the sum passed through a 3x3 convolution to obtain the hidden state of the current frame; meanwhile, each result is passed through a 3x3 convolution, the two are concatenated along the channel dimension, and the concatenation is fed into another 3x3 convolution and an upsampling layer to obtain the super-resolution result of the current frame.
In summary, in this embodiment, super-resolution reconstruction of the low-resolution image may be performed using the correlation between multiple video frames, based on video-frame super-resolution technology, effectively improving the quality of the image fed into the trained airport runway foreign matter detection model and the accuracy of FOD identification.
Optionally, as shown in fig. 3, before the second image is input into the trained airport runway foreign matter detection model to obtain the first detection result in step 103, the airport runway foreign matter detection method further includes:
step 301, establishing an initial airport runway foreign matter detection model;
step 302, inputting a training sample into the initial airport runway foreign matter detection model to obtain an initial detection result, wherein the training sample comprises a sample image and label information of the sample image;
step 303, adjusting network parameters of the initial airport runway foreign matter detection model according to the labeling information and the initial detection result, and obtaining the trained airport runway foreign matter detection model until a loss value of a loss function in the initial airport runway foreign matter detection model meets a preset condition.
In this embodiment, the initial airport runway foreign matter detection model may be regarded as a deep learning model that has not yet been sufficiently trained. Its network parameters, such as weights, may be adjusted through training samples; when the training effect is poor, its hyperparameters may also be adjusted, which is not specifically limited here.
Whether training of the initial airport runway foreign matter detection model is complete may be determined from the loss value of a loss function. For example, the training sample may include a sample image and label information obtained by annotating that image; after the training sample is input into the initial model, an initial detection result is output, such as a prediction box and predicted type for each FOD. The smaller the loss value, the closer the initial detection result is to the ground truth.
In this embodiment, when the loss value of the loss function in the initial airport runway foreign matter detection model satisfies the preset condition, the model may be considered sufficiently trained, and the trained airport runway foreign matter detection model is obtained. The preset condition may be that the loss value is smaller than a certain threshold, or that the loss value lies within an acceptable range given the available training samples.
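Steps 301 to 303 amount to the familiar loop of adjusting parameters until the loss meets the preset condition. The following toy (a 1-D least-squares fit, not the patent's network; all names are illustrative) sketches that stopping logic:

```python
def train_until(loss_threshold=1e-4, lr=0.1, max_epochs=10_000):
    """Schematic of steps 301-303: adjust a parameter by gradient descent
    until the loss value satisfies the preset condition (loss < threshold)."""
    samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, label): true weight is 2
    w = 0.0                                           # step 301: "initial model"
    for _ in range(max_epochs):
        loss = sum((w * x - y) ** 2 for x, y in samples) / len(samples)  # step 302
        if loss < loss_threshold:                     # preset condition met: stop
            break
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad                                # step 303: adjust parameters
    return w, loss

w, loss = train_until()
assert loss < 1e-4 and abs(w - 2.0) < 0.01
```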
In an example, before inputting a training sample into the initial airport runway foreign matter detection model to obtain an initial detection result in step 302, the airport runway foreign matter detection method further includes:
automatically labeling the obtained first sample image to obtain a first training sample;
performing image enhancement processing on the first training sample to obtain a second training sample;
wherein the training samples comprise the first training sample and the second training sample, and the image enhancement processing comprises at least one of the following processing modes: mosaic, rotation, saturation adjustment, exposure adjustment, and hue adjustment.
To illustrate the present example with a specific application scenario, the first sample image may be an image containing FOD samples, hereinafter referred to as an original FOD image. Specifically, different FOD samples may be collected from both sharp and blurred images, including golf balls, screws, lamps, and standard objects of different sizes (such as metal cylinders of diameter 10mm and height 10mm, diameter 15mm and height 10mm, and diameter 20mm and height 20mm), which helps to improve the volume and diversity of the subsequent training samples.
In this example, the training samples may include two aspects, namely, a first training sample obtained by labeling the original FOD image, and a second training sample obtained by performing enhancement processing on the first training sample. The following describes the acquisition method of each type of training sample.
The first training sample may be obtained by automatically labeling the original FOD image, for example with a professional image labeling tool such as labelImg or labelme, or by labeling with an existing, partially trained deep learning model. The first training sample is thus obtained by directly labeling the original photographs, while the second training sample is obtained by further processing the first: for example, rotation, cropping, and pixel-parameter adjustment may be applied to the first training sample, increasing the sample size and enriching diversity.
In addition, in this example the image enhancement processing may also include Mosaic processing: several images (generally four) are randomly selected and stitched into one image by random scaling, random cropping, and random arrangement. This greatly enriches the sample set and balances the numbers of FODs of different sizes.
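A minimal Mosaic sketch, assuming a 2x2 grid with a random split point and a random crop from each source image (the exact scaling and arrangement policy is not fixed by the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def mosaic(imgs, out=64):
    """Stitch four images into one out-by-out mosaic: a random split point defines
    four quadrants, and each source image is randomly cropped to fill one quadrant."""
    cy = rng.integers(out // 4, 3 * out // 4)
    cx = rng.integers(out // 4, 3 * out // 4)
    canvas = np.zeros((out, out))
    sizes = [(cy, cx), (cy, out - cx), (out - cy, cx), (out - cy, out - cx)]
    slots = [(slice(0, cy), slice(0, cx)), (slice(0, cy), slice(cx, out)),
             (slice(cy, out), slice(0, cx)), (slice(cy, out), slice(cx, out))]
    for img, (h, w), slot in zip(imgs, sizes, slots):
        y0 = rng.integers(0, img.shape[0] - h + 1)   # random crop position
        x0 = rng.integers(0, img.shape[1] - w + 1)
        canvas[slot] = img[y0:y0 + h, x0:x0 + w]
    return canvas

imgs = [rng.random((64, 64)) for _ in range(4)]
m = mosaic(imgs)
assert m.shape == (64, 64)
```

In a real pipeline the box labels of the four source images would be shifted and clipped to the new canvas as well; that bookkeeping is omitted here.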
After the first and second training samples are obtained, all training samples can be used to train the initial airport runway foreign matter detection model into the trained model. Because the training samples have been enhanced, their volume, diversity, and quality are higher, and the trained model is more robust. In addition, in this embodiment the labeling and enhancement of the FOD images may be performed automatically; that is, the flow from acquiring the original FOD image to training the detection model can be considered an end-to-end training process, which helps satisfy the large demand for training samples created by the variety of FOD types and sizes and ensures the detection accuracy of the trained airport runway foreign matter detection model.
In practical applications, the original FOD images can be acquired in different environments: for example, against asphalt and cement runway backgrounds, in daytime and night conditions, in various extreme weather, and with the camera at different distances from and heights above the runway. This promotes the generalization of the resulting airport runway foreign matter detection model and improves the accuracy of FOD detection.
Combining the automatic labeling with the training of the initial airport runway foreign matter detection model, the embodiment of the application realizes an end-to-end training process, which helps to meet the large demand for training samples caused by the variety of FOD types and sizes and ensures the detection accuracy of the trained model.
Further optionally, in step 301, establishing an initial airport runway foreign object detection model may include:
an end-to-end neural network based on YOLOv4 is built, a guide file CFG is set, and an airport runway foreign matter category label is set.
Specifically, in this embodiment, the training process for the initial airport runway foreign object detection model may be summarized as follows:
step S11: collecting different FOD samples, including FOD samples with clear images and FOD samples with blurred images, wherein the collected FOD samples comprise golf balls, screws, lamps and standard objects with different sizes (a metal cylinder with the diameter of 10mm and the height of 10mm, a metal cylinder with the diameter of 15mm and the height of 10mm, a metal cylinder with the diameter of 20mm and the height of 20 mm);
step S12: making the FOD image mixed data set, specifically: marking the position coordinates and category of each FOD in the FOD images with the professional image labeling tool LabelImg, and saving the labeled data in the YOLO format;
step S13: building an end-to-end FOD detection neural network, specifically: building the corresponding end-to-end algorithm framework, setting the relevant parameters of the CFG file, and setting the corresponding FOD category labels;
step S14: training the end-to-end neural network model. First a pre-trained model is loaded to initialize the parameters, which helps accelerate model convergence; then the input mixed data set images are preprocessed, specifically by rotation and by saturation, exposure, and hue adjustment; finally model training is started, hyperparameters are continuously tuned, and after multiple iterations the optimal model, that is, the trained airport runway foreign matter detection model, is obtained.
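The saturation, exposure, and hue adjustments in step S14 can be sketched on a single HSV pixel. The jitter ranges below are illustrative assumptions, not values taken from the patent:

```python
import random

def jitter(pixel, saturation=0.2, exposure=0.2, hue=0.05):
    """Randomly perturb one HSV pixel (all components in [0, 1]):
    hue shifts and wraps; saturation and exposure (value) scale and clip."""
    h, s, v = pixel
    h = (h + random.uniform(-hue, hue)) % 1.0
    s = min(1.0, max(0.0, s * random.uniform(1 - saturation, 1 + saturation)))
    v = min(1.0, max(0.0, v * random.uniform(1 - exposure, 1 + exposure)))
    return h, s, v

random.seed(0)
h, s, v = jitter((0.5, 0.8, 0.9))
assert 0.0 <= h < 1.0 and 0.0 <= s <= 1.0 and 0.0 <= v <= 1.0
```

In practice the same transform is applied to every pixel of an image, typically vectorized; the per-pixel form above just makes the three adjustments explicit.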
Optionally, the initial airport runway foreign object detection model comprises: the system comprises an input module, a backbone network, a fusion feature extraction module and a detection classification module;
the input end of the input module is used for receiving the second training sample; the input module, backbone network, fusion feature extraction module, and detection classification module are connected in sequence; and the detection classification module is used for outputting a prediction box and a classification label.
The following illustrates specific implementations and functions of the modules in the initial airport runway foreign object detection model:
1) Input module: the innovations here mainly concern the input side during training, chiefly Mosaic image enhancement, Cross mini-Batch Normalization (CmBN), and Self-Adversarial Training (SAT). Mosaic image enhancement stitches four FOD images together by random scaling, random cropping, and random arrangement, which greatly enriches the detection data set; in particular, random scaling adds many small targets, making the network more robust. It also lowers the GPU requirement, so the mini-batch size need not be large and a single GPU can achieve good results. CmBN collects statistics across the mini-batches within a single batch, keeping accuracy within a reasonable range even when the batch size is small, and is likewise suited to single-GPU training. Self-adversarial training is a data expansion technique: an adversarial sample adds slight perturbations to an original sample so that, although the human eye sees no difference, the model is induced to misclassify. It proceeds in two stages: in the first stage, the neural network modifies the original image rather than the network weights, performing an adversarial attack on itself and creating the illusion that there is no target in the image; in the second stage, the network is trained to perform normal target detection on the modified image.
2) Backbone network: CSPDarknet53, the Mish activation function, and Dropblock may be combined. CSPDarknet53 combines Darknet53 with CSPNet (Cross Stage Partial Network): the CSP module first splits the feature map of the base layer into two parts and then merges them through a cross-stage hierarchical structure, reducing computation while preserving accuracy, so CSPNet effectively alleviates the heavy computation at inference. The formula of the Mish activation function is given in formula (1).
f(x) = x·tanh(ln(1 + e^x))  (1)
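Formula (1) maps directly to code; a quick numeric check of the Mish activation:

```python
import math

def mish(x):
    """Mish activation from formula (1): f(x) = x * tanh(ln(1 + e^x))."""
    return x * math.tanh(math.log1p(math.exp(x)))

assert mish(0.0) == 0.0
assert abs(mish(10.0) - 10.0) < 1e-3   # nearly the identity for large positive x
assert -0.31 < mish(-1.0) < -0.30      # smooth and slightly negative for x < 0
```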
The optimized model uses Dropblock, which drops whole contiguous regions of the feature map; this simplifies the network without affecting overall detection accuracy.
In one example, the CSPDarknet53 structure described above may serve as the Backbone network.
3) Fusion feature extraction module (Neck): layers such as the SPP module and the FPN + PAN structure are inserted between the backbone network and the final output layer. The SPP module applies max pooling with kernels k = {1×1, 5×5, 9×9, 13×13} and then concatenates the feature maps of different scales, effectively enlarging the receptive field of the trunk features and clearly separating out the most important context features. The FPN layer conveys strong semantic features top-down, while the feature pyramid conveys strong localization features bottom-up; combining the two aggregates parameters for different detection layers from different trunk layers, further improving the feature extraction capability.
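A sketch of the SPP pooling described above, assuming same-padding max pooling so that every branch keeps the spatial size and the branches can be concatenated on the channel axis:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def spp(fmap, kernels=(1, 5, 9, 13)):
    """Max-pool a (C, H, W) feature map at several kernel sizes with 'same'
    padding, then concatenate the results along the channel axis."""
    outs = []
    for k in kernels:
        p = k // 2  # same padding for odd k
        padded = np.pad(fmap, ((0, 0), (p, p), (p, p)), constant_values=-np.inf)
        win = sliding_window_view(padded, (k, k), axis=(1, 2))  # (C, H, W, k, k)
        outs.append(win.max(axis=(-1, -2)))
    return np.concatenate(outs, axis=0)  # (len(kernels) * C, H, W)

x = np.random.default_rng(1).random((2, 8, 8))
y = spp(x)
assert y.shape == (8, 8, 8)
assert np.allclose(y[:2], x)  # the 1x1 branch is the identity
```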
4) Detection classification module (Prediction): the improvements are the CIOU_Loss function used during training, and replacing the NMS used for prediction-box filtering with DIOU_nms. The formula of the CIOU_Loss function is given in formula (2).
CIOU_Loss = 1 - IOU + ρ²(b, b^gt)/c² + αv, where α = v/((1 - IOU) + v)  (2)
wherein IOU is the intersection-over-union between the prediction box and the real box; b and b^gt respectively denote the center points of the prediction box and the real box; ρ(·) denotes the Euclidean distance; c denotes the diagonal length of the smallest rectangle enclosing the prediction and real boxes; and v is a parameter measuring aspect-ratio consistency, whose expression is given in formula (3).
v = (4/π²)·(arctan(w^gt/h^gt) - arctan(w^p/h^p))²  (3)
wherein h^gt and w^gt respectively denote the height and width of the real box, and h^p and w^p the height and width of the prediction box.
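Formulas (2) and (3) can be checked numerically. The α weighting below follows the standard CIoU definition, which the patent text does not spell out, so treat it as an assumption:

```python
import math

def ciou_loss(pred, gt):
    """CIoU loss for boxes given as (x1, y1, x2, y2): 1 - IoU + rho^2/c^2 + alpha*v."""
    # intersection and union -> IoU
    ix = max(0.0, min(pred[2], gt[2]) - max(pred[0], gt[0]))
    iy = max(0.0, min(pred[3], gt[3]) - max(pred[1], gt[1]))
    inter = ix * iy
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter)
    # squared centre distance rho^2 and enclosing-box diagonal c^2
    rho2 = ((pred[0] + pred[2]) / 2 - (gt[0] + gt[2]) / 2) ** 2 + \
           ((pred[1] + pred[3]) / 2 - (gt[1] + gt[3]) / 2) ** 2
    cw = max(pred[2], gt[2]) - min(pred[0], gt[0])
    ch = max(pred[3], gt[3]) - min(pred[1], gt[1])
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency v, formula (3), and its weight alpha
    v = 4 / math.pi ** 2 * (math.atan((gt[2] - gt[0]) / (gt[3] - gt[1]))
                            - math.atan((pred[2] - pred[0]) / (pred[3] - pred[1]))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v

assert ciou_loss((0, 0, 2, 2), (0, 0, 2, 2)) == 0.0   # perfect overlap: zero loss
assert ciou_loss((0, 0, 1, 1), (2, 2, 3, 3)) > 1.0    # disjoint boxes are penalized
```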
According to the characteristics of the FOD data set, the network parameters and configuration files are changed before model training. The categories in the NAMES file are changed to the four corresponding FOD category labels, and in the DATA file the number of categories and the txt file paths for training and testing are modified. Then batch, subdivisions, steps, and similar settings in the CFG file are adapted to the experimental configuration. For the anchors of the yolo layers, anchor values suited to the data set are computed with the k-means algorithm and then written into the model configuration file. FIG. 4 illustrates a model framework diagram of the model training phase in one example.
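Computing data-set-specific anchors with k-means is usually done under a 1 - IoU distance on (width, height) pairs, the common YOLO recipe; a sketch under that assumption, with made-up sample boxes:

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs assuming a shared top-left corner (anchor k-means)."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            anchors[None, :, 0] * anchors[None, :, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=3, iters=20, seed=0):
    """k-means on box sizes with distance = 1 - IoU; returns k (w, h) anchors."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)  # max IoU = min distance
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors

boxes = np.array([[10., 10.], [12., 11.], [50., 60.], [55., 58.], [100., 90.]])
anchors = kmeans_anchors(boxes, k=3)
assert anchors.shape == (3, 2)
```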
Optionally, in step 103, after the second image is input into the trained airport runway foreign object detection model to obtain the first detection result, the airport runway foreign object detection method further includes:
obtaining a foreign matter detection result of the airport runway based on the first detection result and the second detection result, wherein the second detection result is a detection result obtained by detecting foreign matter on the airport runway through a radar.
In this embodiment, the foreign matter detection result for the runway may be obtained by linking the radar-detected FOD with the camera. In an actual application scenario, after the radar detects an FOD, the camera is linked to the area where the FOD is located, the trained airport runway foreign matter detection model is invoked, and the parameters in the model are parsed; target detection and classification are then performed on the FOD in the image frame, and the detection result accurately frames the FOD in the image and displays the label of the framed FOD. This composite method of radar detection plus video monitoring effectively improves detection accuracy and real-time performance and reduces the false alarm rate.
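The radar-camera linkage can be illustrated with a simple fusion rule: a camera detection confirms a radar hit only when both fall in the same runway region. The field names, the shared coordinate frame, and the distance threshold are assumptions for illustration, not the patent's exact logic.

```python
def fuse_detections(radar_hits, camera_boxes, region_radius=5.0):
    """Confirm a foreign object only when a radar hit and a camera
    detection land in the same runway region (illustrative rule)."""
    confirmed = []
    for rx, ry in radar_hits:
        for box in camera_boxes:
            cx = (box["x1"] + box["x2"]) / 2
            cy = (box["y1"] + box["y2"]) / 2
            # Radar hit and camera box centre are assumed mapped to the
            # same runway coordinates; require them to be close.
            if ((rx - cx) ** 2 + (ry - cy) ** 2) ** 0.5 <= region_radius:
                confirmed.append({"position": (rx, ry), "label": box["label"]})
                break
    return confirmed

hits = [(10.0, 4.0), (80.0, 2.0)]
boxes = [{"x1": 9, "y1": 3, "x2": 11, "y2": 5, "label": "screw"}]
print(fuse_detections(hits, boxes))  # only the hit near the camera box is confirmed
```

Requiring agreement between the two sensors is what suppresses single-sensor false alarms.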
Referring to fig. 5, in a specific application example, the method for detecting a foreign object on an airport runway provided by an embodiment of the present invention may specifically include:
Step 1: training and deploying an end-to-end neural network based on blurred images;
Step 2: performing super-resolution reconstruction on the blurred video frame;
Step 3: carrying out FOD detection and classification on the reconstructed video frame by using the trained end-to-end neural network model.
In step 1, the method specifically comprises the following steps:
Step 1.1: collecting different FOD samples, including FOD samples with clear images and FOD samples with blurred images, wherein the collected FOD samples comprise golf balls, screws, lamps, and standard objects of different sizes (a metal cylinder with a diameter of 10 mm and a height of 10 mm, a metal cylinder with a diameter of 15 mm and a height of 10 mm, and a metal cylinder with a diameter of 20 mm and a height of 20 mm);
Step 1.2: making the FOD image data set, which specifically comprises: marking the position coordinates and the category of the FOD in each FOD image by using the professional image marking tool LabelImg, and saving the marked data in the YOLO format; FIG. 2 shows the resulting label style.
Step 1.3: building an end-to-end FOD detection neural network, which specifically comprises: building the corresponding end-to-end algorithm framework, setting the relevant parameters of the CFG file, and setting the corresponding FOD category labels;
Step 1.4: carrying out end-to-end neural network model training. A pre-training model is first loaded to initialize the parameters, which helps accelerate model convergence; the input data set images are then preprocessed, the preprocessing specifically comprising rotation and adjustment of saturation, exposure and hue; finally, model training is started, the hyper-parameters are continuously adjusted for optimization, and the optimal model is obtained after multiple iterations.
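The preprocessing of Step 1.4 can be sketched as random rotation plus saturation, exposure and hue jitter. The jitter ranges and the HSV pixel representation below are illustrative assumptions, not the patent's exact settings.

```python
import random

def augment(image, rng):
    """Random 90-degree rotation plus saturation/exposure/hue jitter on an
    HSV image given as nested lists of (h, s, v) tuples, with h in
    [0, 360) and s, v in [0, 1]. Ranges are illustrative."""
    # Random 90-degree rotation: transpose, then reverse each row.
    if rng.random() < 0.5:
        image = [list(row)[::-1] for row in zip(*image)]
    s_gain = rng.uniform(0.7, 1.3)   # saturation jitter
    v_gain = rng.uniform(0.7, 1.3)   # exposure jitter
    h_shift = rng.uniform(-18, 18)   # hue jitter, degrees
    return [
        [((h + h_shift) % 360, min(1.0, s * s_gain), min(1.0, v * v_gain))
         for h, s, v in row]
        for row in image
    ]

rng = random.Random(42)
img = [[(120.0, 0.5, 0.5), (240.0, 0.5, 0.5)]]  # one row, two pixels
out = augment(img, rng)
```

Each training image would pass through such a transform before being fed to the network, multiplying the effective size of the FOD data set.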
In step 2, the method specifically comprises the following steps:
step 2.1: structure information and Details information of the blurred video frame are constructed.
Step 2.2: and performing relevant processing on the Structure information and the Details information, performing feature fusion, and outputting corresponding Structure and Details high-resolution results.
Step 2.3: and splicing the Structure and the Details high-resolution results, and outputting a final super-resolution result.
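Steps 2.1 to 2.3 can be sketched with a simplified stand-in: take a local mean as the Structure branch, the residual as the Details branch, upsample each branch, and splice them. The real method learns these branches with a network; the mean/residual split and nearest-neighbour upsampling here are assumptions for illustration only.

```python
def split_structure_details(img):
    """Split a grayscale image (list of lists) into Structure (local mean,
    low frequency) and Details (residual, high frequency)."""
    h, w = len(img), len(img[0])
    structure = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # 3x3 neighbourhood mean with edge clamping.
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            structure[y][x] = sum(vals) / 9.0
    details = [[img[y][x] - structure[y][x] for x in range(w)] for y in range(h)]
    return structure, details

def upsample2x(img):
    """Nearest-neighbour 2x upsampling of one branch."""
    return [[v for v in row for _ in (0, 1)] for row in img for _ in (0, 1)]

def fuse(structure, details):
    """Splice the two high-resolution branches into one output frame."""
    return [[s + d for s, d in zip(srow, drow)] for srow, drow in zip(structure, details)]

img = [[0.0, 1.0], [1.0, 0.0]]
s, d = split_structure_details(img)
sr = fuse(upsample2x(s), upsample2x(d))  # 4x4 super-resolved frame
```

Because Structure plus Details reconstructs each pixel, splicing the two upsampled branches recovers an enlarged version of the input in this toy setting; the learned network instead sharpens each branch before fusion.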
In step 3, the method specifically comprises the following steps:
Step 3.1: inputting an FOD image or video frame to be detected.
Step 3.2: constructing a super-resolution image based on the image or video frame.
Step 3.3: calling the detection model for detection.
Step 3.4: detecting, classifying and outputting the result: a frame is drawn at the detected coordinate position of the FOD, and the predicted FOD label is displayed on the frame.
Through end-to-end learning and super-resolution of blurred video frames, the method can be applied to the detection and classification of small FOD targets on different runways and at night, and can perform FOD detection and classification in different time periods on the same runway. Model training and testing require only a single GPU. The method solves the problems that blurred FOD images are difficult to identify at long focal lengths and that FOD systems suffer from high false detection rates and poor real-time performance; it improves the speed and accuracy of FOD detection and effectively enhances the ability to detect small targets in blurred video frames.
As shown in fig. 6, an embodiment of the present invention further provides an airport runway foreign matter detection apparatus, including:
a first obtaining module 601, configured to obtain a first image, where the first image is a non-first frame image in an initial image stream;
a processing module 602, configured to perform super-resolution processing on the first image to obtain a second image;
the detection module 603 is configured to input the second image into the trained airport runway foreign object detection model to obtain a first detection result.
Optionally, the processing module 602 includes:
the first acquisition unit is used for acquiring the Structure information and the Details information of the first image, and the Structure information and the Details information of the third image; the third image is a frame image before the first image in the initial image stream;
the determining unit is used for determining target Structure information and target Details information according to the Structure information and Details information of the first image and the Structure information and Details information of the third image;
and the second acquisition unit is used for splicing the target Structure information and the target Details information to obtain the second image.
Optionally, the airfield runway foreign matter detection device further comprises:
the system comprises an establishing module, a detecting module and a judging module, wherein the establishing module is used for establishing an initial airport runway foreign matter detection model;
the second acquisition module is used for inputting a training sample into the initial airport runway foreign matter detection model to obtain an initial detection result, wherein the training sample comprises a sample image and label information of the sample image;
and the adjustment acquisition module is used for adjusting the network parameters of the initial airport runway foreign matter detection model according to the labeling information and the initial detection result until the loss value of the loss function in the initial airport runway foreign matter detection model meets a preset condition, and acquiring the trained airport runway foreign matter detection model.
Optionally, the airfield runway foreign matter detection device further comprises:
the third acquisition module is used for automatically labeling the acquired first sample image to obtain a first training sample;
the fourth acquisition module is used for carrying out image enhancement processing on the first training sample to obtain a second training sample;
wherein the training samples comprise the first training sample and the second training sample, and the image enhancement processing comprises at least one of the following processing modes: mosaic, rotation, saturation adjustment, exposure adjustment, and hue adjustment.
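Of the enhancement modes listed above, Mosaic can be sketched as tiling four images into one composite. A real pipeline also remaps the box labels into the composite's coordinates, which is omitted in this illustration.

```python
def mosaic(imgs):
    """Mosaic augmentation sketch: tile four equally sized grayscale images
    (lists of lists) into one 2x2 composite."""
    a, b, c, d = imgs
    top = [ra + rb for ra, rb in zip(a, b)]      # left|right of upper half
    bottom = [rc + rd for rc, rd in zip(c, d)]   # left|right of lower half
    return top + bottom

tiles = [[[i] * 2 for _ in range(2)] for i in range(4)]  # four 2x2 images filled 0..3
m = mosaic(tiles)
print(m)  # → [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]
```

Mosaic exposes the detector to four image contexts per training sample, which is particularly helpful for small targets such as FOD.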
Optionally, the establishing module may be specifically configured to:
an end-to-end neural network based on YOLOv4 is built, a guide file CFG is set, and an airport runway foreign matter category label is set.
Optionally, the initial airport runway foreign object detection model comprises: an input module, a backbone network, a fusion feature extraction module and a detection classification module;
the input end of the input module is used for receiving the second training sample, the input module, the backbone network, the fusion feature extraction module and the detection classification module are sequentially connected, and the detection classification module is used for outputting a prediction frame and a classification label.
Optionally, the input module is trained using cmBN and SAT self-adversarial training;
the backbone network comprises a BackBone network in which the Mish activation function and the Dropblock mode are fused;
the fusion feature extraction module comprises an SPP module and an FPN+PAN structure;
the detection classification module uses the CIOU_Loss function, and selects the DIOU_nms mode when screening prediction boxes.
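The DIOU_nms screening mentioned above can be sketched as standard greedy NMS with DIoU in place of plain IoU, so that overlapping boxes with distant centres are less likely to be suppressed. The 0.5 threshold is illustrative.

```python
def diou(b1, b2):
    """DIoU of two (x1, y1, x2, y2) boxes: IoU minus the normalised
    centre-distance term used by DIOU_nms."""
    iw = max(0.0, min(b1[2], b2[2]) - max(b1[0], b2[0]))
    ih = max(0.0, min(b1[3], b2[3]) - max(b1[1], b2[1]))
    inter = iw * ih
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    iou = inter / (a1 + a2 - inter)
    c1x, c1y = (b1[0] + b1[2]) / 2, (b1[1] + b1[3]) / 2
    c2x, c2y = (b2[0] + b2[2]) / 2, (b2[1] + b2[3]) / 2
    rho2 = (c1x - c2x) ** 2 + (c1y - c2y) ** 2
    cw = max(b1[2], b2[2]) - min(b1[0], b2[0])
    ch = max(b1[3], b2[3]) - min(b1[1], b2[1])
    return iou - rho2 / (cw ** 2 + ch ** 2)

def diou_nms(dets, threshold=0.5):
    """Greedy NMS over (box, score) pairs: keep the highest-scoring box,
    drop boxes whose DIoU with it exceeds the threshold, repeat."""
    dets = sorted(dets, key=lambda d: d[1], reverse=True)
    kept = []
    while dets:
        best = dets.pop(0)
        kept.append(best)
        dets = [d for d in dets if diou(best[0], d[0]) <= threshold]
    return kept

dets = [((10, 10, 50, 50), 0.9), ((12, 12, 52, 52), 0.8), ((200, 200, 240, 240), 0.7)]
print([score for _, score in diou_nms(dets)])  # the overlapping 0.8 box is suppressed
```

Compared with plain IoU-based NMS, the centre-distance term helps keep two genuinely distinct but overlapping small objects.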
Optionally, the airfield runway foreign matter detection device further comprises:
the fifth acquisition module is used for acquiring a foreign matter detection result of the airport runway based on the first detection result and the second detection result; and the second detection result is a detection result obtained by detecting the foreign matters on the airfield runway through a radar.
It should be noted that the airport runway foreign matter detection device provided by the embodiment of the present invention is a device corresponding to the above airport runway foreign matter detection method, and all implementation manners in the above method embodiment are applicable to the embodiment of the device, and the same technical effects can be achieved.
The embodiment of the invention also provides a readable storage medium, wherein a program or an instruction is stored on the readable storage medium, and the program or the instruction is executed by a processor to realize the steps of the airport runway foreign matter detection method.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. An airport runway foreign matter detection method, comprising:
acquiring a first image, wherein the first image is a non-first frame image in an initial image stream;
performing super-resolution processing on the first image to obtain a second image;
and inputting the second image into a trained airport runway foreign matter detection model to obtain a first detection result.
2. The method of claim 1, wherein performing super resolution processing on the first image to obtain a second image comprises:
acquiring the Structure information and the Details information of the first image, and the Structure information and the Details information of a third image; wherein the third image is a frame image before the first image in the initial image stream;
determining target Structure information and target Details information according to the Structure information and Details information of the first image and the Structure information and Details information of the third image;
and splicing the target Structure information and the target Details information to obtain the second image.
3. The method of claim 1, wherein before inputting the second image into the trained airport runway foreign object detection model to obtain the first detection result, the method further comprises:
establishing an initial airport runway foreign matter detection model;
inputting a training sample into the initial airport runway foreign matter detection model to obtain an initial detection result, wherein the training sample comprises a sample image and marking information of the sample image;
and adjusting the network parameters of the initial airport runway foreign matter detection model according to the labeling information and the initial detection result until the loss value of the loss function in the initial airport runway foreign matter detection model meets a preset condition, and obtaining the trained airport runway foreign matter detection model.
4. The method of claim 3, wherein before inputting training samples into the initial airport runway foreign object detection model to obtain initial detection results, the method further comprises:
automatically labeling the obtained first sample image to obtain a first training sample;
performing image enhancement processing on the first training sample to obtain a second training sample;
wherein the training samples comprise the first training sample and the second training sample, and the image enhancement processing comprises at least one of the following processing modes: mosaic, rotation, saturation adjustment, exposure adjustment, and hue adjustment.
5. The method of claim 3, wherein said establishing an initial airport runway foreign object detection model comprises:
an end-to-end neural network based on YOLOv4 is built, a guide file CFG is set, and an airport runway foreign matter category label is set.
6. The method of claim 3, wherein the initial airport runway foreign object detection model comprises: an input module, a backbone network, a fusion feature extraction module and a detection classification module;
the input end of the input module is used for receiving the second training sample, the input module, the backbone network, the fusion feature extraction module and the detection classification module are sequentially connected, and the detection classification module is used for outputting a prediction frame and a classification label.
7. The method of claim 6, wherein the input module is trained using cmBN and SAT self-adversarial training;
the backbone network comprises a BackBone network in which the Mish activation function and the Dropblock mode are fused;
the fusion feature extraction module comprises an SPP module and an FPN+PAN structure;
the detection classification module uses the CIOU_Loss function, and selects the DIOU_nms mode when screening prediction boxes.
8. The method of claim 1, wherein after inputting the second image into a trained airport runway foreign object detection model and obtaining a first detection result, the method further comprises:
obtaining a foreign matter detection result of the airport runway based on the first detection result and the second detection result; and the second detection result is a detection result obtained by detecting the foreign matters on the airfield runway through a radar.
9. An airport runway foreign matter detection device, comprising:
the first acquisition module is used for acquiring a first image, the first image being a non-first frame image in the initial image stream;
the processing module is used for carrying out super-resolution processing on the first image to obtain a second image;
and the detection module is used for inputting the second image into the trained airport runway foreign matter detection model to obtain a first detection result.
10. A readable storage medium, storing thereon a program or instructions which, when executed by a processor, implement the steps of the airport runway foreign matter detection method according to any one of claims 1 to 8.
CN202011639474.2A 2020-12-31 2020-12-31 Airport runway foreign matter detection method, device and storage medium Active CN112686172B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011639474.2A CN112686172B (en) 2020-12-31 2020-12-31 Airport runway foreign matter detection method, device and storage medium


Publications (2)

Publication Number Publication Date
CN112686172A true CN112686172A (en) 2021-04-20
CN112686172B CN112686172B (en) 2023-06-13

Family

ID=75456655



Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2862762C (en) * 2012-08-31 2015-03-10 Systemes Pavemetrics Inc. Method and apparatus for detection of foreign object debris
WO2018125014A1 (en) * 2016-12-26 2018-07-05 Argosai Teknoloji Anonim Sirketi A method for foreign object debris detection
CN108764202A (en) * 2018-06-06 2018-11-06 平安科技(深圳)有限公司 Airport method for recognizing impurities, device, computer equipment and storage medium
CN109086656A (en) * 2018-06-06 2018-12-25 平安科技(深圳)有限公司 Airport foreign matter detecting method, device, computer equipment and storage medium
CN109766884A (en) * 2018-12-26 2019-05-17 哈尔滨工程大学 A kind of airfield runway foreign matter detecting method based on Faster-RCNN
CN110135296A (en) * 2019-04-30 2019-08-16 上海交通大学 Airfield runway FOD detection method based on convolutional neural networks
CN110717363A (en) * 2018-07-13 2020-01-21 印象认知(北京)科技有限公司 Fingerprint acquisition method and device applied to fingerprint registration
CN111060076A (en) * 2019-12-12 2020-04-24 南京航空航天大学 Method for planning routing of unmanned aerial vehicle inspection path and detecting foreign matters in airport flight area
CN111950456A (en) * 2020-08-12 2020-11-17 成都成设航空科技股份公司 Intelligent FOD detection method and system based on unmanned aerial vehicle


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ALEXEY BOCHKOVSKIY ET AL: "YOLOv4:Optimal Speed and Accuracy of Object Detection", 《ARXIV PREPRINT ARXIV》 *
张洪明: "《大学计算机基础》", 30 June 2005, 云南大学出版社 *
藤田一弥 等: "《实践深度学习》", 31 July 2020, 机械工业出版社 *
郭晓静等: "改进YOLOv3在机场跑道异物目标检测中的应用", 《计算机工程与应用》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313678A (en) * 2021-05-20 2021-08-27 上海北昂医药科技股份有限公司 Automatic sperm morphology analysis method based on multi-scale feature fusion
CN113269161A (en) * 2021-07-16 2021-08-17 四川九通智路科技有限公司 Traffic signboard detection method based on deep learning
CN113627305A (en) * 2021-08-03 2021-11-09 北京航空航天大学 Detection device and detection method for small-scale FOD on airport runway
CN113627305B (en) * 2021-08-03 2023-07-18 北京航空航天大学 Detection device and detection method for small-scale FOD on airport runway
CN113592002A (en) * 2021-08-04 2021-11-02 江苏网进科技股份有限公司 Real-time garbage monitoring method and system
CN114821484A (en) * 2022-06-27 2022-07-29 广州辰创科技发展有限公司 Airport runway FOD image detection method, system and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant