CN109086656B - Airport foreign matter detection method, device, computer equipment and storage medium - Google Patents

Airport foreign matter detection method, device, computer equipment and storage medium Download PDF

Info

Publication number
CN109086656B
Authority
CN
China
Prior art keywords
network
image
difference
training
foreign matter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810573790.0A
Other languages
Chinese (zh)
Other versions
CN109086656A (en
Inventor
叶明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201810573790.0A priority Critical patent/CN109086656B/en
Priority to PCT/CN2018/092611 priority patent/WO2019232830A1/en
Publication of CN109086656A publication Critical patent/CN109086656A/en
Application granted granted Critical
Publication of CN109086656B publication Critical patent/CN109086656B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an airport foreign matter detection method, an airport foreign matter detection device, computer equipment and a storage medium. The method acquires an original image and preprocesses it to obtain an image to be identified; inputs the image to be identified into a total difference-pyramid feature network recognition model for recognition to obtain a classification confidence map; and finally obtains a foreign matter detection result from the classification confidence map. This guarantees the detection precision and localization precision for tiny objects during airport foreign matter detection, and also improves the efficiency of airport foreign matter detection.

Description

Airport foreign matter detection method, device, computer equipment and storage medium
Technical Field
The invention relates to the field of image recognition, in particular to a method and a device for detecting airport foreign matters, computer equipment and a storage medium.
Background
Various abnormal objects, known as FOD (Foreign Object Debris), are often present on airport runways. FOD generally refers to any foreign object that may damage an aircraft or its systems, and is often called airport foreign matter. FOD comes in a wide variety of types, such as aircraft and engine fasteners (nuts, bolts, washers, fuses, etc.), machine tools, loose objects (nails, personal documents, pens, pencils, etc.), wildlife, leaves, stones and sand, pavement material, wood, plastic or polyethylene material, paper products, ice debris in the operations area, and so on. Both experiments and real cases have shown that foreign objects on the airport pavement can easily be sucked into an engine and cause engine failure. Debris can also accumulate in mechanical devices and interfere with the proper operation of the landing gear, flaps, and so on.
With the development of artificial intelligence, attempts have been made to detect airport foreign matter using deep learning object detection models. However, the existing deep learning object detection models mainly fall into two-step detection (two-stage detector) models (Fast R-CNN, Faster R-CNN, etc.) and single-step detection (single-stage detector) models (FCN, SSD, etc.). For the traditional two-step detection models, when the object occupies an extremely small fraction of the scene (less than one part in a thousand), region selection is difficult and the running speed is slow, so they are not suitable for scenes with real-time requirements. The traditional single-step detection models, on the other hand, are not sensitive enough to tiny objects, so the final detected position of a tiny object is prone to deviation.
Disclosure of Invention
In view of the above, it is desirable to provide a method, an apparatus, a computer device, and a storage medium for detecting airport foreign objects, which can improve the accuracy of airport foreign object recognition.
An airport foreign matter detection method comprising:
acquiring an original image, and preprocessing the original image to obtain an image to be identified;
inputting the image to be recognized into a total difference-pyramid feature network recognition model for recognition, and obtaining a classification confidence map, wherein the total difference-pyramid feature network recognition model is obtained by training a total difference network and a pyramid feature network by adopting a training sample set;
and acquiring a foreign matter detection result according to the classification confidence map, wherein the foreign matter detection result comprises the existence of airport foreign matters and the nonexistence of airport foreign matters.
An airport foreign matter detection apparatus comprising:
the image to be recognized acquisition module is used for acquiring an original image and preprocessing the original image to obtain an image to be recognized;
the classification confidence map acquisition module is used for inputting the image to be recognized into a total difference-pyramid feature network recognition model for recognition to acquire a classification confidence map, wherein the total difference-pyramid feature network recognition model is obtained by training a total difference network and a pyramid feature network by adopting a training sample set;
and the detection result acquisition module is used for acquiring a foreign matter detection result according to the classification confidence map, wherein the foreign matter detection result comprises the existence of airport foreign matters and the nonexistence of airport foreign matters.
A computer device comprising a memory, a processor and a computer program stored in said memory and executable on said processor, said processor implementing the steps of the above-mentioned airport foreign object detection method when executing said computer program.
A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned airport foreign object detection method.
According to the airport foreign matter detection method, the airport foreign matter detection device, the computer equipment and the storage medium, the original image is acquired and preprocessed to obtain an image to be recognized; the image to be recognized is input into the total difference-pyramid feature network recognition model for recognition to obtain a classification confidence map; and finally a foreign matter detection result is obtained from the classification confidence map. This guarantees the detection precision and localization precision for tiny objects during airport foreign matter detection, and also improves the efficiency of airport foreign matter detection.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a schematic diagram of an application environment of a method for detecting airport foreign objects according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an example of a method for detecting airport foreign objects according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating an exemplary step S10 of the airport foreign object detection method according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an exemplary step S20 of the airport foreign object detection method according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of an airport foreign object detection apparatus in one embodiment of the present invention;
FIG. 6 is a schematic diagram of a computing device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
The airport foreign object detection method provided by the application can be applied to the application environment shown in fig. 1, wherein a client (computer device) communicates with a server through a network. The client side obtains an original image in real time or obtains the original image by framing the video data, and sends the original image to the server side. And the server side preprocesses the original image after acquiring the original image, obtains a classification confidence map through a full difference-pyramid feature network recognition model, and finally obtains a detection result according to the classification confidence map. Among them, the client (computer device) may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices. The server can be implemented by an independent server or a server cluster composed of a plurality of servers.
In an embodiment, as shown in fig. 2, a method for detecting airport foreign objects is provided, which is described by taking the method applied to the server in fig. 1 as an example, and includes the following steps:
s10: and acquiring an original image, and preprocessing the original image to obtain an image to be identified.
The original images are images reflecting different positions of an airport. Alternatively, the original image may be an image stored in the client, or may be obtained by video framing from video data captured in real time. Specifically, the client side obtains video data, then performs framing processing on the video data according to a preset frame rate to obtain an original image, and sends the original image to the server side. In a specific embodiment, the client may also directly send the video data to the server, and the server performs framing processing on the video data after acquiring the video data to obtain an original image.
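As an illustration of the framing step described above, the following is a minimal sketch — an assumption for illustration, not part of the patent — of how frame indices could be chosen when framing video data at a preset frame rate lower than the source frame rate. The function name and parameters are hypothetical.

```python
def frame_indices(source_fps: float, target_fps: float, total_frames: int) -> list:
    """Return indices of the frames to keep so that roughly `target_fps`
    frames per second survive from a video captured at `source_fps`."""
    if target_fps >= source_fps:
        return list(range(total_frames))  # preset rate is not lower: keep all frames
    step = source_fps / target_fps        # keep one frame every `step` source frames
    indices, next_keep = [], 0.0
    for i in range(total_frames):
        if i >= next_keep:
            indices.append(i)
            next_keep += step
    return indices
```

For example, framing a 30 fps video at a preset rate of 10 fps would keep every third frame.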
Preprocessing the original image refers to enhancing the original image to improve subsequent detection precision. Many factors affect the original image as it is acquired: uneven illumination, limitations of the acquisition equipment, differing acquisition environments and so on all reduce the clarity of the original image and lower the subsequent recognition precision. Therefore, this step preprocesses the original image to improve subsequent detection precision. Optionally, the original image may be globally or locally enhanced with an image enhancement algorithm, and the enhanced original image is then sharpened to obtain the image to be identified. Preferably, the image enhancement algorithm may be a multi-scale Retinex algorithm, an adaptive histogram equalization algorithm, an optimized contrast algorithm, or the like. After the above processing, the image to be identified is obtained.
S20: and inputting the image to be recognized into a total difference-pyramid feature network recognition model for recognition to obtain a classification confidence map, wherein the total difference-pyramid feature network recognition model is obtained by training a total difference network and a pyramid feature network by adopting a training sample set.
The total difference-pyramid feature network identification model is a neural network identification model formed by adopting a total difference network (DenseNet) as an encoding network and a pyramid feature network (RefineNet) as a decoding network according to an encoding-decoding model.
Specifically, the total difference network is formed by concatenating different layers of a neural network model, so that the input of each layer contains the outputs of all preceding layers; in this way, the loss of tiny objects during the model's up-sampling can be avoided. The total difference network improves the propagation of information and gradients through the network: each layer obtains gradients directly from the loss function and receives the input signal directly, so deeper networks can be trained. This network structure also has a regularizing effect, and the total difference network improves network performance through feature reuse. Therefore, adopting the total difference network not only reduces the loss of tiny objects during up-sampling, but also improves training speed and reduces over-fitting.
The pyramid feature network is a multi-path refinement network: it extracts all the information produced during down-sampling and uses long-range network connections to obtain a high-resolution prediction network. Because the pyramid feature network uses fine-grained low-level features, it can enrich the high-level semantic information. The pyramid feature network also uses many RCUs (residual convolution units), which form short-range connections inside the network and benefit training. In addition, the pyramid feature network forms long-range connections with the total difference network, so gradients can be propagated effectively through the whole network; this increases the influence of low-level features on the final result and effectively improves the localization precision of objects (airport foreign matter).
The classification confidence map is a picture in which, after the image to be recognized has been detected, the different categories are marked in different ways. Optionally, different colors may be used to distinguish the categories in the image to be recognized. For example, objects that may appear in the original image include the runway, lawn, airport equipment (non-foreign objects), airport foreign matter and so on, so a different color can be assigned to each of these categories in advance. After the image to be recognized is input into the total difference-pyramid feature network recognition model for recognition, the model combines its judgment results for the different regions of the image with the pre-assigned colors to form the classification confidence map.
In one embodiment, airport foreign matter may also be labeled with more specific categories, such as engine fasteners (nuts, screws, washers, fuses, etc.), machine tools, loose objects (nails, personal documents, pens, pencils, etc.), animals, and so on. By assigning colors to these specific categories of airport foreign matter, the specific type of a foreign object can be further determined when it is recognized, which makes it convenient to take appropriate handling measures.
The total difference-pyramid feature network recognition model is obtained by training a total difference network and a pyramid feature network with a training sample set. The training sample set comprises training images, which are sample images used to train the total difference-pyramid feature network recognition model. Optionally, the training images may be obtained by installing video capture devices or image capture devices at different locations of the airport; after capturing the corresponding data, these devices send it to the server. If the server receives video data, it can frame the video data at a preset frame rate to obtain training images.
S30: and acquiring a foreign matter detection result according to the classification confidence map, wherein the foreign matter detection result comprises the existence of airport foreign matters and the nonexistence of airport foreign matters.
After the classification confidence map is acquired, the foreign matter detection result, which comprises the presence or absence of airport foreign matter, may be obtained from the different colors on the classification confidence map. For example, if airport foreign matter was assigned the color red in the preset settings, then after the classification confidence map is acquired, whether a red region exists in it determines the foreign matter detection result. If a red region exists in the classification confidence map, airport foreign matter exists in the image to be recognized, and the foreign matter detection result is that airport foreign matter is present. If no red region exists in the classification confidence map, no airport foreign matter exists in the image to be recognized, and the foreign matter detection result is that no airport foreign matter is present. Optionally, the foreign matter detection result may be conveyed by text, voice or a signal light, or by a combination of at least two of these. For example, when the foreign matter detection result indicates that airport foreign matter is present, a voice prompt can be issued together with a warning light, to better remind the relevant personnel to handle it.
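The color-based check described above can be sketched as follows. This is a minimal NumPy illustration under the assumption that the classification confidence map is an H x W x 3 RGB array and that airport foreign matter was pre-assigned pure red, as in the example; the names and the exact color encoding are hypothetical.

```python
import numpy as np

# Hypothetical preset: pure red marks airport foreign matter in the map.
FOREIGN_MATTER_COLOR = np.array([255, 0, 0])

def detect_foreign_matter(confidence_map: np.ndarray):
    """Return (found, mask): `found` says whether any pixel carries the
    foreign-matter color; `mask` marks exactly those pixels."""
    mask = np.all(confidence_map == FOREIGN_MATTER_COLOR, axis=-1)
    return bool(mask.any()), mask
```

The returned mask gives the position of the red region inside the image to be recognized, which could then be combined with a recognition identifier to locate the foreign matter in the airport.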
In one embodiment, when the classification confidence map contains a red region, the position information of the airport foreign matter may also be acquired, and the foreign matter detection result further includes this position information. Specifically, each image to be recognized may be assigned a recognition identifier in advance, which is used to locate the source of the image, for example the camera device that captured it. Therefore, when the classification confidence map contains a red region, the position of the red region within the image to be recognized can be obtained, and, combined with the recognition identifier of the image to be recognized, the actual position in the airport of the foreign matter corresponding to the red region can be determined.
In the embodiment, the original image is preprocessed to obtain the image to be identified, so that the subsequent detection precision is improved. And the identification model of the total difference-pyramid characteristic network is adopted to identify the image to be identified, so that the identification precision and the positioning precision of the tiny objects in the identification process are ensured, and the identification efficiency is also improved.
In an embodiment, as shown in fig. 3, in step S10, preprocessing an original image to obtain an image to be recognized specifically includes the following steps:
s11: and carrying out global enhancement processing on the original image by adopting a multi-scale retina algorithm.
The multi-scale Retinex (MSR) algorithm is an image enhancement algorithm used to reduce the influence of various degradations in the unprocessed original image (such as interference noise and loss of edge detail). Enhancing the original image with the multi-scale Retinex algorithm removes the illumination component of the original image, retains the reflectance component, and adjusts the gray dynamic range of the original image, so that the reflectance information corresponding to the original image is recovered and the enhancement effect is achieved.
Preferably, the global enhancement processing is performed on the original image using the multi-scale Retinex algorithm, specifically as follows.

The original image is globally enhanced with the following formula:

R(x,y) = \sum_{n=1}^{N} w_n \left[ \log G(x,y) - \log\big( F_n(x,y) * G(x,y) \big) \right]

where N is the number of scales, (x, y) are the coordinates of an original-image pixel, G(x, y) is the input of the multi-scale Retinex algorithm (the gray value of the original image), R(x, y) is the output of the multi-scale Retinex algorithm (the gray value of the original image after global enhancement), * denotes convolution, and w_n is the scale weight factor, subject to the constraint

\sum_{n=1}^{N} w_n = 1

F_n(x, y) is the n-th center-surround function, with the expression

F_n(x,y) = K_n \exp\!\left( -\frac{x^2 + y^2}{\sigma_n^2} \right)

where \sigma_n is the scale parameter of the n-th center-surround function, and the coefficient K_n must satisfy

\iint F_n(x,y) \, dx \, dy = 1

Specifically, the gray values G(x, y) of the original image are obtained with an image information acquisition tool; for each of the n input scale parameters \sigma_n, the K_n satisfying the normalization constraint above is determined; then the center-surround functions F_n(x, y) and G(x, y) are substituted into the first formula above to compute the gray values R(x, y) of the globally enhanced original image.
where \sigma_n determines the size of the neighborhood of the center-surround function, which in turn determines the quality of the enhanced image: when \sigma_n is larger, the selected neighborhood range is larger, the influence of the original-image pixels on their surrounding pixels is smaller, and the local details of the original image are highlighted.

In a specific embodiment, the number of scales is set to N = 3, with correspondingly

\sigma_1 = 30, \sigma_2 = 110, \sigma_3 = 200

where \sigma_1, \sigma_2 and \sigma_3 correspond respectively to the low-gray, medium-gray and high-gray portions of the gray-value interval [0, 255] of the original image, and the weights are set to w_1 = w_2 = w_3 = 1/3. With these parameter settings, the multi-scale Retinex algorithm considers the low, medium and high gray scales simultaneously and therefore obtains a better effect. By combining several scales, the multi-scale Retinex algorithm achieves good adaptivity, highlights the texture details of dark regions of the image, and adjusts the dynamic range of the image, thereby achieving image enhancement.
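The multi-scale Retinex computation described above can be sketched as follows — a naive NumPy illustration, not the patent's implementation. For brevity the surround kernel is truncated to a small radius and the convolution is a direct loop; all function names are assumptions.

```python
import numpy as np

def gaussian_surround(sigma: float, radius: int) -> np.ndarray:
    """Center-surround function F_n, normalized so the kernel sums to 1
    (the discrete analogue of the constraint on the coefficient K_n)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    f = np.exp(-(x ** 2 + y ** 2) / sigma ** 2)
    return f / f.sum()

def convolve_same(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Direct 'same'-size convolution with edge replication (illustration only).
    The kernel is symmetric, so correlation equals convolution here."""
    r = kernel.shape[0] // 2
    padded = np.pad(img, r, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 2 * r + 1, j:j + 2 * r + 1] * kernel)
    return out

def msr(gray: np.ndarray, sigmas=(30.0, 110.0, 200.0), radius=3) -> np.ndarray:
    """R(x,y) = sum_n w_n [log G(x,y) - log((F_n * G)(x,y))], with w_n = 1/N."""
    g = gray.astype(float) + 1.0  # small offset to avoid log(0)
    w = 1.0 / len(sigmas)
    out = np.zeros_like(g)
    for sigma in sigmas:
        out += w * (np.log(g) - np.log(convolve_same(g, gaussian_surround(sigma, radius))))
    return out
```

As a sanity check on the formula, a perfectly uniform image has no reflectance detail, so its MSR output is (numerically) zero everywhere.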
S12: and sharpening the original image subjected to the global enhancement processing by adopting a Laplace operator to obtain an image to be identified.
The Laplacian operator is a second-order differential operator, well suited to correcting image blur caused by diffuse reflection of light. Applying a Laplacian sharpening transform to an image reduces blur and improves clarity. Therefore, sharpening the globally enhanced original image highlights its edge detail features and improves the clarity of its contours. Sharpening refers to a transform that makes an image crisper, strengthening object boundaries and image details. After the globally enhanced original image is sharpened with the Laplacian operator, the detail features of the image edges are strengthened while halos are weakened, so the details of the original image are protected.
The Laplacian based on the second-order differential is defined as:

\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}

For the globally enhanced original image R(x, y), the discrete second derivatives are:

\frac{\partial^2 R}{\partial x^2} = R(x+1, y) + R(x-1, y) - 2R(x, y)

\frac{\partial^2 R}{\partial y^2} = R(x, y+1) + R(x, y-1) - 2R(x, y)

Thus the Laplacian \nabla^2 R(x, y) is:

\nabla^2 R(x, y) = R(x+1, y) + R(x-1, y) + R(x, y+1) + R(x, y-1) - 4R(x, y)

After the Laplacian \nabla^2 R(x, y) is obtained, each pixel gray value of the globally enhanced original image R(x, y) is sharpened according to the following formula, where g(x, y) is the sharpened pixel gray value:

g(x, y) = R(x, y) - \nabla^2 R(x, y)

The sharpened pixel gray value then replaces the original gray value at (x, y), yielding the image to be identified.
In one embodiment, the Laplacian \nabla^2 is implemented with a four-neighborhood sharpening template matrix H:

H = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}

and the globally enhanced original image is sharpened with the Laplacian operator by convolving it with the four-neighborhood template matrix H and applying the sharpening formula g(x, y) = R(x, y) - \nabla^2 R(x, y).
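A minimal NumPy sketch of four-neighborhood Laplacian sharpening — an illustration, not the patent's implementation. The template follows the common sign convention in which the sharpened value is g = R - lap(R), matching the sharpening formula in the text; edge pixels are handled here by replication, which is an assumption.

```python
import numpy as np

# Four-neighborhood Laplacian template (sign convention matching g = R - lap(R)).
LAPLACIAN_4 = np.array([[0,  1, 0],
                        [1, -4, 1],
                        [0,  1, 0]], dtype=float)

def laplacian(img: np.ndarray) -> np.ndarray:
    """Correlate the image with the four-neighborhood template, replicating edges.
    (The kernel is symmetric, so correlation equals convolution here.)"""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * LAPLACIAN_4)
    return out

def sharpen(img: np.ndarray) -> np.ndarray:
    """g(x, y) = R(x, y) - lap(R)(x, y): subtract the Laplacian response."""
    return img.astype(float) - laplacian(img)
```

Flat regions are left unchanged (their Laplacian is zero), while isolated bright pixels and edges are amplified, which is exactly the sharpening behavior described above.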
In this embodiment, the original image is globally enhanced with the multi-scale Retinex algorithm and the enhanced original image is then sharpened with the Laplacian operator, which strengthens the detail features of the image edges while weakening halos, thereby protecting the details of the original image. In addition, these steps are simple and convenient; the edge detail features of the resulting image to be identified are more distinct and its texture features are enhanced, which improves the recognition accuracy for the image to be identified.
In an embodiment, as shown in fig. 4, training a holonomic network and a pyramid feature network by using a training sample set specifically includes the following steps:
s21: and acquiring a training sample set, and carrying out classification and labeling on training images in the training sample set.
Classifying and labeling the training images refers to classifying the different objects in each training image. For example, objects that may appear in a training image include the runway, lawn, airport equipment (non-foreign objects) and airport foreign matter. Different labeling information is assigned to the different objects in the training images, which completes the classification and labeling of the training images.
S22: and training a total difference network by adopting the training images labeled by the training sample set in a classified manner to obtain a target output vector.
In this step, the total difference network is trained with the training images in the training sample set. In the total difference network, the training image input is denoted x_0; the network consists of L layers, and each layer applies a nonlinear transformation H_l(\cdot). Optionally, the nonlinear transformation may consist of ReLU (rectified linear unit) and pooling, or of BN (Batch Normalization), ReLU and a convolutional layer, or of BN, ReLU and pooling. BN uses normalization to adjust the distribution of the input values of every neuron in each layer of the neural network to a standard normal distribution with mean 0 and variance 1, so that the activation inputs fall in a region where the nonlinear function is sensitive to its input; this enlarges the gradients, avoids the vanishing-gradient problem, and greatly accelerates training. ReLU is a piecewise linear, one-sided suppression function: all negative inputs are output as 0 while positive inputs are kept unchanged. Through ReLU, the sparser model can better mine relevant features and fit the training data.

In this embodiment, let the output of the l-th layer of the total difference network be x_l. Each layer of the total difference network is directly connected to all preceding layers, that is:

x_l = H_l([x_0, x_1, \ldots, x_{l-1}])
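The dense connection rule x_l = H_l([x_0, ..., x_{l-1}]) can be illustrated with a toy forward pass. Here each H_l is a hypothetical stand-in callable, not the BN/ReLU/convolution composite of the actual network.

```python
import numpy as np

def dense_forward(x0: np.ndarray, layers) -> np.ndarray:
    """Toy forward pass with dense connections: layer l receives the
    concatenation of ALL earlier outputs, x_l = H_l([x_0, ..., x_{l-1}])."""
    outputs = [x0]
    for H in layers:
        x_l = H(np.concatenate(outputs))  # input includes every previous output
        outputs.append(x_l)
    return outputs[-1]

# Example stand-in "layer": just sums its (ever-growing) input vector.
summing_layer = lambda v: np.array([v.sum()])
```

With x0 = [1, 2] and three summing layers, the successive layer outputs are [3], [6] and [12]: each layer's input keeps growing because it sees all earlier outputs, which is the feature-reuse behavior described above.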
The outputs of the corresponding layers of the total difference network form the target output vector, which is subsequently used to train the pyramid feature network.
S23: and training the pyramid feature network by adopting the target output vector to obtain a total difference-pyramid feature network identification model.
In the pyramid feature network, each layer's output in the target output vector of the total difference network is connected to an RCU unit of the pyramid feature network; that is, the pyramid feature network contains as many RCU units as there are layers in the target output vector of the total difference network.
The RCU unit refers to a unit structure extracted from a full difference network, and specifically comprises a ReLU, a convolution and a summation part and the like. And respectively carrying out ReLU, convolution and summation operations on each layer of target output vectors acquired in the full-difference network. Each layer output of the RCU unit is processed by Multi-resolution fusion, so that different output feature maps are obtained: the output characteristic diagram of each layer of the RCU unit is subjected to self-adaptive processing by using a convolution layer, and then the maximum resolution of the layer is up-sampled. The chain residual pooling upsamples the input output feature maps of different resolutions to the same size as the maximum output feature map and then superimposes them. And finally, convolving the superposed output characteristic diagram by an RCU to obtain a fine characteristic diagram.
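The multi-resolution fusion step can be illustrated with a minimal sketch: feature maps of different resolutions are upsampled to the largest resolution and then summed. One-dimensional lists stand in for 2-D feature maps, nearest-neighbour repetition stands in for the upsampling, and the learned adaptive convolutions are omitted; all names here are illustrative, not from the patent.

```python
def upsample_nearest(feature_map, target_len):
    """Nearest-neighbour upsampling of a 1-D feature map to `target_len` entries."""
    n = len(feature_map)
    return [feature_map[i * n // target_len] for i in range(target_len)]

def fuse(feature_maps):
    """Upsample every map to the largest resolution present, then sum elementwise."""
    target = max(len(m) for m in feature_maps)
    upsampled = [upsample_nearest(m, target) for m in feature_maps]
    return [sum(vals) for vals in zip(*upsampled)]

coarse = [1.0, 2.0]          # low-resolution map from a deep layer
fine = [0.5, 0.5, 0.5, 0.5]  # high-resolution map from a shallow layer
fused = fuse([coarse, fine])
print(fused)  # [1.5, 1.5, 2.5, 2.5]
```

The superposition keeps the coarse map's semantics while the fine map preserves spatial detail, which is what the fusion-then-refine structure is for.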
The function of the pyramid feature network is to fuse feature maps of different resolutions. The pre-trained full-difference network is divided into several full-difference blocks according to the resolution of the feature maps; the blocks are then taken as separate paths and fused through the pyramid feature network, finally yielding a fine feature map (which is subsequently connected to a softmax layer and output through bilinear interpolation).
In the pyramid feature network, training with the target output vector of the full-difference network forms a preliminarily trained network; the pyramid feature network is then verified and adjusted with verification samples until a preset classification accuracy is reached, at which point training is finished. The preset classification accuracy can be set according to the actual requirements of the recognition model.
In this embodiment, the full-difference-pyramid feature network recognition model is obtained by training with the classified and labeled training sample set, which ensures the recognition accuracy and speed of the model.
In an embodiment, training the full-difference network specifically includes:
and setting an initial convolution layer of the total difference network, and performing down-sampling by adopting a maximum pooling layer in the total difference network.
The convolutional layer is used for feature extraction from the input image, and the initial convolutional layer extracts the features of the training image; optionally, the initial convolutional layer uses a 7×7 convolution kernel. A max-pooling layer in the full-difference network then performs down-sampling; in the sampling process, if the new sampling rate is lower than the original sampling rate, the operation is down-sampling. Max-pooling means that the sampling function takes the maximum of all neurons within a region. The input image, after passing through the initial convolutional layer, undergoes max-pooling, which compresses the features, extracts the main features, and reduces the computational complexity of the network.
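The max-pooling down-sampling described above can be sketched as follows, assuming non-overlapping 2×2 windows (a common choice; the patent does not fix the window size). Each output value is the maximum of its window, halving each spatial dimension while keeping the strongest responses.

```python
def max_pool_2x2(image):
    """Downsample a 2-D grid (list of equal-length rows) by 2x2 max pooling."""
    h, w = len(image), len(image[0])
    return [
        [
            max(image[r][c], image[r][c + 1],
                image[r + 1][c], image[r + 1][c + 1])
            for c in range(0, w - 1, 2)
        ]
        for r in range(0, h - 1, 2)
    ]

img = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 0, 5, 6],
    [1, 2, 7, 8],
]
print(max_pool_2x2(img))  # [[4, 2], [2, 8]]
```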
Three layers of full-difference network modules are arranged, wherein each full-difference network module comprises a full-difference convolutional layer and a full-difference activation layer, and the activation function in the full-difference activation layer is a linear activation function.
In the three layers of all-differential network modules, the output of each all-differential network module is the combination of the outputs of all the previous modules, namely:
x_l = H_l([x_0, x_1, ..., x_{l-1}]), l = 1, 2, 3;
wherein each H_l is a combination of two layer operations, a convolutional layer and an activation layer: Conv -> ReLU. Optionally, the convolution kernel size in the full-difference convolutional layer is 3×3. The number of features output by each H_l is the feature growth rate; optionally, the feature growth rate is set to 16, so the three full-difference network modules output 48 features in total. The linear activation function (ReLU) is formulated as:
f(x) = x, if x > 0; f(x) = 0, otherwise.
The conversion by the linear activation function allows the training process to converge quickly.
And arranging transmission layers among the full-difference network modules, wherein each transmission layer comprises a normalization layer, a transmission activation layer and an average pooling layer.
In the full-difference network modules, the number of output features of each module increases; with the above settings, a feature growth rate of 16 gives 48 output features after the three modules. The amount of computation therefore increases step by step, so a transmission layer is introduced, with a transmission parameter indicating by how much the input of the transmission layer is reduced. Illustratively, if the transmission parameter is 0.6, the input of the transmission layer is reduced to 0.6 of its original size.
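The effect of the transmission parameter can be shown with a one-line computation. `transition_channels` is an illustrative name, and rounding down is an assumption (the patent does not specify the rounding rule).

```python
import math

def transition_channels(in_channels, transmission=0.6):
    """Feature maps kept by a transmission layer with the given compression factor."""
    return math.floor(in_channels * transmission)

# The three-layer full-difference module outputs 48 feature maps;
# a transmission layer with parameter 0.6 forwards 28 of them.
print(transition_channels(48))  # 28
```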
In the embodiment, the training speed and the training precision of the total difference network are ensured by setting the network structure and each parameter in the total difference network.
In one embodiment, in the process of training the full-difference-pyramid feature network recognition model, the loss function is implemented as a Focal Loss function:
FL(p_t) = -(1 - p_t)^γ log(p_t);
wherein
p_t = p, if y = 1; p_t = 1 - p, otherwise;
p_t is the predicted value of the total difference-pyramid feature network recognition model for the training image, p is the model's estimated probability that the training image has y = 1 and lies in [0, 1], y is the labeled value of the training image, and γ is the adjustment parameter.
A loss function maps an event (an element in a sample space) to a real number expressing the economic or opportunity cost associated with that event. In this embodiment, when training the full-difference-pyramid feature network recognition model, a loss function is used to measure the prediction performance of the model; the smaller the loss function, the better the prediction performance of the recognition model. In the embodiment of the invention, the number of sample images of each class in the training sample set may be unbalanced (in particular, training images containing airport foreign objects may be scarce), and this loss function is selected to better improve the prediction capability of the full-difference-pyramid feature network recognition model.
Therefore, the loss function is implemented as a Focal Loss, which adds a modulating factor (1 - p_t)^γ, where the adjustment parameter γ takes a value in [0, 5]. y is the label value of the training image; for example, for the foreign-object label, y = 1 if a foreign object is present and y = -1 if not. When a training image is misclassified, p_t is very small and the modulating factor (1 - p_t)^γ is close to 1, so the loss is barely affected; when p_t approaches 1, the modulating factor approaches 0, and the loss contributed by correctly classified samples is reduced.
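The Focal Loss above can be sketched for a single binary prediction. `focal_loss` is an illustrative helper, with p_t = p for y = 1 and p_t = 1 - p otherwise, as defined above.

```python
import math

def focal_loss(p, y, gamma=2.0):
    """Focal loss for one sample; p is the model's probability of y = 1."""
    p_t = p if y == 1 else 1.0 - p
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

# A well-classified positive (p = 0.9) is heavily down-weighted ...
easy = focal_loss(0.9, 1)
# ... while a badly misclassified positive (p = 0.1) keeps most of its loss.
hard = focal_loss(0.1, 1)
print(easy < hard)  # True
```

With gamma = 0 the modulating factor vanishes and the expression reduces to the ordinary cross-entropy, which is why gamma controls how strongly easy samples are suppressed.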
In this embodiment, the Focal Loss function is adopted in the process of training the full-difference-pyramid feature network recognition model, which reduces the influence of unbalanced class samples on training and improves the subsequent detection precision.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by functions and internal logic of the process, and should not limit the implementation process of the embodiments of the present invention in any way.
In one embodiment, an airport foreign matter detection apparatus is provided, which corresponds one to one to the airport foreign matter detection method in the above-described embodiment. As shown in fig. 5, the airport foreign matter detection apparatus includes an image-to-be-recognized acquisition module 10, a classification confidence map acquisition module 20, and a detection result acquisition module 30. The functional modules are explained in detail as follows:
and the image to be recognized acquiring module 10 is used for acquiring an original image and preprocessing the original image to obtain the image to be recognized.
And the classification confidence map acquisition module 20 is configured to input the image to be recognized into a total difference-pyramid feature network recognition model for recognition, and acquire a classification confidence map, where the total difference-pyramid feature network recognition model is obtained by training a total difference network and a pyramid feature network by using a training sample set.
And the detection result acquisition module 30 is used for acquiring a foreign matter detection result according to the classification confidence map, wherein the foreign matter detection result comprises the existence of airport foreign matters and the nonexistence of airport foreign matters.
Preferably, the image-to-be-recognized acquisition module 10 includes a global enhancement processing unit and a sharpening processing unit.
The global enhancement processing unit is used for carrying out global enhancement processing on the original image by adopting a multi-scale Retinex algorithm;
and the sharpening processing unit is used for sharpening the original image after the global enhancement processing by adopting a Laplacian operator to obtain an image to be identified.
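The Laplacian sharpening step can be sketched as follows. The 4-neighbour Laplacian kernel and the unpadded borders are simplifying assumptions, not requirements of the patent: the response of the kernel is subtracted from the original image, which boosts edges and leaves flat regions untouched.

```python
LAPLACIAN = [[0, 1, 0],
             [1, -4, 1],
             [0, 1, 0]]

def sharpen(image):
    """Return image - Laplacian(image) for a 2-D grid of intensities."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # borders kept unchanged for simplicity
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            lap = sum(LAPLACIAN[i][j] * image[r - 1 + i][c - 1 + j]
                      for i in range(3) for j in range(3))
            out[r][c] = image[r][c] - lap
    return out

flat = [[10] * 4 for _ in range(4)]
print(sharpen(flat) == flat)  # True: a flat region has zero Laplacian response
```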
Preferably, the airport foreign matter detection device further comprises a training sample set acquisition module, a target output vector acquisition module and a recognition model acquisition module.
The training sample set acquisition module is used for acquiring a training sample set and carrying out classification and labeling on training images in the training sample set;
the target output vector acquisition module is used for training a total difference network by adopting training images which are classified and labeled in a training sample set to obtain a target output vector;
and the identification model acquisition module is used for training the pyramid feature network by adopting the target output vector to obtain a total difference-pyramid feature network identification model.
For specific limitations of the airport foreign object detection device, reference may be made to the above limitations of the airport foreign object detection method, which are not described herein again. All or part of the modules in the airport foreign matter detection device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing original images and identification model data of the full difference-pyramid characteristic network. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an airport foreign object detection method.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
and acquiring an original image, and preprocessing the original image to obtain an image to be identified.
And inputting the image to be recognized into a total difference-pyramid feature network recognition model for recognition, and obtaining a classification confidence map, wherein the total difference-pyramid feature network recognition model is obtained by training a total difference network and a pyramid feature network by adopting a training sample set.
And acquiring a foreign matter detection result according to the classification confidence map, wherein the foreign matter detection result comprises the existence of airport foreign matters and the nonexistence of airport foreign matters.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an original image, and preprocessing the original image to obtain an image to be identified;
inputting an image to be recognized into a total difference-pyramid feature network recognition model for recognition, and obtaining a classification confidence map, wherein the total difference-pyramid feature network recognition model is obtained by training a total difference network and a pyramid feature network by adopting a training sample set;
and acquiring a foreign matter detection result according to the classification confidence map, wherein the foreign matter detection result comprises the existence of airport foreign matters and the nonexistence of airport foreign matters.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein.

Claims (7)

1. An airport foreign matter detection method, comprising:
acquiring an original image, and preprocessing the original image to obtain an image to be identified;
inputting the image to be recognized into a total difference-pyramid feature network recognition model for recognition, and obtaining a classification confidence map, wherein the total difference-pyramid feature network recognition model is obtained by training a total difference network and a pyramid feature network by adopting a training sample set, the total difference network is adopted as a coding network, and the pyramid feature network is adopted as a decoding network;
acquiring a foreign matter detection result according to the classification confidence map, wherein the foreign matter detection result comprises the existence of airport foreign matters and the absence of airport foreign matters;
the training of the full-difference network and the pyramid feature network by adopting the training sample set specifically comprises the following steps:
acquiring a training sample set, and carrying out classification and labeling on training images in the training sample set;
training a total difference network by adopting the training images labeled in the training sample set in a classifying way to obtain a target output vector;
training a pyramid feature network by using the target output vector to obtain the total difference-pyramid feature network identification model;
the training of the total difference network specifically comprises the following steps:
setting an initial convolution layer of the full-difference network, and adopting a maximum pooling layer in the full-difference network to perform down-sampling;
setting three layers of all-difference network modules, wherein each all-difference network module comprises an all-difference convolution layer and an all-difference active layer, and the active function in the all-difference active layer adopts a linear active function;
and arranging transmission layers among the full-difference network modules, wherein each transmission layer comprises a normalization layer, a transmission activation layer and an average pooling layer.
2. The method for detecting airport foreign objects according to claim 1, wherein said preprocessing the original image to obtain the image to be recognized specifically comprises:
performing global enhancement processing on the original image by adopting a multi-scale Retinex algorithm;
and sharpening the original image subjected to the global enhancement processing by adopting a Laplace operator to obtain an image to be identified.
3. The method for detecting the airport foreign objects according to claim 1, wherein in the process of training the all-difference-pyramid feature network recognition model, the loss function is implemented as a Focal Loss function:
FL(p_t) = -(1 - p_t)^γ log(p_t);
wherein
p_t = p, if y = 1; p_t = 1 - p, otherwise;
p_t is the predicted value of the total difference-pyramid feature network recognition model for the training image, p is the model's estimated probability that the training image has y = 1 and lies in [0, 1], y is the labeled value of the training image, and γ is the adjustment parameter.
4. An airport foreign matter detection device, comprising:
the image to be recognized acquisition module is used for acquiring an original image and preprocessing the original image to obtain an image to be recognized;
the classification confidence map acquisition module is used for inputting the image to be recognized into a total difference-pyramid feature network recognition model for recognition to acquire a classification confidence map, wherein the total difference-pyramid feature network recognition model is obtained by training a total difference network and a pyramid feature network by adopting a training sample set;
the detection result acquisition module is used for acquiring a foreign matter detection result according to the classification confidence map, wherein the foreign matter detection result comprises the existence of airport foreign matters and the absence of airport foreign matters;
the airport foreign matter detection device further comprises:
the training sample set acquisition module is used for acquiring a training sample set and carrying out classification and labeling on training images in the training sample set;
a target output vector obtaining module, configured to train a total difference network using the training images labeled in the training sample set to obtain a target output vector;
the identification model acquisition module is used for training the pyramid feature network by adopting the target output vector to obtain a total difference-pyramid feature network identification model;
the airport foreign matter detection device is also used for:
setting an initial convolution layer of the full-difference network, and adopting a maximum pooling layer in the full-difference network to perform down-sampling;
setting three layers of all-difference network modules, wherein each all-difference network module comprises an all-difference convolution layer and an all-difference active layer, and the active function in the all-difference active layer adopts a linear active function;
and arranging transmission layers among the full-difference network modules, wherein each transmission layer comprises a normalization layer, a transmission activation layer and an average pooling layer.
5. The airport foreign object detection apparatus of claim 4, wherein said image acquisition module to be identified comprises:
the global enhancement processing unit is used for carrying out global enhancement processing on the original image by adopting a multi-scale Retinex algorithm;
and the sharpening processing unit is used for sharpening the original image subjected to the global enhancement processing by adopting a Laplace operator to obtain an image to be identified.
6. A computer arrangement comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said processor, when executing said computer program, carries out the steps of the airport foreign object detection method according to any one of claims 1 to 3.
7. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the airport foreign object detection method according to any one of claims 1 to 3.
CN201810573790.0A 2018-06-06 2018-06-06 Airport foreign matter detection method, device, computer equipment and storage medium Active CN109086656B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810573790.0A CN109086656B (en) 2018-06-06 2018-06-06 Airport foreign matter detection method, device, computer equipment and storage medium
PCT/CN2018/092611 WO2019232830A1 (en) 2018-06-06 2018-06-25 Method and device for detecting foreign object debris at airport, computer apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810573790.0A CN109086656B (en) 2018-06-06 2018-06-06 Airport foreign matter detection method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109086656A CN109086656A (en) 2018-12-25
CN109086656B true CN109086656B (en) 2023-04-18

Family

ID=64839386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810573790.0A Active CN109086656B (en) 2018-06-06 2018-06-06 Airport foreign matter detection method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN109086656B (en)
WO (1) WO2019232830A1 (en)

Families Citing this family (66)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109815332B (en) * 2019-01-07 2023-06-20 平安科技(深圳)有限公司 Loss function optimization method, loss function optimization device, computer equipment and storage medium
CN109886935A (en) * 2019-01-28 2019-06-14 南京威翔科技有限公司 A kind of road face foreign matter detecting method based on deep learning
CN111523533B (en) * 2019-02-01 2023-07-07 阿里巴巴集团控股有限公司 Method and device for determining area of object from image
CN109919869B (en) * 2019-02-28 2021-06-04 腾讯科技(深圳)有限公司 Image enhancement method and device and storage medium
CN111862105A (en) * 2019-04-29 2020-10-30 北京字节跳动网络技术有限公司 Image area processing method and device and electronic equipment
CN110751958A (en) * 2019-09-25 2020-02-04 电子科技大学 Noise reduction method based on RCED network
CN111062345B (en) * 2019-12-20 2024-03-29 上海欧计斯软件有限公司 Training method and device for vein recognition model and vein image recognition device
CN111259763B (en) * 2020-01-13 2024-02-02 华雁智能科技(集团)股份有限公司 Target detection method, target detection device, electronic equipment and readable storage medium
CN111274936B (en) * 2020-01-19 2023-04-18 中国科学院上海高等研究院 Multispectral image ground object classification method, system, medium and terminal
CN111539443B (en) * 2020-01-22 2024-02-09 北京小米松果电子有限公司 Image recognition model training method and device and storage medium
CN111310645B (en) * 2020-02-12 2023-06-13 上海东普信息科技有限公司 Method, device, equipment and storage medium for warning overflow bin of goods accumulation
CN111325735A (en) * 2020-02-25 2020-06-23 杭州测质成科技有限公司 Aero-engine insurance state detection method based on deep learning
CN111353442A (en) * 2020-03-03 2020-06-30 Oppo广东移动通信有限公司 Image processing method, device, equipment and storage medium
CN111460894B (en) * 2020-03-03 2021-09-03 温州大学 Intelligent car logo detection method based on convolutional neural network
CN113469931B (en) * 2020-03-11 2024-06-21 北京沃东天骏信息技术有限公司 Image detection model training and modification detection method, device and storage medium
CN111539458B (en) * 2020-04-02 2024-02-27 咪咕文化科技有限公司 Feature map processing method and device, electronic equipment and storage medium
CN111553406B (en) * 2020-04-24 2023-04-28 上海锘科智能科技有限公司 Target detection system, method and terminal based on improved YOLO-V3
CN111539947B (en) * 2020-04-30 2024-03-29 上海商汤智能科技有限公司 Image detection method, related model training method, related device and equipment
CN111753643B (en) * 2020-05-09 2024-05-14 北京迈格威科技有限公司 Character gesture recognition method, character gesture recognition device, computer device and storage medium
CN111709991B (en) * 2020-05-28 2023-11-07 武汉工程大学 Railway tool detection method, system, device and storage medium
CN111768372B (en) * 2020-06-12 2024-03-12 国网智能科技股份有限公司 Method and system for detecting foreign matters in cavity of GIS (gas insulated switchgear)
CN111598103A (en) * 2020-06-18 2020-08-28 上海眼控科技股份有限公司 Frame number identification method and device, computer equipment and storage medium
CN111753915B (en) * 2020-06-29 2023-11-07 广东浪潮大数据研究有限公司 Image processing device, method, equipment and medium
CN111862013B (en) * 2020-07-08 2024-02-02 湘潭大学 Insulator detection method, device and equipment based on deep convolutional neural network
CN113920208A (en) * 2020-07-10 2022-01-11 同方威视科技江苏有限公司 Image processing method and device, computer readable storage medium and electronic device
CN112183183A (en) * 2020-08-13 2021-01-05 南京众智未来人工智能研究院有限公司 Target detection method and device and readable storage medium
CN112036494A (en) * 2020-09-02 2020-12-04 公安部物证鉴定中心 Gun image identification method and system based on deep learning network
CN112016502B (en) * 2020-09-04 2023-12-26 平安国际智慧城市科技股份有限公司 Safety belt detection method, safety belt detection device, computer equipment and storage medium
CN112308000B (en) * 2020-11-06 2023-03-07 安徽清新互联信息科技有限公司 High-altitude parabolic detection method based on space-time information
CN112285706A (en) * 2020-11-18 2021-01-29 北京望远四象科技有限公司 FOD detection method, device and system
CN113744141B (en) * 2020-11-19 2024-04-16 北京京东乾石科技有限公司 Image enhancement method and device and automatic driving control method and device
CN112560586B (en) * 2020-11-27 2024-05-10 国家电网有限公司大数据中心 Method and device for obtaining structural data of pole and tower signboard and electronic equipment
CN112613375B (en) * 2020-12-16 2024-05-14 中国人寿财产保险股份有限公司 Tire damage detection and identification method and equipment
CN112560967B (en) * 2020-12-18 2023-09-15 西安电子科技大学 Multi-source remote sensing image classification method, storage medium and computing device
CN112633375A (en) * 2020-12-23 2021-04-09 深圳市赛为智能股份有限公司 Bird detection method and device, computer equipment and storage medium
CN112651337A (en) * 2020-12-25 2021-04-13 国网黑龙江省电力有限公司电力科学研究院 Sample set construction method applied to training line foreign object target detection model
CN112686172B (en) * 2020-12-31 2023-06-13 上海微波技术研究所(中国电子科技集团公司第五十研究所) Airport runway foreign matter detection method, device and storage medium
CN112686823A (en) * 2020-12-31 2021-04-20 广西慧云信息技术有限公司 Automatic image enhancement method based on illumination transformation network
CN112836705B (en) * 2021-02-05 2023-05-26 成都益英光电科技有限公司 Feature organization labeling method and device, computer equipment and storage medium
CN113111703B (en) * 2021-03-02 2023-07-28 郑州大学 Airport pavement disease foreign matter detection method based on fusion of multiple convolutional neural networks
CN113012168B (en) * 2021-03-24 2022-11-04 哈尔滨理工大学 Brain glioma MRI image segmentation method based on convolutional neural network
CN112966788A (en) * 2021-04-19 2021-06-15 扬州大学 Power transmission line spacer fault detection method based on deep learning
CN113177497B (en) * 2021-05-10 2024-04-12 百度在线网络技术(北京)有限公司 Training method of visual model, vehicle identification method and device
CN113532513B (en) * 2021-06-21 2024-03-29 沈阳达能电安全高新产业技术研究院有限公司 Intrusion target real-time detection system and method based on power transmission system
CN113283395B (en) * 2021-06-28 2024-03-29 西安科技大学 Video detection method for blocking foreign matters at transfer position of coal conveying belt
CN113449743B (en) * 2021-07-12 2022-12-09 西安科技大学 Coal dust particle feature extraction method
CN113792578A (en) * 2021-07-30 2021-12-14 北京智芯微电子科技有限公司 Method, device and system for detecting abnormity of transformer substation
CN113642500B (en) * 2021-08-23 2024-03-19 桂林电子科技大学 Low-illumination target detection method based on multi-stage domain self-adaption
CN113781409B (en) * 2021-08-25 2023-10-20 五邑大学 Bolt loosening detection method, device and storage medium
CN113781416A (en) * 2021-08-30 2021-12-10 武汉理工大学 Conveyer belt tearing detection method and device and electronic equipment
CN113762159B (en) * 2021-09-08 2023-08-08 山东大学 Target grabbing detection method and system based on directional arrow model
CN113888477A (en) * 2021-09-13 2022-01-04 浙江大学 Network model training method, metal surface defect detection method and electronic equipment
CN113807367B (en) * 2021-09-17 2023-06-16 平安科技(深圳)有限公司 Image feature extraction method, device, equipment and storage medium
CN113945569B (en) * 2021-09-30 2023-12-26 河北建投新能源有限公司 Fault detection method and device for ion membrane
CN114022734A (en) * 2021-11-09 2022-02-08 重庆商勤科技有限公司 Liquid level height identification method based on image identification
CN114140427A (en) * 2021-11-30 2022-03-04 深圳集智数字科技有限公司 Object detection method and device
CN114332634B (en) * 2022-03-04 2022-06-10 浙江国遥地理信息技术有限公司 Method and device for determining position of risk electric tower, electronic equipment and storage medium
CN114595759A (en) * 2022-03-07 2022-06-07 卡奥斯工业智能研究院(青岛)有限公司 Protective tool identification method and device, electronic equipment and storage medium
CN114580564A (en) * 2022-03-21 2022-06-03 滁州学院 Dominant tree species remote sensing classification method and classification system based on unmanned aerial vehicle image
CN114821194B (en) * 2022-05-30 2023-07-25 深圳市科荣软件股份有限公司 Equipment running state identification method and device
CN115797914B (en) * 2023-02-02 2023-05-02 武汉科技大学 Metallurgical crane trolley track surface defect detection system
CN116416504B (en) * 2023-03-16 2024-02-06 北京瑞拓电子技术发展有限公司 Expressway foreign matter detection system and method based on vehicle cooperation
CN115965627B (en) * 2023-03-16 2023-06-09 中铁电气化局集团有限公司 Micro component detection system and method applied to railway operation
CN116168352B (en) * 2023-04-26 2023-06-27 成都睿瞳科技有限责任公司 Power grid obstacle recognition processing method and system based on image processing
CN116996675B (en) * 2023-09-27 2023-12-19 河北天英软件科技有限公司 Instant messaging system and information processing method
CN117593516B (en) * 2024-01-18 2024-03-22 苏州元脑智能科技有限公司 Target detection method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017107188A1 (en) * 2015-12-25 2017-06-29 中国科学院深圳先进技术研究院 Method and apparatus for rapidly recognizing video classification
CN107423690A (en) * 2017-06-26 2017-12-01 广东工业大学 Face recognition method and device
CN107545263A (en) * 2017-08-02 2018-01-05 清华大学 Object detection method and device
CN107562900A (en) * 2017-09-07 2018-01-09 广州辰创科技发展有限公司 Method and system for analyzing airfield runway foreign matter based on big data mode

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202649484U (en) * 2011-09-30 2013-01-02 长春奥普光电技术股份有限公司 Airfield pavement foreign matter monitoring system
CN105389921B (en) * 2015-12-09 2019-03-26 中国民用航空总局第二研究所 Monitoring system and method for airfield runway foreign matter
CN105931217A (en) * 2016-04-05 2016-09-07 李红伟 Image processing technology-based airport pavement FOD (foreign object debris) detection method
EP3151164A3 (en) * 2016-12-26 2017-04-12 Argosai Teknoloji Anonim Sirketi A method for foreign object debris detection
CN107608003A (en) * 2017-09-06 2018-01-19 广州辰创科技发展有限公司 FOD foreign object detection device and method based on virtual reality technology
CN207159915U (en) * 2017-09-06 2018-03-30 重庆交通大学 Runway barrier automatic cleaning apparatus
CN107728136A (en) * 2017-11-29 2018-02-23 航科院(北京)科技发展有限公司 Airfield runway foreign object monitoring and removal guidance system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ivan Kreso et al. Ladder-style DenseNets for Semantic Segmentation of Large Natural Images. IEEE, 2018, full text. *

Also Published As

Publication number Publication date
CN109086656A (en) 2018-12-25
WO2019232830A1 (en) 2019-12-12

Similar Documents

Publication Publication Date Title
CN109086656B (en) Airport foreign matter detection method, device, computer equipment and storage medium
CN108764202B (en) Airport foreign matter identification method and device, computer equipment and storage medium
US11847775B2 (en) Automated machine vision-based defect detection
US20210081698A1 (en) Systems and methods for physical object analysis
Yi et al. An end‐to‐end steel strip surface defects recognition system based on convolutional neural networks
CN111080628A (en) Image tampering detection method and device, computer equipment and storage medium
WO2019089578A1 (en) Font identification from imagery
Neto et al. Brazilian vehicle identification using a new embedded plate recognition system
WO2019154383A1 (en) Tool detection method and device
Do et al. Automatic license plate recognition using mobile device
CN112686248B (en) Certificate increase and decrease type detection method and device, readable storage medium and terminal
CN115797314B (en) Method, system, equipment and storage medium for detecting surface defects of parts
CN110472632B (en) Character segmentation method and device based on character features and computer storage medium
US11908178B2 (en) Verification of computer vision models
Xing et al. Traffic sign recognition from digital images by using deep learning
WO2022219402A1 (en) Semantically accurate super-resolution generative adversarial networks
CN114842478A (en) Text area identification method, device, equipment and storage medium
CN116934762B (en) System and method for detecting surface defects of lithium battery pole piece
Qu et al. Double domain guided real-time low-light image enhancement for ultra-high-definition transportation surveillance
Bolten et al. Evaluation of Deep Learning based 3D-Point-Cloud Processing Techniques for Semantic Segmentation of Neuromorphic Vision Sensor Event-streams.
CN114821484A (en) Airport runway FOD image detection method, system and storage medium
Lyu Research on subway pedestrian detection algorithm based on big data cleaning technology
CN116542884B (en) Training method, device, equipment and medium for blurred image definition model
Li et al. Research on Airport Target Recognition under Low‐Visibility Condition Based on Transfer Learning
CN112818840B (en) Unmanned aerial vehicle online detection system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant