CN115829839A - Image stitching method, server and storage medium for airport foreign object detection


Info

Publication number
CN115829839A
Authority
CN
China
Prior art keywords
image
airport
key feature
feature points
images
Prior art date
Legal status
Pending
Application number
CN202211581366.3A
Other languages
Chinese (zh)
Inventor
赵栓峰
姚健
赵彦
罗志健
侯学谦
Current Assignee
Xian University of Science and Technology
Original Assignee
Xian University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Science and Technology
Priority to CN202211581366.3A
Publication of CN115829839A
Status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The application discloses an image stitching method, a server and a storage medium for airport foreign object detection, belonging to the field of image information processing and comprising the following steps: step 1: extracting key feature points from the airport runway pavement images captured by the cameras; step 2: constructing descriptors to describe the key feature points; step 3: inputting the key feature points into an airport runway pavement image registration network for registration, obtaining two registered airport runway pavement images; step 4: solving the transformation matrix between the feature points of the two airport runway pavement images, so that one image is projected onto the other through the transformation matrix for stitching; step 5: fusing the stitched image with a fusion algorithm to eliminate shadows at the seam. The method solves the stitching-quality problems caused by the differing shooting angles that result from uneven pavement and by changes in illumination intensity, while improving stitching speed and the stitching result.

Description

Image stitching method, server and storage medium for airport foreign object detection
Technical Field
The application belongs to the field of image information processing, and particularly relates to an image stitching method, a server and a storage medium for airport foreign object detection.
Background
Foreign object debris (FOD) on airport runways refers to stones, leaves, newspaper, nuts, metal foil and other objects that can damage or endanger aircraft. FOD not only damages aircraft but also causes flight delays, aborted takeoffs, losses from runway closures, and the like.
With advances in image processing technology, FOD detection using vehicle-mounted cameras has developed considerably. Applying image stitching before the foreign object detection algorithm effectively removes repeated detections in overlapping regions and the influence of illumination and shooting angle, improving the efficiency of the detection algorithm; however, existing stitching methods perform poorly on airport runway pavement.
Existing image stitching techniques are mostly applied in conventional industrial settings such as object recognition; there is no stitching technique designed for airport runway pavement. Directly transplanting a conventional stitching technique into an airport foreign object detection system results in slow real-time stitching, while uneven pavement, differing shooting angles and illumination changes lead to poor stitching quality.
Therefore, an image stitching technique for airport foreign object detection that solves the above problems is needed.
Disclosure of Invention
To overcome the shortcomings of the prior art, a real-time image stitching method for airport foreign object detection is provided. The proposed circular-neighborhood construction method for keypoint feature descriptors solves the problem of slow stitching. The airport runway pavement image registration network addresses the stitching-quality problems caused by the differing shooting angles that result from uneven pavement and by changes in illumination intensity, while improving stitching speed. The proposed weighted fusion algorithm reduces stitching seams and improves the stitching result.
The technical effects of the present application are achieved by the following scheme:
According to a first aspect of the present invention, there is provided an image stitching method for airport foreign object detection, comprising the following steps:
step 1: extracting key feature points from the airport runway pavement images captured by the cameras;
step 2: constructing descriptors to describe the key feature points;
step 3: inputting the key feature points into an airport runway pavement image registration network for registration, obtaining two registered airport runway pavement images;
step 4: solving the transformation matrix between the feature points of the two airport runway pavement images, so that one image is projected onto the other through the transformation matrix for stitching;
step 5: fusing the stitched image with a fusion algorithm to eliminate shadows at the seam.
Preferably, in step 2, the descriptor is constructed by:
step 21: taking a circular neighborhood centered on the key feature point;
step 22: dividing the neighborhood into a plurality of concentric circles, computing the gradient accumulation values in 8 directions on each concentric circle, and generating a feature vector from the 8 directions of each concentric circle;
step 23: judging whether the first component of the innermost circle's feature vector is the maximum value; if not, cyclically shifting all concentric circles to the left synchronously until the maximum component comes first;
step 24: normalizing the descriptor.
Preferably, the descriptor is normalized using the following formula:
D' = D/||D||, that is, d'_ij = d_ij / sqrt( Σ_i Σ_j d_ij^2 )
wherein D is the set of feature vectors on the concentric circles, d_ij is the jth component of the feature vector of the ith concentric circle, D_i = (d_i1, d_i2, ..., d_i8), and D' is the feature point descriptor.
Preferably, in step 3, before inputting the key feature points into an airport runway pavement image registration network for registration, the airport runway pavement image registration network needs to be trained, wherein the airport runway pavement image registration network comprises 5 convolutional layers and two fully-connected layers.
Preferably, the specific method for training the airport runway pavement image registration network comprises the following steps:
step 31: performing Euclidean distance matching on the key feature points;
step 32: selecting m key points in image M and n key points in image N, assigning the m key points to a subset of the n key points so that the key points achieve an optimal matching, and setting this optimal matching as the training target of the airport runway pavement image registration network;
step 33: capturing a plurality of images and inputting them into the airport runway pavement image registration network for training.
Preferably, in step 31, the Euclidean distance matching of the key feature points is performed by:
denoting the key feature point sets in image M and image N as P and Q;
calculating the distances from P_i in P to all points in Q, denoting the smallest distance as d_min and the second smallest distance as d_min-1; if the ratio
d_min / d_min-1
is smaller than a preset threshold, P_i is considered to match the feature point at the smallest distance; otherwise it is not matched.
Preferably, the distance from P_i to each point Q_i in Q is calculated according to the following formula:
d(P_i, Q_i) = sqrt( Σ_j (P_ij - Q_ij)^2 )
wherein P_ij is the jth coordinate of point P_i and Q_ij is the jth coordinate of point Q_i.
Preferably, in step 5, the stitched image is fused as follows:
dividing the overlapping region of the images to be stitched into two parts, denoted a_1 and a_2 respectively, and calculating by the following formula:
Gray = w_1(x,y)f_1(x,y) + w_2(x,y)f_2(x,y);
wherein Gray denotes the weighted value of the overlapping region, f_1(x,y) and f_2(x,y) respectively denote the images to be stitched M and N, f(x,y) denotes the stitched image, and w_1, w_2 respectively denote the pixel weights of the overlapping regions in M and N;
in a_1, when |f_1 - Gray| is smaller than the threshold, f(x,y) = Gray, otherwise f(x,y) = f_1(x,y);
in a_2, when |f_2 - Gray| is smaller than the threshold, f(x,y) = Gray, otherwise f(x,y) = f_2(x,y).
According to a second aspect of the present invention, there is provided a server comprising: a memory and at least one processor;
the memory stores a computer program, and the at least one processor executes the computer program stored in the memory to implement any one of the above image stitching methods for airport foreign object detection.
According to a third aspect of the present invention, there is provided a computer-readable storage medium in which a computer program is stored, the computer program, when executed, implementing the image stitching method for airport foreign object detection.
According to one embodiment of the invention, the method has the following advantages:
the proposed circular-neighborhood construction method for keypoint feature descriptors solves the problem of slow stitching; the airport runway pavement image registration network addresses the stitching-quality problems caused by the differing shooting angles that result from uneven pavement and by changes in illumination intensity, while improving stitching speed; and the proposed weighted fusion algorithm reduces stitching seams and improves the stitching result.
Drawings
In order to more clearly illustrate the embodiments of the present application or the solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a flowchart of an image stitching method for detecting airport foreign objects according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a circular neighborhood in an embodiment of the present application;
FIG. 3 is a schematic diagram of an overlapping area of cameras according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following embodiments and accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
In an embodiment of the present application, as shown in fig. 1, an image stitching method for airport foreign object detection is provided, which includes the following steps:
s110: extracting key feature points of the airport runway pavement images shot by the camera;
In this step, a plurality of cameras mounted on an inspection vehicle form a camera array. Different cameras in the array capture different areas of the airport runway pavement, and coverage of the entire pavement is completed as the inspection vehicle moves.
The key feature points are extracted using the SIFT algorithm. The feature points include points with distinctive properties such as object edge points, corner points and line intersections, and the images are stitched through these key feature points.
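As an illustrative, non-limiting sketch of this step, the following Python snippet detects SIFT key feature points with OpenCV; the image file names and the cv2.SIFT_create call are assumptions of this example, since the application does not prescribe a particular implementation.

```python
# Illustrative only: detect SIFT key feature points with OpenCV.
# File names and the cv2.SIFT_create call are assumptions of this example.
import cv2

img_m = cv2.imread("runway_m.jpg", cv2.IMREAD_GRAYSCALE)
img_n = cv2.imread("runway_n.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_m = sift.detect(img_m, None)   # edge points, corner points, line intersections, etc.
kp_n = sift.detect(img_n, None)
print(len(kp_m), len(kp_n))
```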
S120: constructing a descriptor to describe the key feature points;
The descriptors in this step describe the pixels around each key feature point; key feature points with similar appearance should have similar descriptors, so whether key feature points at two different positions are similar can be judged by computing the distance between their descriptors, i.e. the Euclidean distance computed later.
Computing the descriptor of the SIFT algorithm is time-consuming, so the following descriptor construction method based on a circular neighborhood is proposed.
In airport foreign object monitoring, many images must be stitched at a time, so the description of the key feature points must be efficient. The proposed efficient descriptor is constructed as follows:
s121: as shown in fig. 2, a key feature point a is set, and a circular area with a radius of 10 is taken as a neighborhood with a circle center a;
s122: dividing a circular neighborhood with the radius of 10 into 5 concentric circles, wherein the radius difference of two adjacent concentric circles is 2;
s123: respectively obtaining the gradient accumulated values (0 degrees, 45 degrees, 90 degrees, 135 degrees, 180 degrees, 225 degrees, 270 degrees and 315 degrees) of 5 concentric circles in 8 directions, wherein each direction on each concentric circle is a feature vector, and 40-dimensional feature vectors D are formed by the same 1 、D 2 、D 3 、D 4 、D 5 Wherein D is i =(d i1 ,d i2 ,...,d i8 );
S124: judging the feature vector d 11 Whether or not the maximum value is not d 11 D is 1 、D 2 、D 3 、D 4 、D 5 Simultaneously circularly moving left until d is found i1 Is measured. For example, the position where the innermost maximum occurs, i.e. d, is marked first 11 ,d 12 ,...,d 18 Of d, if 11 If the maximum value is not d, no processing is needed 11 Then, the following operations are performed: will D 1 、D 2 、D 3 、D 4 、D 5 Simultaneously circularly moving left until D 1 The maximum value in (1) is the feature vector D 1 First in (1), set d 14 Is a vector D 1 Maximum of (D), then the feature vector of five rings after the move is D i =(d i4 ,d i5 ,d i6 ,d i7 ,d i8 ,d i1 ,d i2 ,d i3 )。
S125: to reduce the influence of airport illumination changes and camera vibration, the descriptor is normalized as follows:
D' = D/||D||, that is, d'_ij = d_ij / sqrt( Σ_i Σ_j d_ij^2 )
wherein D is the set of feature vectors on the concentric circles, d_ij is the jth component of the feature vector of the ith concentric circle, D_i = (d_i1, d_i2, ..., d_i8), and D' is the resulting feature point descriptor.
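A minimal sketch of the circular-neighborhood descriptor of steps S121 to S125 is given below, assuming numpy and per-pixel gradients from np.gradient; the half-pixel ring sampling width and the binning of gradient magnitude by gradient orientation are assumptions where the description above leaves the details unspecified.

```python
# Sketch of the circular-neighborhood descriptor (S121 to S125).
# Assumptions: np.gradient for per-pixel gradients, rings sampled with a
# half-pixel tolerance, gradient magnitude accumulated into 8 orientation bins.
import numpy as np

def ring_descriptor(img, cx, cy, n_rings=5, ring_step=2, n_dirs=8):
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 360.0)

    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(xs - cx, ys - cy)

    D = np.zeros((n_rings, n_dirs))
    for i in range(n_rings):                         # rings of radius 2, 4, ..., 10
        on_ring = np.abs(r - (i + 1) * ring_step) < 0.5
        bins = (ang[on_ring] // (360.0 / n_dirs)).astype(int) % n_dirs
        for b, m in zip(bins, mag[on_ring]):
            D[i, b] += m                             # gradient accumulation per direction

    D = np.roll(D, -int(np.argmax(D[0])), axis=1)    # S124: innermost maximum comes first
    D = D.ravel()                                    # 5 x 8 = 40-dimensional descriptor
    norm = np.linalg.norm(D)
    return D / norm if norm > 0 else D               # S125: normalization
```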
s130: inputting the key feature points into an airport runway pavement image registration network for registration to obtain two airport runway pavement images;
image registration is to compare and extract the same object in different scenes and find the same place in two pictures. After registration, a change matrix is generated, i.e. two images need to be spliced through rotation scaling, and the change matrix is a mathematical expression of the rotation scaling.
In the step, the airport runway pavement image registration network is a neural network with 5 convolutional layers and two fully-connected layers, wherein the structure of the five-layer network is that each layer in the first 3 layers comprises convolution, maximum pooling and nonlinear transformation processes, and the rear two layers are fully-connected layers, so that the calculation speed of the network is improved, and the real-time requirement is met.
The registration network needs to be trained in advance, before real-time stitching of airport runway pavement images; the training mainly comprises the following steps:
According to the key feature points extracted in the preceding steps, Euclidean distance matching is performed on the key feature points. The specific process is as follows:
Assume that the feature point sets in images M and N are P and Q; whether two feature points match is judged by the distance between them. For a feature vector P_i in set P, compute the distances from P_i to all points in set Q, and denote the smallest distance as d_min and the second smallest as d_min-1. If the ratio
d_min / d_min-1
is smaller than a preset threshold, P_i is considered to match the feature point at the smallest distance; otherwise it is not matched. The distance is the Euclidean distance
d(P_i, Q_i) = sqrt( Σ_j (P_ij - Q_ij)^2 )
where P_ij and Q_ij denote the jth coordinates of P_i and Q_i.
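A sketch of this ratio-test matching is given below, assuming the 40-dimensional ring descriptors are stacked row-wise in numpy arrays; the ratio threshold of 0.8 is an assumed value, since only the use of a threshold is stated here.

```python
# Ratio-test matching of the key feature points between images M and N.
# Assumptions: descriptors stacked row-wise in numpy arrays, ratio threshold 0.8.
import numpy as np

def match_descriptors(desc_p, desc_q, ratio=0.8):
    matches = []
    for i, p in enumerate(desc_p):
        dists = np.linalg.norm(desc_q - p, axis=1)   # Euclidean distance to every point in Q
        order = np.argsort(dists)
        d_min, d_min1 = dists[order[0]], dists[order[1]]
        if d_min1 > 0 and d_min / d_min1 < ratio:    # keep only distinctive matches
            matches.append((i, int(order[0])))
    return matches
```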
Randomly select m keypoints in image M and n keypoints in image N; m and n are not necessarily equal. Assign the selected m keypoints to a subset of the n keypoints so that the optimal matching of the previous step is achieved, and set this optimal matching as the training target of the airport runway pavement image registration network.
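One possible way to realize this optimal assignment is sketched below: the m descriptors of image M are assigned to a subset of the n descriptors of image N by minimizing the total descriptor distance. The use of the Hungarian algorithm (scipy.optimize.linear_sum_assignment) is an assumption; the assignment method is not named here.

```python
# Optimal assignment of the m key points of M to a subset of the n key points
# of N by minimizing the total descriptor distance. The Hungarian algorithm is
# an assumed choice for this example.
import numpy as np
from scipy.optimize import linear_sum_assignment

def optimal_assignment(desc_m, desc_n):
    cost = np.linalg.norm(desc_m[:, None, :] - desc_n[None, :, :], axis=2)  # (m, n) distances
    rows, cols = linear_sum_assignment(cost)        # minimizes the summed distance
    return list(zip(rows.tolist(), cols.tolist()))  # (index in M, index in N) pairs
```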
One hundred pictures are taken and used to train the airport runway pavement image registration network, which improves accuracy and eliminates the influence of camera vibration.
S140: solving the transformation matrix between the feature points of the two airport runway pavement images, so that one image is projected onto the other through the transformation matrix for stitching;
In this step, image N from the preceding steps is projected onto image M through the transformation matrix to complete the stitching.
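As an illustration, the sketch below estimates the transformation matrix from the matched key feature points and projects image N onto image M; the RANSAC-based cv2.findHomography call, the canvas size and the 5.0 reprojection threshold are assumptions, since only the solving of a transformation matrix is stated here.

```python
# Estimate the transformation matrix from matched key feature points and
# project image N onto image M. RANSAC, canvas size and threshold are assumed.
import cv2
import numpy as np

def stitch_pair(img_m, img_n, pts_m, pts_n):
    # pts_m, pts_n: (k, 2) arrays of matched keypoint coordinates in M and N
    H, _ = cv2.findHomography(pts_n.astype(np.float32),
                              pts_m.astype(np.float32), cv2.RANSAC, 5.0)
    h, w = img_m.shape[:2]
    canvas = cv2.warpPerspective(img_n, H, (w * 2, h))   # project N into M's frame
    canvas[0:h, 0:w] = img_m                             # lay M onto the canvas
    return canvas, H
```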
S150: fusing the stitched image with a fusion algorithm to eliminate shadows at the seam.
In this step, as shown in fig. 3, because the colors of the whole airport runway pavement are similar, the stitched image may show ghosting and obvious stitching traces at the boundary; the stitched image is therefore fused with the proposed fusion algorithm. The specific fusion procedure is as follows:
the overlapping area of the image to be spliced is divided into two blocks, so that the fusion effect has no splicing gap and no obvious ghost image, and the calculation speed is high. The two overlapping areas are respectively marked as a 1 、a 2 The calculation is performed by the following formula:
Gray=w 1 (x,y)f 1 (x,y)+w 2 (x,y)f 2 (x,y);
wherein Gray represents a weighted value of the overlap region, f 1 (x,y)、f 2 (x, y) respectively represent the images to be stitched M, N, f (x, y) represents the stitched images, w 1 、w 2 Respectively representing pixel weights of overlapped areas in M and N;
at a 1 In when f 1 -f (x, y) = Gray when Gray is smaller than a threshold, otherwise f (x, y) = f 1 (x,y);
At a 2 When f is in 2 -f (x, y) = Gray when Gray is less than the threshold, otherwise f (x, y) = f 2 (x,y)。
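A sketch of this weighted fusion over the overlap region is given below, assuming grayscale overlap patches of equal size, a vertical split of the overlap into a_1 and a_2, linear ramp weights and a threshold of 10 gray levels; these concrete choices are assumptions and are not fixed by the description above.

```python
# Weighted fusion of the overlap region. Assumptions: grayscale overlap patches
# f1 (from M) and f2 (from N) of equal size, vertical split, linear ramp
# weights, threshold of 10 gray levels.
import numpy as np

def fuse_overlap(f1, f2, threshold=10.0):
    f1 = f1.astype(np.float64)
    f2 = f2.astype(np.float64)
    h, w = f1.shape
    w1 = np.tile(np.linspace(1.0, 0.0, w), (h, 1))   # weight of M falls across the overlap
    w2 = 1.0 - w1                                    # weight of N rises across the overlap
    gray = w1 * f1 + w2 * f2                         # weighted value of the overlap region

    out = np.empty_like(gray)
    half = w // 2
    a1 = (slice(None), slice(0, half))               # block nearer image M
    a2 = (slice(None), slice(half, w))               # block nearer image N
    out[a1] = np.where(np.abs(f1[a1] - gray[a1]) < threshold, gray[a1], f1[a1])
    out[a2] = np.where(np.abs(f2[a2] - gray[a2]) < threshold, gray[a2], f2[a2])
    return out
```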
Through the above steps, a complete real-time image stitching method for airport foreign object detection is provided;
the proposed circular-neighborhood construction method for keypoint feature descriptors improves stitching efficiency and solves the problem that stitching is too slow for real-time use;
the airport runway pavement image registration network addresses the stitching-quality problems caused by the differing shooting angles that result from uneven pavement and by changes in illumination intensity, while improving stitching speed;
the proposed weighted fusion algorithm reduces stitching seams and improves the stitching result.
According to a second aspect of the present invention, as shown in fig. 4, there is provided a server comprising: a memory 401 and at least one processor 402;
the memory 401 stores a computer program, and the at least one processor 402 executes the computer program stored in the memory 401 to implement any one of the above-mentioned image stitching methods for airport foreign object detection.
According to a third aspect of the present invention, there is provided a computer-readable storage medium having stored therein a computer program which, when executed, implements the image stitching method for airport foreign object detection as defined in any one of the above.
It should be noted that the above detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular is intended to include the plural unless the context clearly dictates otherwise. Furthermore, it will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application described herein are capable of operation in sequences other than those described or illustrated herein.
Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements explicitly listed, but may include other steps or elements not explicitly listed or inherent to such process, method, article, or apparatus.
For ease of description, spatially relative terms such as "over", "above", "on" and "upper" may be used herein to describe the spatial relationship of one device or feature to another device or feature as shown in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is turned over, devices described as "above" or "on" other devices or configurations would then be oriented "below" or "under" the other devices or configurations. Thus, the exemplary term "above" may include both an orientation of "above" and "below". The device may also be oriented in other ways, such as rotated 90 degrees or at other orientations, and the spatially relative descriptors used herein are interpreted accordingly.
In the foregoing detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, like numerals typically identify like components, unless context dictates otherwise. The illustrated embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An image stitching method for airport foreign object detection, characterized by comprising the following steps:
step 1: extracting key feature points from the airport runway pavement images captured by the cameras;
step 2: constructing descriptors to describe the key feature points;
step 3: inputting the key feature points into an airport runway pavement image registration network for registration, obtaining two registered airport runway pavement images;
step 4: solving the transformation matrix between the feature points of the two airport runway pavement images, so that one image is projected onto the other through the transformation matrix for stitching;
step 5: fusing the stitched image with a fusion algorithm to eliminate shadows at the seam.
2. The image stitching method for airport foreign object detection according to claim 1, wherein in step 2, the descriptor is constructed by:
step 21: taking a circular neighborhood centered on the key feature point;
step 22: dividing the neighborhood into a plurality of concentric circles, computing the gradient accumulation values in 8 directions on each concentric circle, and generating a feature vector from the 8 directions of each concentric circle;
step 23: judging whether the first component of the innermost circle's feature vector is the maximum value; if not, cyclically shifting all concentric circles to the left synchronously until the maximum component comes first;
step 24: normalizing the descriptor.
3. The image stitching method for airport foreign object detection according to claim 2, wherein the descriptor is normalized by adopting the following formula:
D' = D/||D||, that is, d'_ij = d_ij / sqrt( Σ_i Σ_j d_ij^2 ),
wherein D is the set of feature vectors on the concentric circles, d_ij is the jth component of the feature vector of the ith concentric circle, D_i = (d_i1, d_i2, ..., d_i8), and D' is the feature point descriptor.
4. The image stitching method for airport foreign object detection according to claim 1, wherein in step 3, before the key feature points are input into the airport runway pavement image registration network for registration, the airport runway pavement image registration network needs to be trained, wherein the airport runway pavement image registration network comprises 5 convolutional layers and two fully-connected layers.
5. The image stitching method for airport foreign object detection according to claim 4, wherein the specific method for training the airport runway pavement image registration network comprises the following steps:
step 31: performing Euclidean distance matching on the key feature points;
step 32: selecting m key feature points in image M and n key feature points in image N, assigning the m key feature points to a subset of the n key feature points so that the key feature points achieve an optimal matching, and setting this optimal matching as the training target of the airport runway pavement image registration network;
step 33: capturing a plurality of images and inputting them into the airport runway pavement image registration network for training.
6. The image stitching method for airport foreign object detection according to claim 5, wherein in step 31, the Euclidean distance matching of the key feature points comprises:
denoting the key feature point sets in image M and image N as P and Q;
calculating the distances from P_i in P to all points in Q, denoting the smallest distance as d_min and the second smallest distance as d_min-1; if the ratio
d_min / d_min-1
is smaller than a preset threshold, then P_i is considered to match the feature point at the smallest distance; otherwise it is not matched.
7. The image stitching method for airport foreign object detection according to claim 6, wherein the distance from P_i to each point in Q is calculated according to the following formula:
d(P_i, Q_i) = sqrt( Σ_j (P_ij - Q_ij)^2 ),
wherein P_ij is the jth coordinate of point P_i and Q_ij is the jth coordinate of point Q_i.
8. The image stitching method for airport foreign object detection according to claim 1, wherein in step 5, the stitched image is fused as follows:
dividing the overlapping region of the images to be stitched into two parts, denoted a_1 and a_2 respectively, and calculating by the following formula:
Gray = w_1(x,y)f_1(x,y) + w_2(x,y)f_2(x,y);
wherein Gray denotes the weighted value of the overlapping region, f_1(x,y) and f_2(x,y) respectively denote the images to be stitched M and N, f(x,y) denotes the stitched image, and w_1, w_2 respectively denote the pixel weights of the overlapping regions in M and N;
in a_1, when |f_1 - Gray| is smaller than the threshold, f(x,y) = Gray, otherwise f(x,y) = f_1(x,y);
in a_2, when |f_2 - Gray| is smaller than the threshold, f(x,y) = Gray, otherwise f(x,y) = f_2(x,y).
9. A server, comprising: a memory and at least one processor;
the memory stores a computer program, and the at least one processor executes the computer program stored by the memory to implement the image stitching method for airport foreign object detection as claimed in any one of claims 1 to 8.
10. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, which when executed implements the image stitching method for airport foreign object detection according to any one of claims 1 to 8.
CN202211581366.3A 2022-12-09 2022-12-09 Image stitching method, server and storage medium for airport foreign object detection Pending CN115829839A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211581366.3A CN115829839A (en) Image stitching method, server and storage medium for airport foreign object detection

Publications (1)

Publication Number Publication Date
CN115829839A true CN115829839A (en) 2023-03-21

Family

ID=85546099

Country Status (1)

Country Link
CN (1) CN115829839A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116704446A (en) * 2023-08-04 2023-09-05 武汉工程大学 Real-time detection method and system for foreign matters on airport runway pavement
CN116704446B (en) * 2023-08-04 2023-10-24 武汉工程大学 Real-time detection method and system for foreign matters on airport runway pavement


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination