CN111060076A - Method for planning routing of unmanned aerial vehicle inspection path and detecting foreign matters in airport flight area - Google Patents

Method for planning routing of unmanned aerial vehicle inspection path and detecting foreign matters in airport flight area

Info

Publication number
CN111060076A
CN111060076A (application CN201911271674.4A)
Authority
CN
China
Prior art keywords
image
detected
taxiway
region
runway
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911271674.4A
Other languages
Chinese (zh)
Other versions
CN111060076B (en)
Inventor
汤新民
陈济达
刘金安
李腾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201911271674.4A priority Critical patent/CN111060076B/en
Publication of CN111060076A publication Critical patent/CN111060076A/en
Application granted granted Critical
Publication of CN111060076B publication Critical patent/CN111060076B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V8/00 Prospecting or detecting by optical means
    • G01V8/10 Detecting, e.g. by using light barriers
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/257 Belief theory, e.g. Dempster-Shafer

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geophysics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for UAV inspection path planning and foreign object detection for an airport flight area, consisting of two parts: UAV inspection path planning, and airport FOD image detection based on multi-feature fusion. By combining the UAV industry with airport FOD inspection, the method can quickly and effectively inspect the airport target area using intelligent image recognition, schedule the FOD removal procedure in time, statistically record and analyse the quantity, characteristics, types, sources and other attributes of FOD, and build an airport FOD database.

Description

Method for planning routing of unmanned aerial vehicle inspection path and detecting foreign matters in airport flight area
Technical Field
The invention relates to a path planning technology and an intelligent image identification technology, in particular to an unmanned aerial vehicle autonomous path planning technology and an airport FOD image detection technology.
Background
FOD (Foreign Object Debris) refers to any foreign substance, debris or article that may damage an aircraft or its systems. FOD comes from a wide variety of sources, mainly including personnel, airport infrastructure (such as pavement, navigation lights and signs), the airport environment (such as wildlife, snow and ice) and equipment operating in the airport area. FOD has always been an important factor affecting the normal and safe operation of civil airports, causing immeasurable direct and indirect losses every year. Foreign object detection technology has developed against this background: it searches the runway by various technical means, identifies foreign objects and feeds the results back to a terminal for unified handling. At present, FOD prevention at most small and medium airports is carried out by staff, with the runway usually inspected manually 4 times a day. However, manual inspection is time-consuming and labour-intensive, inefficient, and cannot guarantee that foreign objects are completely removed. Common FOD detection systems include laser/millimetre-wave radar systems, in which equipment installed at key positions along the monitored pavement continuously scans the runway surface and triggers an alarm if FOD is detected, and thermal imaging systems, in which a thermal imager detects the electromagnetic radiation of different wavelengths emitted by foreign objects and feeds it back to a main control system for analysis and processing. In recent years unmanned aerial vehicles have developed rapidly and are applied in many fields, especially civil ones such as aerial photography, global-warming monitoring, disaster rescue, forest monitoring and land surveying. On the one hand, since the UAV market is not yet dominated by government or large enterprises, various operational risks remain in the UAV industry and the profit points of the market have not been fully mapped out; on the other hand, UAVs have low requirements on working conditions, can quickly perform repetitive and tedious work, and are low in cost and easy to maintain.
Airport FOD has always been a major hidden danger to safe flight operations. For small and medium airports, an integrated FOD detection system is an unaffordable option with low cost-performance, and the existing approach relies mainly on manual inspection, which is inefficient, cannot guarantee accuracy, and involves too many uncontrollable factors.
Disclosure of Invention
The purpose of the invention is as follows: to solve the problems in the prior art, the invention provides an airport flight area UAV inspection path planning and foreign object detection system, which combines the UAV with airport FOD inspection, can quickly and effectively inspect the target airport area for FOD, can schedule the FOD removal procedure in time, can statistically record and analyse the quantity, characteristics, types, sources and other attributes of FOD, and builds an airport FOD database.
The technical scheme is as follows: an unmanned aerial vehicle routing inspection path planning method for an airport flight area comprises the following steps:
step one: processing a target airport area through GIS application to form a plane graph in a vector format, subdividing the plane graph to obtain plane graphs of a runway and a main taxiway, analyzing the plane graphs of the runway and the main taxiway to obtain respective boundary coordinates, and storing the boundary coordinates into a database for later use;
step two: according to different connectivity, partitioning the plane diagrams of the runway and the main taxiway to obtain the area positions of the runway, the main taxiway and the liaison taxiway;
step three: selecting an area to carry out path planning on the area to obtain a routing inspection path track coordinate, and transmitting the routing inspection path track coordinate into a routing inspection unmanned aerial vehicle;
step four: and setting relevant flight parameters of the inspection unmanned aerial vehicle according to the trajectory coordinates of the inspection path, executing an inspection task by the unmanned aerial vehicle, and returning the video stream of the inspection area in real time.
Further, the second step specifically includes the following steps:
s021: rasterizing the plan view in step 1: marking the grids of the flyable areas of the unmanned aerial vehicle as 0 and the grids of the non-flyable areas as 1 to obtain a target airfield area represented by a 01 matrix, wherein the X direction of the 01 matrix is parallel to the runway direction, the Y direction of the 01 matrix is perpendicular to the runway direction, and the direction with a small subscript value of the 01 matrix is the runway direction;
s022: accumulating and summing all elements of each row or each column along the runway direction;
s023: screening out all items with the summation equal to zero, and combining the adjacent row numbers or column numbers to obtain the area positions of the runway and the main taxiway;
S024: searching the area of the target airport other than the runway and the main taxiway by a seed filling method to obtain the range and position of each connecting taxiway.
Further, the S024 specifically includes the steps of:
a. search row i+1 or column j+1 for elements whose value is 1; among the cells between these elements, randomly select a cell whose value is 0 as the seed, record its coordinate as A(i+1, y) or B(x, j+1), assign it a label, and push all feasible (value-0) cells adjacent to the seed onto a stack, where i indexes the rows and j indexes the columns of the 01 matrix;
b. pop the element at the top of the stack, assign it the same label as the seed, and push all feasible (value-0) cells adjacent to the popped element onto the stack;
c. repeat step b until the stack is empty; the cells in each stack space give the range and position of one connecting taxiway; store the data of each stack space into the database and name each part according to its label.
Further, in the third step, a "Z" scanning method is adopted for trajectory planning, and the method specifically includes the following steps:
according to the positions of the runway, the main taxiway and the connecting taxiways obtained in step two, calculate their extents in the X and Y directions: x_max − x_min and y_max − y_min. When x_max − x_min > y_max − y_min, perform Z-shaped scanning of the runway, the main taxiway and the connecting taxiways along the X direction to obtain the path plan; when x_max − x_min < y_max − y_min, perform Z-shaped scanning along the Y direction to obtain the path plan;
the Z-shaped scanning method comprises the following steps:
acquiring coordinates of all points in a Z-shaped scanning area, arranging row coordinates from small to large in sequence, and then arranging column coordinates from small to large in sequence;
randomly selecting a row/column, and rearranging the arrangement sequence of the row/column from large to small by a bubble sorting method;
all points are connected in sequence.
The invention also discloses a foreign matter detection method, which comprises the following steps:
s100: extracting image frames to be detected from the patrol area video stream returned in the fourth step, and preprocessing the image frames to obtain final area images to be detected, wherein the preprocessing comprises image enhancement, image registration and superposition and image segmentation in sequence;
s200: extracting brightness features, color features and edge features in the region image to be detected finally;
s300: calculating the basic probability of detecting foreign matters in each feature in S200;
s400: calculating the support degree among the features according to the basic probability obtained in the step S300, and fusing the features according to the support degree to obtain a fused image;
s500: and confirming whether an object different from the background of the road surface exists according to the fused image.
Further, in the S100, an image enhancement is performed based on a multi-scale Retinex algorithm;
the expression of the multi-scale Retinex algorithm is as follows:
L(x, y) = I(x, y) * F(x, y)    (2)
F(x, y) = K·exp(−(x² + y²)/σ²)    (3)
log R_i(x, y) = log I(x, y) − log[I(x, y) * F_i(x, y)]    (4)
R(x, y) = Σ_σ ω_σ·R_σ(x, y)    (5)
In the above formulas, x and y are pixel coordinates, I(x, y) is the image of the object, L(x, y) is its incident light component, R(x, y) is its reflected light component, and * denotes convolution; F(x, y) is a Gaussian low-pass filter function; K is the normalization constant and σ is the scale parameter of the Gaussian filter; ω_σ is the weight for each scale σ, with all weight coefficients summing to 1.
Further, the image registration and overlay in S100 includes the following steps:
S121: after image enhancement, find the centre point of the target image, establish a coordinate system with this centre as the origin, and record the four extreme points of the image as (X_1min, 0), (X_1max, 0), (0, Y_1max), (0, Y_1min);
S122: based on similarity measurement and feature-structure matching, find the position of the centre point in the n frames before and after the image to be registered, and record the four extreme points of each of those images: (X_2min, 0), (X_2max, 0), (0, Y_2max), (0, Y_2min), …, (X_nmin, 0), (X_nmax, 0), (0, Y_nmax), (0, Y_nmin);
S123: compare the coordinates obtained in S121 and S122, taking the maximum of the −X and −Y extremes and the minimum of the +X and +Y extremes to determine the common region [original equation image not reproduced];
S124: performing coordinate conversion on the parameters in the S123 by adopting an equation (6) to complete image registration;
I_1(x, y) = f(I_2(x, y))    (6)
where I_1(x, y) and I_2(x, y) are the gray values of pixel (x, y) in the two images and f denotes the coordinate transformation;
s125: fusing the target image subjected to image registration by adopting an equation (7);
I_m(x, y) = Σ_{i=1}^{n} ω_i [I_p(x, y) + I_{n,i}(x, y)]    (7)
where I_m is the mixed signal, I_n the noise signal, I_p the image signal, and ω_i the weight of the i-th image.
Further, the image segmentation in S100 includes the following steps:
S131: perform straight-line recognition on the registered and superimposed images using fused pixel separation and the classical Hough transform, divide the image into regions according to the recognized lines, and compute the average pixel value of each region, denoted pixel_n, where n is the number of divided regions;
S132: let the region to be detected be P_n; take the differences between the mean gray value of P_n and those of its adjacent regions, and let δ_min be the minimum of these differences;
S133: starting from the centre point of the region to be detected, expand within the connected domain in units of the 8 neighbouring pixels; compute the mean gray value of the expanded region and its difference from the mean gray value of the region to be detected; if the difference is within δ_min, the expanded area is assigned to the region being detected, otherwise expansion stops in that direction, giving the final region to be detected.
Further, the S200 specifically includes:
extracting the brightness and the color in the region image to be detected finally by adopting an image feature extraction method based on an Itti model;
calculating to obtain a brightness characteristic diagram and a color characteristic diagram in a final region image to be detected by adopting the center-periphery difference;
and extracting the edge characteristics of the finally detected region image by adopting a Canny operator to obtain an edge characteristic diagram.
Further, the basic probability of detecting the foreign object by each feature in S300 is:
P_k(A) = N_k(A) / N,  P_k(B) = N_k(B) / N
where A denotes the set of detected foreign objects, B the set of detected noise, and U the undeterminable case; N is the size of the sample set, N_k(A) is the number of foreign objects detected by feature (method) k, and N_k(B) is the number of noise detections by method k.
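As a toy illustration of this basic probability assignment (with the undeterminable mass taken from the fuzzy relation in equation (24) below), the following sketch uses made-up detection counts; the numbers are assumptions, not data from the patent.

def basic_probability(n_foreign, n_noise, n_total):
    # P_k(A) = N_k(A) / N and P_k(B) = N_k(B) / N for feature/method k
    p_a = n_foreign / n_total
    p_b = n_noise / n_total
    # Undeterminable case U, following the fuzzy relation P_k(U) = 1 - max(P_k(A), P_k(B))
    p_u = 1.0 - max(p_a, p_b)
    return {"A": p_a, "B": p_b, "U": p_u}

# Hypothetical counts for one feature over a 200-sample set
print(basic_probability(n_foreign=150, n_noise=30, n_total=200))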
Further, the support degree between the features in S400 is expressed as:
[Support-degree expressions: the original equation images are not reproduced here.]
where Δ_lc, Δ_le, Δ_ec are the Euclidean distances from the activity peaks under the different features to the foreign object, the minimum distance being taken to obtain the distance from the three points to the target;
the following fuzzy relations are obtained by fuzzy logic:
P_k(U) = NOT(P_k(A) or P_k(B))    (23)
P_k(U) = 1 − max(P_k(A), P_k(B))    (24)
the method for fusing the features according to the support degree by adopting the D-S evidence fusion theory comprises the following steps:
solving equation (25) from the support degree of each feature for A, B and U to obtain the corresponding basic probability assignment values:
[Equation (25): original equation image not reproduced.]
and combining the basic probability assignment functions of the features with the Dempster fusion rule, fusing the features in order of support degree from high to low.
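To make the final combination step concrete, here is a minimal sketch of Dempster's rule of combination over the frame {A, B}, with U treated as the full ignorance set {A, B}; the input mass values are invented for the example and would in practice come from equation (25).

from itertools import product

def dempster(m1, m2):
    """Combine two mass functions whose focal elements are frozensets (Dempster's rule)."""
    combined = {}
    conflict = 0.0
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        inter = x & y
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mx * my
        else:
            conflict += mx * my          # mass falling on the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset({"A"}), frozenset({"B"})
U = A | B                                # total ignorance {A, B}

# Hypothetical basic probability assignments for two features (e.g. luminance and colour)
m_lum = {A: 0.6, B: 0.1, U: 0.3}
m_col = {A: 0.5, B: 0.2, U: 0.3}

fused = dempster(m_lum, m_col)           # fuse the two features with higher support first
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})

A third feature would then be fused into the result in the same way.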
Has the advantages that: compared with the prior art, the invention has the following remarkable advantages:
1. UAV-based airport foreign object detection can dynamically patrol the runway, find FOD in time and raise an alarm; the FOD target can be accurately located immediately; the high-resolution two-dimensional image of the area can be checked in real time with manual intervention. Compared with manual inspection the efficiency is greatly improved, and compared with other FOD systems such as laser radar, thermal imaging and video camera systems, the purchase and maintenance costs are greatly reduced.
2. Compared with the millimetre-wave radar commonly used on the market, finding airport FOD by image processing greatly reduces purchase and maintenance costs. With machine learning added, FOD can be classified without staff having to confirm on site. The whole set of equipment is inexpensive and easy to operate, staff maintenance and learning time is short, and labour cost is saved. It is highly flexible and little affected by the environment.
Drawings
FIG. 1 is a schematic diagram of a database;
FIG. 2 is a schematic diagram of a Z-scan planning path;
FIG. 3 is a schematic diagram of randomly selecting a foggy road map for image enhancement;
FIG. 4 is a schematic diagram of capturing video streams for image overlay and enhancement;
FIG. 5 is a schematic view of a pavement marking line separation;
FIG. 6 is a schematic diagram of a method for detecting road surface foreign matter by various characteristic methods;
FIG. 7 is a multi-feature fusion technique roadmap.
Detailed Description
The technical solution of the present invention will be further explained with reference to the accompanying drawings and examples.
The basic idea of the invention is as follows: the invention is divided into two parts, UAV inspection path planning and airport FOD image detection based on multi-feature fusion. For UAV inspection path planning, the target area is first processed by a GIS application into a vector-format plan, which can be subdivided by taxiway and runway; the taxiway and runway plans are then analysed, and their boundary coordinates are obtained and stored in a database for later use. Development is then carried out with the DJI SDK to connect to the inspection UAV: the taxiway or runway requiring path planning is selected, the inspection grid size is set, the path is planned and the inspection track coordinates are obtained; the UAV flight control system starts the inspection after receiving the coordinates, records video in real time along the route and returns it. The airport FOD image detection based on multi-feature fusion first extracts the image frames to be detected from the received video stream, then splices them with several frames before and after on the time axis; digital image preprocessing reduces noise, and a multi-scale Retinex algorithm enhances the images. By fusing pixel separation and straight-line detection, the marking lines in the image are separated and the image is cut, reducing background complexity. Referring to the classical Itti model, the luminance and colour features of the sample set are extracted and edge features are extracted with the Canny operator; the basic probability that each feature detects a foreign object is calculated using the Monte Carlo method and fuzzy mathematics, a custom measure expresses the support degree between features, the features are finally fused according to support degree using D-S evidence fusion theory, and saliency detection is performed on the processed image.
Example (b):
the system for routing inspection and foreign matter detection of the unmanned aerial vehicle in the airport flight area is divided into two stages, namely a routing inspection and planning stage of the unmanned aerial vehicle and a FOD image detection stage.
The unmanned aerial vehicle routing inspection path planning stage comprises the following five steps:
s000: and processing the target airport area through GIS application to form a plane graph in a vector format, subdividing the plane graph according to the runway, the taxiways and the actual needs, analyzing the plane graphs of the taxiways and the runway, and storing boundary coordinates of the obtained plane graphs into a database for later use.
S010: the airport map recorded in ArcGIS generally uses the WGS-84 coordinate system. The WGS-84 geodetic coordinates (L, B) are converted into plane rectangular coordinates (x, y) by Gaussian projection, and the original image is projected into the new coordinate system. When geographical elevation data are computed, longitude and latitude coordinates are needed, so the plane coordinates are converted back to geodetic coordinates using the inverse Gaussian projection formula.
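As an illustration of this conversion, the sketch below uses the pyproj library to define a Gauss-Krueger (transverse Mercator) projection and convert WGS-84 longitude/latitude to plane coordinates and back; the central meridian, false easting and the sample point are assumptions for the example, not values taken from the patent.

# Sketch only: WGS-84 (L, B) <-> Gauss-Krueger plane coordinates with pyproj.
from pyproj import CRS, Transformer

wgs84 = CRS.from_epsg(4326)                     # geodetic lon/lat
gauss = CRS.from_proj4(
    "+proj=tmerc +lat_0=0 +lon_0=117 +k=1 +x_0=500000 +y_0=0 +ellps=WGS84 +units=m"
)

fwd = Transformer.from_crs(wgs84, gauss, always_xy=True)   # (lon, lat) -> (x, y)
inv = Transformer.from_crs(gauss, wgs84, always_xy=True)   # (x, y) -> (lon, lat)

lon, lat = 118.86, 31.74          # hypothetical point near an airport
x, y = fwd.transform(lon, lat)    # plane rectangular coordinates
lon2, lat2 = inv.transform(x, y)  # back-calculation to geodetic coordinates
print(x, y, lon2, lat2)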
S020: the embodiment mainly faces to an airport flight area which mainly comprises a runway and a taxiway, so that a target area is divided into different blocks according to connectivity to obtain the runway, a main taxiway and a connecting taxiway;
the target area-based blocking method specifically comprises the following steps:
s021: rasterizing the plan view in S000: marking the grids of the flyable areas of the unmanned aerial vehicle as 0 and the grids of the non-flyable areas as 1 to obtain a target airport area represented by a 01 matrix, wherein the X direction of the 01 matrix is parallel to the runway direction, the Y direction is perpendicular to the runway direction, and the number of squares in the X direction and the Y direction of the matrix is compared, namely, judging that n is greater than m or n is less than m. The directions of the runway and the main taxiway are consistent with the direction with smaller numerical value of the subscript of the matrix;
s022: along the runway direction, all elements of each row or column of the direction are summed up, namely:
S_i = Σ_j a_(i,j)  (summing all elements of row i)
or
S_j = Σ_i a_(i,j)  (summing all elements of column j)
where a_(i,j) is the element of the 01 matrix in row i and column j.
S023: calculating all terms with the summation equal to zero, and combining the adjacent row numbers or column numbers to obtain the positions of the runway and the main taxiway;
s024: searching the range and the position of each contact taxiway by using a seed filling method, specifically, searching a 01 matrix representing an airport area, wherein the searching range is an area except a runway and a main taxiway in the matrix;
the specific operation steps are as follows:
a. search row i+1 or column j+1 for elements whose value is 1; in each interval between them, randomly select a cell whose value is 0, take its coordinate A(i+1, y) or B(x, j+1) as the seed, and push all feasible (value-0) cells adjacent to the seed onto a stack;
b. pop the element at the top of the stack, assign it the same label, and push all value-0 cells adjacent to the popped element onto the stack;
c. repeat step b until the stack is empty; store the data of each stack space into the database and name each part according to its label. Since the positions of the elements in each stack are known, the position and range of the connecting taxiway represented by each stack are known.
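As a rough sketch of S021 to S024 under assumed data, the snippet below partitions a toy 01 grid: rows parallel to the runway whose element sum is zero are taken as runway / main-taxiway strips, and the remaining flyable cells are grouped into connecting-taxiway regions with a stack-based seed fill (the grid values and the 4-neighbour adjacency are assumptions of the example).

import numpy as np

# Toy 01 grid: 0 = flyable, 1 = no-fly; the X direction (columns) runs along the runway.
grid = np.array([
    [0, 0, 0, 0, 0, 0, 0, 0],   # runway strip (row sum == 0)
    [1, 1, 0, 1, 1, 0, 1, 1],   # connecting taxiways between the strips
    [0, 0, 0, 0, 0, 0, 0, 0],   # main taxiway strip (row sum == 0)
])

# S022/S023: rows whose element sum is zero form the runway / main-taxiway strips.
strip_rows = [i for i in range(grid.shape[0]) if grid[i].sum() == 0]
print("runway / main taxiway rows:", strip_rows)

# S024: stack-based seed fill over the remaining flyable (value-0) cells.
visited = np.zeros_like(grid, dtype=bool)
regions = {}          # label -> list of (row, col) cells of one connecting taxiway
label = 0
for i in range(grid.shape[0]):
    if i in strip_rows:
        continue
    for j in range(grid.shape[1]):
        if grid[i, j] != 0 or visited[i, j]:
            continue
        label += 1
        stack = [(i, j)]          # seed
        visited[i, j] = True
        cells = []
        while stack:              # steps a-c: pop, label, push adjacent value-0 cells
            r, c = stack.pop()
            cells.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]
                        and nr not in strip_rows
                        and grid[nr, nc] == 0 and not visited[nr, nc]):
                    visited[nr, nc] = True
                    stack.append((nr, nc))
        regions[label] = cells
print("connecting taxiway regions:", regions)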
S025: numbering communicated areas such as a runway, a main taxiway and contact taxiways;
s030: adopt "Z" word scanning method to carry out the trajectory planning, to this kind of regular rectangle of runway and main taxiway, scan the inside track point of runway, main taxiway and communicate different blocks again, specifically do: calculating the distance in the x and y directions, i.e. x, according to the two-dimensional coordinates of the planemax-xminAnd ymax-yminWhen x ismax-xmin>ymax-yminIn the course of time, just in the runway and main taxiway according to the x directionPerforming Z-shaped scanning; when x ismax-xmin<ymax-yminThen, Z-shaped scanning is carried out on the runway and the main taxiway according to the y direction;
and for irregular graphs such as the communicated taxiways, in order to ensure that the whole area can be traversed, the communicated taxiways are directly inspected in the same direction according to the steps.
The Z-shaped scanning steps are as follows:
s031: taking out coordinates of all points in a Z-shaped scanning area, preferentially arranging the coordinates from small to large according to the size of the row coordinates, and then sequentially arranging the coordinates from small to large according to the size of the column coordinates;
s032: randomly selecting odd or even rows, and rearranging the arrangement sequence of the rows (columns) according to the descending order of the columns (rows) by a bubble sorting method;
s033: and connecting all the points in sequence to obtain the area to be scanned.
S040: after path planning, the inspection path track coordinates are obtained and transmitted to the UAV flight control system; the relevant flight parameters of the inspection UAV are set, the UAV starts the inspection task, and video of the inspection area is returned in real time.
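A minimal sketch of the Z-shaped scan of S031 to S033, assuming the region is supplied as a list of grid cells: the cells are sorted by row and column, every other row is reversed (here with an explicit bubble sort to mirror the description), and connecting the points in the resulting order gives the inspection track.

def z_scan(cells):
    """Order grid cells (row, col) into a Z-shaped (boustrophedon) track."""
    # S031: sort by row first, then by column, both ascending.
    cells = sorted(cells)
    rows = {}
    for r, c in cells:
        rows.setdefault(r, []).append((r, c))
    # S032: reverse the ordering of every other row with a bubble sort.
    path = []
    for idx, r in enumerate(sorted(rows)):
        row = rows[r]
        if idx % 2 == 1:                       # odd rows: rearrange large -> small
            for i in range(len(row)):          # simple bubble sort, as described
                for j in range(len(row) - 1 - i):
                    if row[j][1] < row[j + 1][1]:
                        row[j], row[j + 1] = row[j + 1], row[j]
        path.extend(row)
    # S033: connecting the points in this order gives the scan path.
    return path

# Example: a 3 x 4 rectangular block of cells.
cells = [(r, c) for r in range(3) for c in range(4)]
print(z_scan(cells))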
The airport FOD image detection technology stage of this embodiment includes the following three parts:
s100: after the real-time video of the inspection area returned by S040 is obtained, the shot image is preprocessed by using a digital image processing means before foreign matter identification, so that the problems of noise in the image and motion blur possibly generated are mainly eliminated, the data volume is effectively reduced, and meanwhile, the problems of image visibility and contrast reduction caused by weather conditions are solved. The image visual effect through preprocessing is better, the data volume is less, and the accuracy and the detection efficiency of follow-up foreign matter detection are favorably improved. The pretreatment process of the embodiment is divided into three parts: image enhancement, image registration and superposition, and image segmentation.
S110: image enhancement: this embodiment uses a Retinex method based on image splicing. The image of an object is denoted I(x, y) and consists of the incident light component and the reflected light component of the object. L(x, y) is the incident light component, which determines the dynamic range of the image; R(x, y) is the reflected light component, which carries the detail information of the image. The perceived colour intensity of each channel is therefore:
I(x,y)=R(x,y)×L(x,y) (1)
in the formula, x and y are coordinate points. In order to reduce the complexity of the calculation, it is considered that the light irradiation component of a certain point passes through the adjacent position estimation, and a Multi-Scale Retinex (MSR) based on the MSR is also produced.
L(x, y) = I(x, y) * F(x, y)    (2)
F(x, y) = K·exp(−(x² + y²)/σ²)    (3)
log R_i(x, y) = log I(x, y) − log[I(x, y) * F_i(x, y)]    (4)
R(x, y) = Σ_σ ω_σ·R_σ(x, y)    (5)
In the above formulas, * denotes convolution; F(x, y) is a Gaussian low-pass filter function; K is the normalization constant and σ is the scale parameter of the Gaussian filter. To balance the global background and the local details, three different values of σ are generally used, each given a weight ω_σ, with all weight coefficients summing to 1.
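A compact sketch of the multi-scale Retinex of equations (2) to (5), assuming a grayscale input and three example scales; OpenCV's GaussianBlur stands in for the convolution with the Gaussian surround F(x, y), and the scale values, weights and file names are illustrative choices.

import cv2
import numpy as np

def msr(image, sigmas=(15, 80, 250), weights=(1/3, 1/3, 1/3)):
    """Multi-scale Retinex: R = sum_sigma w_sigma * (log I - log(I * F_sigma))."""
    img = image.astype(np.float64) + 1.0          # avoid log(0)
    result = np.zeros_like(img)
    for sigma, w in zip(sigmas, weights):
        # I * F_sigma: Gaussian low-pass estimate of the illumination L(x, y)
        blur = cv2.GaussianBlur(img, (0, 0), sigma)
        result += w * (np.log(img) - np.log(blur))   # equations (4)-(5)
    # Stretch the reflectance back to a displayable 8-bit range.
    result = cv2.normalize(result, None, 0, 255, cv2.NORM_MINMAX)
    return result.astype(np.uint8)

gray = cv2.imread("runway_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
if gray is not None:
    enhanced = msr(gray)
    cv2.imwrite("runway_frame_msr.png", enhanced)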
S120: image registration and superposition: some foreign objects are so small that the algorithm may treat them as noise and erase them. To avoid missing foreign objects as far as possible, the detection image at the centre of the video segment is spliced with the frames before and after it; this accumulates target energy, eliminates the random noise of the video, and yields a higher-resolution image. To eliminate the interference of pavement marking lines with foreign object detection, the system separates the runway and the marking lines in the whole picture using colour and edge features, reducing background complexity, and detects foreign objects in each part separately, improving detection efficiency. The whole process is divided into image registration and image fusion.
Image registration mainly establishes the spatial and gray-scale mapping between the images. If I_1(x, y) and I_2(x, y) denote the gray values of pixel (x, y) in the two images, registration essentially finds the spatial geometric transformation between the two images to be registered, i.e.:
I_1(x, y) = f(I_2(x, y))    (6)
where f is the coordinate transformation.
The specific steps of image registration in this embodiment are as follows:
S121: find the centre point of the target image, establish a coordinate system with this point as the origin, and record the four extreme points of the image as (X_1min, 0), (X_1max, 0), (0, Y_1max), (0, Y_1min);
S122: based on similarity measurement and feature-structure matching, find the position of the centre point in the several frames before and after the image to be registered, and record the four extreme points of each image: (X_2min, 0), (X_2max, 0), (0, Y_2max), (0, Y_2min), …, (X_nmin, 0), (X_nmax, 0), (0, Y_nmax), (0, Y_nmin);
S123: compare these sets of coordinates, taking the maximum of the −X and −Y extremes and the minimum of the +X and +Y extremes to obtain the common region [original equation image not reproduced];
S124: and completing coordinate conversion and interpolation according to the parameters in the S123, and completing matching.
Image fusion: since the coordinate origin cannot be located exactly and only slight displacements exist between the images, the images to be combined are still highly correlated. This embodiment uses a weighted-average method: the several images are weighted differently according to their distance from the target detection image, generally with closer images receiving higher weights. If the image signal is I_p and the noise signal is I_n, the mixed signal I_m is:
I_m(x, y) = Σ_{i=1}^{n} ω_i [I_p(x, y) + I_{n,i}(x, y)]    (7)
In the formula, ω_i is the weight of the i-th image. Since the background noise cannot be zero in practice, a constant noise value indicates a DC component, while a noise value that varies as a function indicates correlation with other frequencies. As long as it is not too large, the result is little affected and the practical requirement can still be met.
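A minimal sketch of the weighted-average fusion described here, assuming the frames are already registered: each frame is weighted by an example inverse-distance scheme relative to the target detection frame (the weighting scheme is an assumption), and the weighted sum averages down the random noise.

import numpy as np

def fuse(frames, center_index):
    """Weighted average of registered frames; closer frames get larger weights."""
    # Example weighting: inverse distance to the target detection frame.
    raw = np.array([1.0 / (1 + abs(i - center_index)) for i in range(len(frames))])
    weights = raw / raw.sum()                      # weights sum to 1, as in (7)
    stack = np.stack([f.astype(np.float64) for f in frames])
    fused = np.tensordot(weights, stack, axes=1)   # I_m = sum_i w_i * I_i
    return fused.astype(np.uint8)

# Example with three dummy 4x4 "frames"
frames = [np.full((4, 4), v, dtype=np.uint8) for v in (100, 110, 120)]
print(fuse(frames, center_index=1))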
S130: image segmentation: the marking lines are generally polygons, the edges of the marking lines are obvious straight lines, the system identifies the straight lines in the image by fusing pixel separation and classical Hough Transform (Hough Transform), and then better partitions the runway from the marking lines by detecting the gray value of the pixels of each area.
The specific method comprises the following steps:
S131: calculate the average pixel value of each area divided by the red lines, recorded as pixel1, pixel2, pixel3, …;
S132: let the region to be detected be P_n; take the differences Δ_1, Δ_2, Δ_3, … between its mean gray value and those of its adjacent regions, and record the minimum difference as δ_min;
S133: starting from the centre point of the region, expand along the 8 neighbouring pixels within the connected domain, take the difference between the gray value and the mean gray value of the region, and judge whether the difference is within δ_min; if so, the pixel is classified as part of P_n, otherwise expansion in that direction stops.
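A sketch of the region growing in S131 to S133, assuming the marking lines have already split the image and that δ_min is supplied: starting from a seed at the region centre, 8-connected neighbours are absorbed while their gray values stay within δ_min of the running region mean. The toy image and threshold are assumptions for the example.

import numpy as np

def grow_region(gray, seed, delta_min):
    """8-neighbour region growing around `seed` bounded by delta_min."""
    h, w = gray.shape
    region = {seed}
    frontier = [seed]
    mean = float(gray[seed])
    while frontier:
        r, c = frontier.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                    # S133: keep the pixel if the gray-value difference stays within delta_min
                    if abs(float(gray[nr, nc]) - mean) <= delta_min:
                        region.add((nr, nc))
                        frontier.append((nr, nc))
                        mean = np.mean([float(gray[p]) for p in region])
    return region

gray = np.full((20, 20), 120, dtype=np.uint8)
gray[5:9, 5:9] = 180                              # a brighter patch, e.g. a marking
print(len(grow_region(gray, seed=(10, 10), delta_min=10)))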
S200: for saliency detection, this embodiment extracts the luminance and colour of the image using an image feature extraction method based on the Itti model; the calculation formulas are as follows:
I = (r + g + b) / 3
wherein r, g and b represent red, green and blue channels respectively. In order to obtain the color characteristics of the image, a color Gaussian pyramid with nine scales is constructed:
R = r − (g + b)/2
G = g − (r + b)/2
B = b − (r + g)/2
Y = (r + g)/2 − |r − g|/2 − b
wherein R, G, B and Y are the four broadly tuned colour channels for red, green, blue and yellow respectively. To obtain the brightness and colour feature maps, the corresponding maps are computed with the centre-surround difference, denoted by the operator ⊖, as follows:
I(c, s) = |I(c) ⊖ I(s)|
RG(c, s) = |(R(c) − G(c)) ⊖ (G(s) − R(s))|
BY(c, s) = |(B(c) − Y(c)) ⊖ (Y(s) − B(s))|
where c ∈ {2, 3, 4} and s = c + δ with δ ∈ {3, 4}. The operator ⊖ denotes across-scale subtraction: the two images are interpolated to the same size and subtracted element by element. I denotes the brightness feature map, and RG and BY denote the colour feature maps, modelling the colour opponency of the human visual system.
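A condensed sketch of the feature extraction above: intensity and opponent-colour channels are computed from an RGB frame with the Itti formulas, a Gaussian pyramid supplies the nine scales, centre-surround differences give the I, RG and BY maps, and a Canny operator supplies the edge map. The pyramid depth, the resize-based across-scale subtraction and the Canny thresholds are example assumptions.

import cv2
import numpy as np

def itti_features(bgr, levels=9):
    b, g, r = [c.astype(np.float64) for c in cv2.split(bgr)]
    I = (r + g + b) / 3.0
    R = r - (g + b) / 2.0
    G = g - (r + b) / 2.0
    B = b - (r + g) / 2.0
    Y = (r + g) / 2.0 - np.abs(r - g) / 2.0 - b

    def pyramid(img):
        pyr = [img]
        for _ in range(levels - 1):
            pyr.append(cv2.pyrDown(pyr[-1]))
        return pyr

    def cs(pyr, c, s):
        # Centre-surround: bring level s to the size of level c and subtract.
        surround = cv2.resize(pyr[s], pyr[c].shape[::-1])
        return np.abs(pyr[c] - surround)

    pI, pR, pG, pB, pY = map(pyramid, (I, R, G, B, Y))
    I_maps, RG_maps, BY_maps = [], [], []
    for c in (2, 3, 4):
        for d in (3, 4):
            s = c + d
            I_maps.append(cs(pI, c, s))
            RG_maps.append(np.abs((pR[c] - pG[c]) - cv2.resize(pG[s] - pR[s], pR[c].shape[::-1])))
            BY_maps.append(np.abs((pB[c] - pY[c]) - cv2.resize(pY[s] - pB[s], pB[c].shape[::-1])))
    return I_maps, RG_maps, BY_maps

frame = cv2.imread("runway_frame.png")            # hypothetical frame
if frame is not None:
    I_maps, RG_maps, BY_maps = itti_features(frame)
    edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150)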
In addition, all feature maps are fused with D-S evidence fusion theory to obtain the final saliency map. Through the Itti model and the Canny edge operator, each sample yields saliency maps with activity peaks based on brightness, colour and edge features. The input saliency maps are first normalized to the same range. Suppose the positions of the activity peaks of the brightness saliency map in the whole map (i.e. the matrix) are l_1(x_l1, y_l1), l_2(x_l2, y_l2), …, l_m(x_lm, y_lm); similarly, the peak positions for colour and edges are C_1(x_c1, y_c1), …, C_n(x_cn, y_cn) and E_1(x_e1, y_e1), …, E_k(x_ek, y_ek), where m, n, k are natural numbers.
Since there is measurement error between each feature and the detection target, confidence between the features is introduced to increase the accuracy of foreign object detection. The position of the foreign object is first recorded, the Euclidean distance from each activity peak to the foreign object is computed for the different features, and the minimum distances give the three distances Δ_lc, Δ_le, Δ_ec from the three points to the target. From these three distances, the confidence of each feature in detecting the foreign object is obtained, as expressed by the following relations:
[Confidence/support-degree expressions: the original equation images are not reproduced here.]
where S_l, S_c, S_e are the support degrees from the other two features and C is the support matrix over all features.
S300: finally, the sample set is learned with a Monte Carlo method to obtain the basic probabilities. In the detection process, A denotes the set of detected foreign objects, B the set of detected noise, and U the undeterminable case. Suppose the total sample set has N samples, of which method k detects N_k(A) foreign objects and N_k(B) noise detections; then the probabilities are:
P_k(A) = N_k(A) / N,  P_k(B) = N_k(B) / N
after the confidence degrees of the information provided by each feature are obtained, the confidence degrees corresponding to the method are multiplied respectively, and then the support degree of each feature on the whole foreign object detection is obtained.
[Support-degree equations: the original equation images are not reproduced here.]
And simultaneously, obtaining the following fuzzy relation according to fuzzy logic:
P_k(U) = NOT(P_k(A) or P_k(B))    (23)
P_k(U) = 1 − max(P_k(A), P_k(B))    (24)
through the steps, the support degree of each feature pair A, B and U is obtained, and then the corresponding basic probability distribution value is obtained:
[Equation (25): conversion of support degrees into basic probability assignments; original equation image not reproduced.]
and finally, combining the basic probability distribution functions of the features by using a Dempster fusion rule, firstly fusing two features with higher support degree, and then fusing a third feature.
Since the type and size of the intruding object are not defined in advance, detection of foreign objects on the airport pavement is carried out by analysing the low-level features of the image to confirm whether anything different from the pavement background, i.e. the airport foreign object to be detected, is present.

Claims (11)

1. An unmanned aerial vehicle routing inspection path planning method for an airport flight area is characterized by comprising the following steps: the method comprises the following steps:
step one: processing a target airport area through GIS application to form a plane graph in a vector format, subdividing the plane graph to obtain plane graphs of a runway and a main taxiway, analyzing the plane graphs of the runway and the main taxiway to obtain respective boundary coordinates, and storing the boundary coordinates into a database for later use;
step two: according to different connectivity, partitioning the plane diagrams of the runway and the main taxiway to obtain the area positions of the runway, the main taxiway and the liaison taxiway;
step three: selecting an area to carry out path planning on the area to obtain a routing inspection path track coordinate, and transmitting the routing inspection path track coordinate into a routing inspection unmanned aerial vehicle;
step four: and setting relevant flight parameters of the inspection unmanned aerial vehicle according to the trajectory coordinates of the inspection path, executing an inspection task by the unmanned aerial vehicle, and returning the video stream of the inspection area in real time.
2. The method for planning the routing of the unmanned aerial vehicle inspection tour facing the airport flight area according to claim 1, wherein: the second step specifically comprises the following steps:
s021: rasterizing the plan view in step 1: marking the grids of the flyable areas of the unmanned aerial vehicle as 0 and the grids of the non-flyable areas as 1 to obtain a target airfield area represented by a 01 matrix, wherein the X direction of the 01 matrix is parallel to the runway direction, the Y direction of the 01 matrix is perpendicular to the runway direction, and the direction with a small subscript value of the 01 matrix is the runway direction;
s022: accumulating and summing all elements of each row or each column along the runway direction;
s023: screening out all items with the summation equal to zero, and combining the adjacent row numbers or column numbers to obtain the area positions of the runway and the main taxiway;
S024: searching the area of the target airport other than the runway and the main taxiway by a seed filling method to obtain the range and position of each connecting taxiway.
3. The method for planning the routing of the unmanned aerial vehicle inspection tour facing the airport flight area according to claim 2, is characterized in that: the S024 specifically comprises the following steps:
a. search row i+1 or column j+1 for elements whose value is 1; among the cells between these elements, randomly select a cell whose value is 0 as the seed, record its coordinate as A(i+1, y) or B(x, j+1), assign it a label, and push all feasible (value-0) cells adjacent to the seed onto a stack, where i indexes the rows and j indexes the columns of the 01 matrix;
b. pop the element at the top of the stack, assign it the same label as the seed, and push all feasible (value-0) cells adjacent to the popped element onto the stack;
c. repeat step b until the stack is empty; the cells in each stack space give the range and position of one connecting taxiway; store the data of each stack space into the database and name each part according to its label.
4. The method for planning the routing of the unmanned aerial vehicle inspection tour facing the airport flight area according to claim 1, wherein: in the third step, a Z-shaped scanning method is adopted for trajectory planning, and the method specifically comprises the following steps:
according to the positions of the runway, the main taxiway and the connecting taxiways obtained in step two, calculate their extents in the X and Y directions: x_max − x_min and y_max − y_min. When x_max − x_min > y_max − y_min, perform Z-shaped scanning of the runway, the main taxiway and the connecting taxiways along the X direction to obtain the path plan; when x_max − x_min < y_max − y_min, perform Z-shaped scanning along the Y direction to obtain the path plan;
the Z-shaped scanning method comprises the following steps:
acquiring coordinates of all points in a Z-shaped scanning area, arranging row coordinates from small to large in sequence, and then arranging column coordinates from small to large in sequence;
randomly selecting a row/column, and rearranging the arrangement sequence of the row/column from large to small by a bubble sorting method;
all points are connected in sequence.
5. The foreign matter detection method for the unmanned aerial vehicle inspection path planning method for the airport flight area based on any one of claims 1 to 4, is characterized in that: the method comprises the following steps:
s100: extracting image frames to be detected from the patrol area video stream returned in the fourth step, and preprocessing the image frames to obtain final area images to be detected, wherein the preprocessing comprises image enhancement, image registration and superposition and image segmentation in sequence;
s200: extracting brightness features, color features and edge features in the region image to be detected finally;
s300: calculating the basic probability of detecting foreign matters in each feature in S200;
s400: calculating the support degree among the features according to the basic probability obtained in the step S300, and fusing the features according to the support degree to obtain a fused image;
s500: and confirming whether an object different from the background of the road surface exists according to the fused image.
6. The foreign object detection method according to claim 5, characterized in that: performing image enhancement based on a multi-scale Retinex algorithm in the S100;
the expression of the multi-scale Retinex algorithm is as follows:
L(x, y) = I(x, y) * F(x, y)    (2)
F(x, y) = K·exp(−(x² + y²)/σ²)    (3)
log R_i(x, y) = log I(x, y) − log[I(x, y) * F_i(x, y)]    (4)
R(x, y) = Σ_σ ω_σ·R_σ(x, y)    (5)
In the above formulas, x and y are pixel coordinates, I(x, y) is the image of the object, L(x, y) is its incident light component, R(x, y) is its reflected light component, and * denotes convolution; F(x, y) is a Gaussian low-pass filter function; K is the normalization constant and σ is the scale parameter of the Gaussian filter; ω_σ is the weight for each scale σ, with all weight coefficients summing to 1.
7. The foreign object detection method according to claim 6, characterized in that: the image registration and overlay in said S100 comprises the steps of:
S121: after image enhancement, find the centre point of the target image, establish a coordinate system with this centre as the origin, and record the four extreme points of the image as (X_1min, 0), (X_1max, 0), (0, Y_1max), (0, Y_1min);
S122: based on similarity measurement and feature-structure matching, find the position of the centre point in the n frames before and after the image to be registered, and record the four extreme points of each of those images: (X_2min, 0), (X_2max, 0), (0, Y_2max), (0, Y_2min), …, (X_nmin, 0), (X_nmax, 0), (0, Y_nmax), (0, Y_nmin);
S123: compare the coordinates obtained in S121 and S122, taking the maximum of the −X and −Y extremes and the minimum of the +X and +Y extremes to determine the common region [original equation image not reproduced];
S124: performing coordinate conversion on the parameters in the S123 by adopting an equation (6) to complete image registration;
I_1(x, y) = f(I_2(x, y))    (6)
where I_1(x, y) and I_2(x, y) are the gray values of pixel (x, y) in the two images and f denotes the coordinate transformation;
s125: fusing the target image subjected to image registration by adopting an equation (7);
I_m(x, y) = Σ_{i=1}^{n} ω_i [I_p(x, y) + I_{n,i}(x, y)]    (7)
where I_m is the mixed signal, I_n the noise signal, I_p the image signal, and ω_i the weight of the i-th image.
8. The foreign object detection method according to claim 7, characterized in that: the image segmentation in S100 includes the steps of:
S131: perform straight-line recognition on the registered and superimposed images using fused pixel separation and the classical Hough transform, divide the image into regions according to the recognized lines, and compute the average pixel value of each region, denoted pixel_n, where n is the number of divided regions;
S132: let the region to be detected be P_n; take the differences between the mean gray value of P_n and those of its adjacent regions, and let δ_min be the minimum of these differences;
S133: starting from the centre point of the region to be detected, expand within the connected domain in units of the 8 neighbouring pixels; compute the mean gray value of the expanded region and its difference from the mean gray value of the region to be detected; if the difference is within δ_min, the expanded area is assigned to the region being detected, otherwise expansion stops in that direction, giving the final region to be detected.
9. The foreign object detection method according to claim 5, characterized in that: the S200 specifically includes:
extracting the brightness and the color in the region image to be detected finally by adopting an image feature extraction method based on an Itti model;
calculating to obtain a brightness characteristic diagram and a color characteristic diagram in a final region image to be detected by adopting the center-periphery difference;
and extracting the edge characteristics of the finally detected region image by adopting a Canny operator to obtain an edge characteristic diagram.
10. The foreign object detection method according to claim 5, characterized in that: the basic probability of detecting the foreign object by each feature in S300 is:
P_k(A) = N_k(A) / N,  P_k(B) = N_k(B) / N
where A denotes the set of detected foreign objects, B the set of detected noise, and U the undeterminable case; N is the size of the sample set, N_k(A) is the number of foreign objects detected by feature (method) k, and N_k(B) is the number of noise detections by method k.
11. The foreign object detection method according to claim 10, characterized in that: the support degree between the features in S400 is represented as:
[Support-degree expressions: the original equation images are not reproduced here.]
where Δ_lc, Δ_le, Δ_ec are the Euclidean distances from the activity peaks under the different features to the foreign object, the minimum distance being taken to obtain the distance from the three points to the target;
the following fuzzy relations are obtained by fuzzy logic:
P_k(U) = NOT(P_k(A) or P_k(B))    (23)
P_k(U) = 1 − max(P_k(A), P_k(B))    (24)
the method for fusing the features according to the support degree by adopting the D-S evidence fusion theory comprises the following steps:
solving equation (25) from the support degree of each feature for A, B and U to obtain the corresponding basic probability assignment values:
[Equation (25): original equation image not reproduced.]
and combining the basic probability assignment functions of the features with the Dempster fusion rule, fusing the features in order of support degree from high to low.
CN201911271674.4A 2019-12-12 2019-12-12 Method for planning routing of unmanned aerial vehicle inspection path and detecting foreign matters in airport flight area Active CN111060076B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911271674.4A CN111060076B (en) 2019-12-12 2019-12-12 Method for planning routing of unmanned aerial vehicle inspection path and detecting foreign matters in airport flight area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911271674.4A CN111060076B (en) 2019-12-12 2019-12-12 Method for planning routing of unmanned aerial vehicle inspection path and detecting foreign matters in airport flight area

Publications (2)

Publication Number Publication Date
CN111060076A true CN111060076A (en) 2020-04-24
CN111060076B CN111060076B (en) 2021-10-08

Family

ID=70298897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911271674.4A Active CN111060076B (en) 2019-12-12 2019-12-12 Method for planning routing of unmanned aerial vehicle inspection path and detecting foreign matters in airport flight area

Country Status (1)

Country Link
CN (1) CN111060076B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111950456A (en) * 2020-08-12 2020-11-17 成都成设航空科技股份公司 Intelligent FOD detection method and system based on unmanned aerial vehicle
CN112116830A (en) * 2020-09-02 2020-12-22 南京航空航天大学 Unmanned aerial vehicle dynamic geo-fence planning method based on airspace meshing
CN112162566A (en) * 2020-09-04 2021-01-01 深圳市创客火科技有限公司 Route planning method, electronic device and computer-readable storage medium
CN112484717A (en) * 2020-11-23 2021-03-12 国网福建省电力有限公司 Unmanned aerial vehicle oblique photography route planning method and computer readable storage medium
CN112686172A (en) * 2020-12-31 2021-04-20 上海微波技术研究所(中国电子科技集团公司第五十研究所) Method and device for detecting foreign matters on airport runway and storage medium
CN112860946A (en) * 2021-01-18 2021-05-28 四川弘和通讯有限公司 Method and system for converting video image information into geographic information
TWI750859B (en) * 2020-10-22 2021-12-21 長榮大學 UAV swarm flying to carry out the inspection method of the airport runway and the periphery of the field
CN116520853A (en) * 2023-06-08 2023-08-01 江苏商贸职业学院 Agricultural inspection robot based on artificial intelligence technology
CN117372960A (en) * 2023-10-24 2024-01-09 宁夏隆合科技有限公司 Unmanned aerial vehicle inspection target attribute automatic extraction method combining scene relation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581143A (en) * 2015-01-14 2015-04-29 宁波大学 Reference-free three-dimensional picture quality objective evaluation method based on machine learning
CN104835178A (en) * 2015-02-02 2015-08-12 郑州轻工业学院 Low SNR(Signal to Noise Ratio) motion small target tracking and identification method
CN105427320A (en) * 2015-11-30 2016-03-23 威海北洋电气集团股份有限公司 Image segmentation and extraction method
CN109765930A (en) * 2019-01-29 2019-05-17 理光软件研究所(北京)有限公司 A kind of unmanned plane vision navigation system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581143A (en) * 2015-01-14 2015-04-29 宁波大学 Reference-free three-dimensional picture quality objective evaluation method based on machine learning
CN104835178A (en) * 2015-02-02 2015-08-12 郑州轻工业学院 Low SNR(Signal to Noise Ratio) motion small target tracking and identification method
CN105427320A (en) * 2015-11-30 2016-03-23 威海北洋电气集团股份有限公司 Image segmentation and extraction method
CN109765930A (en) * 2019-01-29 2019-05-17 理光软件研究所(北京)有限公司 A kind of unmanned plane vision navigation system

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111950456A (en) * 2020-08-12 2020-11-17 成都成设航空科技股份公司 Intelligent FOD detection method and system based on unmanned aerial vehicle
CN112116830A (en) * 2020-09-02 2020-12-22 南京航空航天大学 Unmanned aerial vehicle dynamic geo-fence planning method based on airspace meshing
CN112116830B (en) * 2020-09-02 2021-09-24 南京航空航天大学 Unmanned aerial vehicle dynamic geo-fence planning method based on airspace meshing
CN112162566A (en) * 2020-09-04 2021-01-01 深圳市创客火科技有限公司 Route planning method, electronic device and computer-readable storage medium
CN112162566B (en) * 2020-09-04 2024-01-16 深圳市创客火科技有限公司 Route planning method, electronic device and computer readable storage medium
TWI750859B (en) * 2020-10-22 2021-12-21 長榮大學 UAV swarm flying to carry out the inspection method of the airport runway and the periphery of the field
CN112484717A (en) * 2020-11-23 2021-03-12 国网福建省电力有限公司 Unmanned aerial vehicle oblique photography route planning method and computer readable storage medium
CN112686172A (en) * 2020-12-31 2021-04-20 上海微波技术研究所(中国电子科技集团公司第五十研究所) Method and device for detecting foreign matters on airport runway and storage medium
CN112860946A (en) * 2021-01-18 2021-05-28 四川弘和通讯有限公司 Method and system for converting video image information into geographic information
CN112860946B (en) * 2021-01-18 2023-04-07 四川弘和通讯集团有限公司 Method and system for converting video image information into geographic information
CN116520853A (en) * 2023-06-08 2023-08-01 江苏商贸职业学院 Agricultural inspection robot based on artificial intelligence technology
CN117372960A (en) * 2023-10-24 2024-01-09 宁夏隆合科技有限公司 Unmanned aerial vehicle inspection target attribute automatic extraction method combining scene relation

Also Published As

Publication number Publication date
CN111060076B (en) 2021-10-08

Similar Documents

Publication Publication Date Title
CN111060076B (en) Method for planning routing of unmanned aerial vehicle inspection path and detecting foreign matters in airport flight area
Spencer Jr et al. Advances in computer vision-based civil infrastructure inspection and monitoring
Lam et al. xview: Objects in context in overhead imagery
Vetrivel et al. Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning
Zai et al. 3-D road boundary extraction from mobile laser scanning data via supervoxels and graph cuts
Vetrivel et al. Identification of damage in buildings based on gaps in 3D point clouds from very high resolution oblique airborne images
RU2669656C2 (en) Condition detection with use of image processing
Pu et al. Recognizing basic structures from mobile laser scanning data for road inventory studies
US7528938B2 (en) Geospatial image change detecting system and associated methods
Yao et al. Extraction and motion estimation of vehicles in single-pass airborne LiDAR data towards urban traffic analysis
US7630797B2 (en) Accuracy enhancing system for geospatial collection value of an image sensor aboard an airborne platform and associated methods
CN110866887A (en) Target situation fusion sensing method and system based on multiple sensors
El-Halawany et al. Detection of road poles from mobile terrestrial laser scanner point cloud
CN112712535B (en) Mask-RCNN landslide segmentation method based on simulation difficult sample
US20070162194A1 (en) Geospatial image change detecting system with environmental enhancement and associated methods
US20070162195A1 (en) Environmental condition detecting system using geospatial images and associated methods
Benedek et al. Positioning and perception in LIDAR point clouds
Balaska et al. Enhancing satellite semantic maps with ground-level imagery
Mohammadi et al. An object based framework for building change analysis using 2D and 3D information of high resolution satellite images
Nagy et al. ChangeGAN: A deep network for change detection in coarsely registered point clouds
CN114358133B (en) Method for detecting looped frames based on semantic-assisted binocular vision SLAM
CN115115954A (en) Intelligent identification method for pine nematode disease area color-changing standing trees based on unmanned aerial vehicle remote sensing
Duarte et al. Damage detection on building façades using multi-temporal aerial oblique imagery
Sun et al. Building outline extraction from aerial imagery and digital surface model with a frame field learning framework
Comert et al. Rapid mapping of forested landslide from ultra-high resolution unmanned aerial vehicle data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant