CN112348105B - Unmanned aerial vehicle image matching optimization method - Google Patents

Unmanned aerial vehicle image matching optimization method

Info

Publication number
CN112348105B
CN112348105B (application CN202011289613.3A)
Authority
CN
China
Prior art keywords
matrix
image
unmanned aerial
points
aerial vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011289613.3A
Other languages
Chinese (zh)
Other versions
CN112348105A (en)
Inventor
尹辉
来楷迪
张忠辉
李鹏程
方霖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Guimu Robot Co ltd
Guizhou Environmental Engineering Assessment Center
Original Assignee
Chengdu Guimu Robot Co ltd
Guizhou Environmental Engineering Assessment Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Guimu Robot Co ltd, Guizhou Environmental Engineering Assessment Center filed Critical Chengdu Guimu Robot Co ltd
Priority to CN202011289613.3A priority Critical patent/CN112348105B/en
Publication of CN112348105A publication Critical patent/CN112348105A/en
Application granted granted Critical
Publication of CN112348105B publication Critical patent/CN112348105B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an unmanned aerial vehicle image matching optimization method, which comprises the following steps: detecting feature points of an unmanned aerial vehicle aerial image A and an unmanned aerial vehicle aerial image B to be matched with the SURF algorithm, setting the Hessian matrix threshold of the SURF algorithm, and obtaining N pairs of matching points; traversing the matching points in image A and image B, screening M pairs of matching points uniformly distributed over image A and image B, and constructing an H matrix; obtaining an H₁ matrix and an H₂ matrix with the RANSAC algorithm and the LMEDS algorithm respectively; evaluating the H₁ matrix and the H₂ matrix to obtain the optimal matrix; and checking the optimal matrix to obtain the most reasonable H matrix. Through this scheme, the application has the advantages of simple logic, low computational workload and accurate matching, and has high practical and popularization value in the technical field of image processing.

Description

Unmanned aerial vehicle image matching optimization method
Technical Field
The application relates to the technical field of image processing, in particular to an unmanned aerial vehicle image matching optimization method.
Background
With the rapid development and popularization of multi-axis unmanned aerial vehicles, unmanned aerial vehicle aerial photographing technology is applied to more and more fields, and matching and recognition of unmanned aerial vehicle aerial photographing images are also hot spots for research. Image matching and recognition are comprehensive operations of various technologies, and are widely applied to various fields such as artificial intelligence, safety protection, unmanned aerial vehicle aerial photography, auxiliary driving, image remote sensing, computer vision and the like.
At present, unmanned aerial vehicle images are mostly matched with feature points in the prior art. For example, the Chinese patent application No. 201810735162.8, entitled "A rapid unmanned aerial vehicle image matching method based on fused local features", adopts the following scheme: the reference image and the image to be matched are each partitioned with a 3×3 grid, dividing each image into 9 subregions, and invariant features are extracted within the subregions; feature vectors are extracted from the invariant feature regions with feature descriptors; initial homonymous features are judged by comparing the similarity between the feature vectors to obtain stable initial matches; the number of matching points in each grid cell is counted, and regional MSER feature matching is carried out for cells whose number of matching points is smaller than a threshold; mismatched point pairs are deleted by the affine invariance of the Mahalanobis distance. This technology performs direct image matching by matching the feature points. In addition, in Wang Zhenhua's "A matching method applied to aerial images of unmanned aerial vehicles", the image data and GPS information transmitted by the unmanned aerial vehicle over 4G are first received; secondly, the corresponding geographic-information feature point data are read from a database according to the GPS information; finally, the feature points are used to complete image matching. In order to accelerate the matching, SURF features are adopted instead of SIFT features.
The above technologies use feature points for direct matching and delete the mismatched points. Because the matched feature points are randomly located, uniformly distributed feature points cannot be obtained, and the matched images suffer from repetition and mismatching. If the H matrix is calculated only from local feature points, it is very easy to obtain an abnormal solution, resulting in excessive distortion of the image. Unmanned aerial vehicle image acquisition requires an overlap of 70%-80% between images, and the feature points should be distributed throughout the image.
Therefore, there is an urgent need for an unmanned aerial vehicle image matching optimization method with simple logic, low computational workload and accurate matching.
Disclosure of Invention
Aiming at the problems, the application aims to provide an unmanned aerial vehicle image matching optimization method, which adopts the following technical scheme:
an unmanned aerial vehicle image matching optimization method comprises the following steps:
detecting characteristic points of an unmanned aerial vehicle aerial image A and an unmanned aerial vehicle aerial image B to be matched by adopting a SURF algorithm, setting a Hessian matrix threshold of the SURF algorithm, and obtaining N pairs of matching points; the N is a natural number greater than or equal to 4;
traversing the matching points in the image A and the image B, screening M pairs of matching points uniformly distributed in the image A and the image B, and constructing to obtain an H matrix; m is less than or equal to N;
the expression of the H matrix is as follows:
        | h₀₀ h₀₁ h₀₂ |
    H = | h₁₀ h₁₁ h₁₂ |
        | h₂₀ h₂₁ h₂₂ |
where h₂₂ = 1 under homogeneous coordinates, and the H₁ matrix and the H₂ matrix are calculated with the RANSAC algorithm and the LMEDS algorithm respectively;
evaluating the H₁ matrix and the H₂ matrix to obtain the optimal matrix;
and checking the optimal matrix to obtain the most reasonable H matrix.
Further, the evaluating of the H₁ matrix and the H₂ matrix to obtain the optimal matrix comprises the following steps:
if one of the H₁ matrix and the H₂ matrix satisfies |h₂₀ + h₂₁| > 0.0005, the corresponding matrix is eliminated;
if neither the H₁ matrix nor the H₂ matrix is eliminated, the matrix with the smallest value of ||h₀₀| + |h₀₁| + |h₁₀| + |h₁₁| − 2| is selected as the optimal matrix.
Further, the checking of the optimal matrix to obtain the most reasonable H matrix comprises the following steps:
according to the optimal matrix, the conversion relation before and after coordinate transformation of any image is obtained, with the expression:
P′ᵢ = H · Pᵢ
where the four corner points of the original image have homogeneous coordinates P₁(0, 0, 1), P₂(w, 0, 1), P₃(w, h, 1), P₄(0, h, 1); w represents the original width of the image and h represents the original height of the image; the four transformed points are P′ᵢ (i = 1, ..., 4).
The lengths of the four edges after the image transformation are obtained as:
d₁ = ||P′₁ − P′₂||, d₂ = ||P′₂ − P′₃||, d₃ = ||P′₃ − P′₄||, d₄ = ||P′₄ − P′₁||
where || · || represents the Euclidean distance between two points.
If the length of any of the four edges changes by more than a factor of 2 after the image transformation, the H matrix is eliminated, so as to obtain the most reasonable H matrix.
Compared with the prior art, the application has the following beneficial effects:
(1) Pairs of matching points are obtained by setting the Hessian matrix threshold parameter; uniformly distributed feature matching points are then selected by traversing the matched feature point pairs, which ensures reliable matching;
(2) The application adopts multiple algorithms to calculate the H matrix, and then evaluates and checks it. The algorithms used to calculate the H matrix are all iterative fits and do not necessarily yield the optimal solution, so calculating the H matrix in the spirit of ensemble learning picks the best among the candidates and improves the accuracy of the final match. In addition, if an H matrix solved by a single algorithm is used for projection transformation, large deformation may be produced, which clearly does not match the scene captured by the unmanned aerial vehicle and would introduce large errors into subsequent image stitching. According to different application scenes, different evaluation criteria are adopted to evaluate the H matrix, which makes it convenient to screen out a better H matrix.
In conclusion, the method has the advantages of simple logic, less calculation workload, accurate matching and the like, and has high practical value and popularization value in the technical field of image processing.
Drawings
For a clearer description of the technical solutions of the embodiments of the present application, the drawings to be used in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope of protection, and other related drawings may be obtained according to these drawings without the need of inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of feature point screening according to the present application (part 1).
Fig. 2 is a schematic diagram of feature point screening according to the present application (part 2).
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application will be further described with reference to the accompanying drawings and examples, which include, but are not limited to, the following examples. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Examples
As shown in fig. 1 to 2, the present embodiment provides an unmanned aerial vehicle image matching optimization method, which includes the following steps:
Step one, matching with a pyramid SURF method:
In this embodiment, images A and B are matched. First, stricter SURF feature point detection is used, with the Hessian matrix threshold parameter set to 1200. If too few feature points are found, looser SURF feature point detection is used and the Hessian matrix threshold parameter is gradually reduced, e.g., set to 800 and then 400. In this embodiment there are N pairs of matching points in total. Pts1 denotes the feature point set of image A and Pts2 denotes the feature point set of image B. If N ≥ 4, continue; if N < 4, re-match with a lower Hessian threshold. A sketch of this step is given below.
Step two, screen M pairs of matching points so that the feature points are uniformly distributed over the image: divide the image into 16 × 16 regions; traverse these regions repeatedly, taking one matching point from each region and skipping regions that contain no points, until M points have been taken. In this embodiment M = 50 is chosen; if N < M, all N pairs are selected. A sketch of this screening follows.
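A minimal sketch of the 16 × 16 grid screening, assuming the matched point lists from step one in pixel coordinates of image A; the round-robin traversal order and the helper name screen_uniform_points are assumptions consistent with the text.

```python
def screen_uniform_points(pts1, pts2, img_w, img_h, m=50, grid=16):
    """Take matching pairs from a 16 x 16 grid in round-robin order so that
    the selected points are roughly uniformly spread over image A."""
    cells = {}
    for p1, p2 in zip(pts1, pts2):
        cx = min(int(p1[0] * grid / img_w), grid - 1)
        cy = min(int(p1[1] * grid / img_h), grid - 1)
        cells.setdefault((cx, cy), []).append((p1, p2))

    selected = []
    while len(selected) < m and any(cells.values()):
        for key in list(cells.keys()):
            if cells[key] and len(selected) < m:
                selected.append(cells[key].pop(0))  # one point per non-empty cell, then repeat
    return [s[0] for s in selected], [s[1] for s in selected]
```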
Step three, calculate H matrices with multiple algorithms:
The expression of the H matrix is:
        | h₀₀ h₀₁ h₀₂ |
    H = | h₁₀ h₁₁ h₁₂ |
        | h₂₀ h₂₁ h₂₂ |
where h₂₂ = 1 under homogeneous coordinates; the H₁ matrix and the H₂ matrix are calculated with the RANSAC algorithm and the LMEDS algorithm respectively (a sketch is given below).
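A minimal sketch of step three, assuming OpenCV's findHomography is used to realize the RANSAC and LMEDS fits; the reprojection threshold of 3.0 is an assumption, as the patent does not give one.

```python
import numpy as np
import cv2

def estimate_homographies(pts1, pts2):
    """Estimate H1 with RANSAC and H2 with LMEDS from the screened pairs."""
    src = np.float32(pts1).reshape(-1, 1, 2)
    dst = np.float32(pts2).reshape(-1, 1, 2)
    h1, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # H1 via RANSAC
    h2, _ = cv2.findHomography(src, dst, cv2.LMEDS)        # H2 via LMEDS
    return h1, h2  # 3x3 arrays, normalized so that h22 = 1
```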
Step four, evaluate the H matrices and screen the optimal H:
(1) Shear deformation
If a given H matrix satisfies |h₂₀ + h₂₁| > 0.0005, reject it;
(2) Torsional deformation
If neither the H₁ matrix nor the H₂ matrix is eliminated, the matrix with the smallest value of ||h₀₀| + |h₀₁| + |h₁₀| + |h₁₁| − 2| is selected as the optimal matrix. A sketch of this evaluation follows.
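A minimal sketch of the evaluation, assuming the H matrices are 3 × 3 NumPy arrays indexed as h[row, col] (so h[2, 0] is h₂₀); the helper name select_best_homography is an assumption.

```python
def select_best_homography(h1, h2, shear_limit=0.0005):
    """Apply the shear check, then pick the matrix with the smallest torsion score."""
    candidates = [h for h in (h1, h2) if h is not None
                  and abs(h[2, 0] + h[2, 1]) <= shear_limit]  # shear deformation check
    if not candidates:
        return None

    def torsion(h):
        # distance of the upper-left 2x2 block's absolute sum from 2
        return abs(abs(h[0, 0]) + abs(h[0, 1]) + abs(h[1, 0]) + abs(h[1, 1]) - 2)

    return min(candidates, key=torsion)
```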
Step five, check the H matrix:
(1) According to the optimal matrix, the conversion relation before and after coordinate transformation of any image is obtained, with the expression:
P′ᵢ = H · Pᵢ
where the four corner points of the original image have homogeneous coordinates P₁(0, 0, 1), P₂(w, 0, 1), P₃(w, h, 1), P₄(0, h, 1); w represents the original width of the image and h represents the original height of the image; the four transformed points are P′ᵢ (i = 1, ..., 4);
(2) The lengths of the four edges after the image transformation are obtained as:
d₁ = ||P′₁ − P′₂||, d₂ = ||P′₂ − P′₃||, d₃ = ||P′₃ − P′₄||, d₄ = ||P′₄ − P′₁||
where || · || represents the Euclidean distance between two points;
(3) If the length of any of the four edges changes by more than a factor of 2 after the image transformation, the H matrix is eliminated, so as to obtain the most reasonable H matrix. That is:
if d₁/w > 2 or d₁/w < 0.5, eliminate;
if d₂/h > 2 or d₂/h < 0.5, eliminate;
if d₃/w > 2 or d₃/w < 0.5, eliminate;
if d₄/h > 2 or d₄/h < 0.5, eliminate.
A sketch of this check is given below.
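A minimal sketch of the check, assuming the optimal matrix from step four and the corner and edge conventions defined above; the helper name check_homography is an assumption, and the function returns None when the matrix is rejected.

```python
import numpy as np

def check_homography(hom, w, h):
    """Reject the candidate H matrix if any warped image edge changes length
    by more than a factor of 2 relative to the original width/height."""
    if hom is None:
        return None
    corners = np.array([[0, 0, 1], [w, 0, 1], [w, h, 1], [0, h, 1]], dtype=float).T
    warped = hom @ corners
    warped = (warped[:2] / warped[2]).T                    # back to pixel coordinates
    d = [np.linalg.norm(warped[i] - warped[(i + 1) % 4]) for i in range(4)]
    refs = [w, h, w, h]                                    # d1, d3 compared to w; d2, d4 to h
    for length, ref in zip(d, refs):
        if length / ref > 2 or length / ref < 0.5:
            return None
    return hom
```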
in this embodiment, through the first step to the fifth step, the most reasonable H matrix is finally obtained.
The above embodiments are only preferred embodiments of the present application and are not intended to limit the scope of the present application, but all changes made by adopting the design principle of the present application and performing non-creative work on the basis thereof shall fall within the scope of the present application.

Claims (3)

1. The unmanned aerial vehicle image matching optimization method is characterized by comprising the following steps of:
detecting characteristic points of an unmanned aerial vehicle aerial image A and an unmanned aerial vehicle aerial image B to be matched by adopting a SURF algorithm, setting a Hessian matrix threshold of the SURF algorithm, and obtaining N pairs of matching points; the N is a natural number greater than or equal to 4;
traversing the matching points in the image A and the image B, screening M pairs of matching points uniformly distributed in the image A and the image B, and constructing to obtain an H matrix; m is less than or equal to N;
the expression of the H matrix is as follows:
        | h₀₀ h₀₁ h₀₂ |
    H = | h₁₀ h₁₁ h₁₂ |
        | h₂₀ h₂₁ h₂₂ |
where h₂₂ = 1 under homogeneous coordinates, and the H₁ matrix and the H₂ matrix are obtained with the RANSAC algorithm and the LMEDS algorithm respectively;
evaluating the H₁ matrix and the H₂ matrix to obtain the optimal matrix;
and checking the optimal matrix to obtain the most reasonable H matrix.
2. The unmanned aerial vehicle image matching optimization method of claim 1, wherein the evaluating of the H₁ matrix and the H₂ matrix to obtain the optimal matrix comprises the following steps:
if one of the H₁ matrix and the H₂ matrix satisfies |h₂₀ + h₂₁| > 0.0005, the corresponding matrix is eliminated;
if neither the H₁ matrix nor the H₂ matrix is eliminated, the matrix with the smallest value of ||h₀₀| + |h₀₁| + |h₁₀| + |h₁₁| − 2| is selected as the optimal matrix.
3. The unmanned aerial vehicle image matching optimization method according to claim 1, wherein the verifying the optimal matrix to obtain the most reasonable H matrix comprises the following steps:
according to the optimal matrix, the conversion relation before and after coordinate transformation of any image is obtained, with the expression:
P′ᵢ = H · Pᵢ
wherein the four corner points of the original image have homogeneous coordinates P₁(0, 0, 1), P₂(w, 0, 1), P₃(w, h, 1), P₄(0, h, 1); w represents the original width of the image and h represents the original height of the image; the four transformed points are P′ᵢ, i = 1, ..., 4;
the lengths of the four edges after the image transformation are obtained as:
d₁ = ||P′₁ − P′₂||, d₂ = ||P′₂ − P′₃||, d₃ = ||P′₃ − P′₄||, d₄ = ||P′₄ − P′₁||
wherein || · || represents the Euclidean distance between two points;
if the length of any of the four edges changes by more than a factor of 2 after the image transformation, the H matrix is eliminated, so as to obtain the most reasonable H matrix.
CN202011289613.3A 2020-11-17 2020-11-17 Unmanned aerial vehicle image matching optimization method Active CN112348105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011289613.3A CN112348105B (en) 2020-11-17 2020-11-17 Unmanned aerial vehicle image matching optimization method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011289613.3A CN112348105B (en) 2020-11-17 2020-11-17 Unmanned aerial vehicle image matching optimization method

Publications (2)

Publication Number Publication Date
CN112348105A CN112348105A (en) 2021-02-09
CN112348105B true CN112348105B (en) 2023-09-01

Family

ID=74364041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011289613.3A Active CN112348105B (en) 2020-11-17 2020-11-17 Unmanned aerial vehicle image matching optimization method

Country Status (1)

Country Link
CN (1) CN112348105B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574421A (en) * 2015-01-29 2015-04-29 北方工业大学 Large-breadth small-overlapping-area high-precision multispectral image registration method and device
CN106940876A (en) * 2017-02-21 2017-07-11 华东师范大学 A kind of quick unmanned plane merging algorithm for images based on SURF
CN108961162A (en) * 2018-03-12 2018-12-07 北京林业大学 A kind of unmanned plane forest zone Aerial Images joining method and system
CN109697692A (en) * 2018-12-29 2019-04-30 安徽大学 One kind being based on the similar feature matching method of partial structurtes
WO2020199424A1 (en) * 2019-04-01 2020-10-08 苏州中晟宏芯信息科技有限公司 Optimal h-matrix generation method and device
CN110458183A (en) * 2019-06-25 2019-11-15 上海圭目机器人有限公司 A kind of characteristic matching optimization algorithm of image adaptive

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Unmanned aerial vehicle image matching combining the SURF algorithm and a homography matrix; Wang Xiaohong et al.; Bulletin of Surveying and Mapping (07); full text *

Also Published As

Publication number Publication date
CN112348105A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
CN108960211B (en) Multi-target human body posture detection method and system
CN110414507B (en) License plate recognition method and device, computer equipment and storage medium
WO2022126377A1 (en) Traffic lane line detection method and apparatus, and terminal device and readable storage medium
CN109960742B (en) Local information searching method and device
CN109035292B (en) Moving target detection method and device based on deep learning
CN110175615B (en) Model training method, domain-adaptive visual position identification method and device
CN111950453A (en) Optional-shape text recognition method based on selective attention mechanism
CN110689043A (en) Vehicle fine granularity identification method and device based on multiple attention mechanism
CN111709313B (en) Pedestrian re-identification method based on local and channel combination characteristics
CN112733885A (en) Point cloud identification model determining method and point cloud identification method and device
CN108230330B (en) Method for quickly segmenting highway pavement and positioning camera
CN116188999B (en) Small target detection method based on visible light and infrared image data fusion
CN112634369A (en) Space and or graph model generation method and device, electronic equipment and storage medium
CN113723377A (en) Traffic sign detection method based on LD-SSD network
Farag A lightweight vehicle detection and tracking technique for advanced driving assistance systems
CN113095152A (en) Lane line detection method and system based on regression
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111898428A (en) Unmanned aerial vehicle feature point matching method based on ORB
US20200005078A1 (en) Content aware forensic detection of image manipulations
CN110969164A (en) Low-illumination imaging license plate recognition method and device based on deep learning end-to-end
CN111199558A (en) Image matching method based on deep learning
CN114332921A (en) Pedestrian detection method based on improved clustering algorithm for Faster R-CNN network
CN113989604A (en) Tire DOT information identification method based on end-to-end deep learning
CN115375917A (en) Target edge feature extraction method, device, terminal and storage medium
CN116246119A (en) 3D target detection method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant