CN117274391A - Digital map matching target positioning method based on graph neural network - Google Patents

Digital map matching target positioning method based on graph neural network

Info

Publication number
CN117274391A
Authority
CN
China
Prior art keywords
target
image
digital map
measured
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311566300.1A
Other languages
Chinese (zh)
Other versions
CN117274391B (en)
Inventor
孙伯玉
张帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Tongshi Optoelectronic Technology Co ltd
Original Assignee
Changchun Tongshi Optoelectronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Tongshi Optoelectronic Technology Co ltd
Priority to CN202311566300.1A priority Critical patent/CN117274391B/en
Publication of CN117274391A publication Critical patent/CN117274391A/en
Application granted granted Critical
Publication of CN117274391B publication Critical patent/CN117274391B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/02 Knowledge representation; Symbolic representation
    • G06N 5/022 Knowledge engineering; Knowledge acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/40 Image enhancement or restoration using histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a digital map matching target positioning method based on a graph neural network, belonging to the field of target positioning, which comprises the following steps: acquiring a data set and a measured image, wherein the data set comprises a digital map image and a digital elevation model file of the area to be measured; performing a preliminary calculation of the position of the target to be measured based on an earth ellipsoid model to obtain a rough position, and searching the data set for the target digital map image and target digital elevation model file corresponding to the measured image; preprocessing the measured image with a dark channel defogging algorithm and a histogram enhancement algorithm; converting the preprocessed measured image into a target image at a top-down orthographic viewing angle through image orthographic transformation; and matching the target file and the target image with a matching algorithm based on the graph neural network to obtain an accurate position. The method provided by the application can be applied to target positioning by a large unmanned aerial vehicle under detection conditions of long distance, large inclination angle and high-altitude flight.

Description

Digital map matching target positioning method based on graph neural network
Technical Field
The application relates to a digital map matching target positioning method based on a graph neural network, and belongs to the field of target positioning.
Background
In the technical field of unmanned aerial vehicle target positioning, there are methods that position targets by image matching, mainly using traditional feature point matching algorithms such as FAST and SURF to match a measured image against a reference image and thereby acquire accurate target position information. These methods work well under detection conditions of short distance, small inclination angle and low-altitude flight, but when applied to the photoelectric platform of a large unmanned aerial vehicle, the detection conditions of long distance, large inclination angle and high-altitude flight often lead to a low matching success rate. A new image matching target positioning method suitable for the photoelectric platform of a large unmanned aerial vehicle therefore needs to be developed.
Traditional image matching target positioning methods mainly suffer from the following problems. On the one hand, during high-altitude, long-distance detection, image clarity is degraded by various influences (imperfect focus, fog, low visibility, and the like); at the same time, because the reference image is not acquired in real time, the scenes in the measured image and the reference image may be inconsistent. These factors mean that the measured image and the reference image may share few corresponding feature points. On the other hand, for a measured image taken at a large inclination angle, the viewing angle differs greatly from that of the reference image (an image at a nadir viewing angle), so traditional matching algorithms have a low matching success rate in this case. These two points limit the application of traditional methods to positioning from large unmanned aerial vehicles.
Disclosure of Invention
The purpose of the application is to provide a digital map matching target positioning method based on a graph neural network, which greatly improves the image matching success rate and the target positioning precision and is suitable for the photoelectric platform of a large unmanned aerial vehicle.
In order to achieve the above object, a first aspect of the present application provides a digital map matching target positioning method based on a graph neural network, including:
acquiring a data set and actual measurement data, wherein the data set comprises a digital map image and a digital elevation model file of an area to be measured, and the actual measurement data comprises an actual measurement image containing an object to be measured in the flight process;
based on an earth ellipsoid model, performing preliminary calculation on the position of the target to be detected to obtain the approximate position of the target to be detected;
searching a target file corresponding to the actual measurement image in the data set according to the approximate position, wherein the target file comprises a target digital map image and a target digital elevation model file;
preprocessing the actual measurement image through a dark channel defogging algorithm and a histogram enhancement algorithm;
converting the preprocessed actual measurement image into a target image at a top-down orthographic viewing angle through image orthographic transformation;
and matching the target file with the target image through a matching algorithm based on a graph neural network to obtain the accurate position of the target to be detected.
In one embodiment, the acquiring the data set includes:
acquiring a digital map containing an area to be detected and cutting the digital map into a group of digital map images with preset resolution, wherein each pixel in each digital map image represents a preset longitude and latitude range;
and cutting and splicing the standard digital elevation model files according to the digital map images to obtain digital elevation model files corresponding to the preset resolutions of the digital map images one by one, and marking the longitude and latitude ranges of the digital elevation model files to obtain the data set.
In one embodiment, the measured data further includes: position and attitude angle information at a time corresponding to the actual measurement image;
the obtaining of the measured data comprises:
in the flight process, when a target to be detected appears in the field of view, acquiring an actual measurement image by using a photoelectric platform;
and acquiring position and attitude angle information at the moment corresponding to the measured image by using an inertial navigation device and an encoder.
In one embodiment, the preliminary calculation of the position of the target to be measured includes:
based on an earth ellipsoid model, the position of the target to be measured is initially calculated by utilizing the acquired position and attitude angle information and the pixel position of the target to be measured in an actual measurement image, and the approximate position of the target to be measured is obtained, wherein the approximate position comprises the approximate longitude, latitude and height of the target to be measured.
In one embodiment, the searching the target file corresponding to the measured image in the data set includes:
and judging whether the digital map image with the longitude and latitude range meeting the preset condition exists in the data set according to the approximate position, if so, judging the digital map image to be a target digital map image corresponding to the actually measured image, and simultaneously finding out a target digital elevation model file with the same longitude and latitude range as the target digital map image.
In one embodiment, the preprocessing of the measured image by a dark channel defogging algorithm and a histogram enhancement algorithm comprises:
defogging the measured image with a dark channel defogging algorithm, then further processing it with an image sharpening algorithm;
and enhancing the further-processed measured image by histogram equalization using the CLAHE algorithm.
In one embodiment, the converting of the preprocessed measured image into the target image at a top-down orthographic viewing angle through image orthographic transformation includes:
calculating an orthographic transformation matrix according to the measured data, calculating the coordinates in a map coordinate system of the object point corresponding to each pixel in the preprocessed measured image, and carrying out orthographic transformation based on the orthographic transformation matrix to obtain a target image at a top-down orthographic viewing angle.
In one embodiment, said matching said target file and said target image by a matching algorithm based on a graph neural network comprises:
scaling the target image to a scaled image with the same resolution as the target file, setting initial parameters, and matching the scaled image with the target file by using a SuperGlue feature point matching algorithm based on a graph neural network to obtain a group of matched pixel coordinates;
when the number of matching pairs is smaller than a preset number, modifying the initial parameters and re-matching; when the number of matching pairs is larger than the preset number, calculating a homography matrix from the matching pairs, calculating the corresponding pixel coordinates of the target to be measured in the target digital map image according to the homography matrix, obtaining the specific longitude and latitude of the target to be measured from the longitude and latitude range of the target digital map image, and obtaining the elevation information of the target to be measured from the target digital elevation model file.
A second aspect of the present application provides an electronic device, comprising: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the first aspect or any implementation of the first aspect as described above when the computer program is executed.
A third aspect of the present application provides a computer readable storage medium storing a computer program which when executed by a processor performs the steps of the first aspect or any implementation of the first aspect.
From the above, the application provides a digital map matching target positioning method based on a graph neural network. By adding image preprocessing and image orthographic transformation steps and applying a matching algorithm based on the graph neural network, the method greatly improves the image matching success rate, is suitable for the photoelectric platform of a large unmanned aerial vehicle, realizes target positioning by a large unmanned aerial vehicle under detection conditions of long-distance, large-inclination-angle and high-altitude flight, and obtains highly accurate positioning results at long distance and large inclination angle.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following description will briefly introduce the drawings that are needed in the embodiments or the description of the prior art, it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a digital map matching target positioning method provided in an embodiment of the present application;
FIG. 2 is a flowchart of image preprocessing and orthographic transformation according to an embodiment of the present disclosure;
fig. 3 is an iteration flowchart in a matching algorithm provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The following description of the embodiments of the present application, taken in conjunction with the accompanying drawings, clearly and fully describes the technical solutions of the embodiments of the present application, and it is evident that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, but the present application may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present application is not limited to the specific embodiments disclosed below.
Embodiment 1
The embodiment of the application provides a digital map matching target positioning method based on a graph neural network, as shown in fig. 1, the method comprises the following steps:
s100, acquiring a data set and actual measurement data, wherein the data set comprises a digital map image and a digital elevation model file of an area to be measured, and the actual measurement data comprises an actual measurement image containing an object to be measured in the flight process;
optionally, the acquiring the data set includes: acquiring a digital map containing an area to be detected and cutting the digital map into a group of digital map images with preset resolution, wherein each pixel in each digital map image represents a preset longitude and latitude range; and cutting and splicing the standard digital elevation model files according to the digital map images to obtain digital elevation model files corresponding to the preset resolutions of the digital map images one by one, marking the longitude and latitude ranges of the digital elevation model files, numbering according to the longitude and latitude ranges, and obtaining the data set.
In one embodiment, the digital map containing the area to be measured is cut into a set of digital map images at the preset resolution of 1280×960, where each pixel represents a longitude and latitude range of 0.00001°; approximately 50×50 digital map images are required. Corresponding digital elevation model files are produced at the same time. Because the default resolution of a digital elevation model file is 3601×3601, the files need to be cut and spliced to match the resolution of the digital map images, yielding 1280×960 files consistent with the longitude and latitude ranges of the digital map images. In this way 50×50 digital elevation model files in one-to-one correspondence with the digital map images are produced, the longitude and latitude range of each digital elevation model file is labeled, and everything is finally integrated into one data set.
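To make the tiling step concrete, the following sketch (not from the patent; the helper names, record layout and the per-pixel scale constant are illustrative assumptions) cuts a georeferenced map mosaic into 1280×960 tiles tagged with their longitude and latitude ranges:

```python
import numpy as np

TILE_W, TILE_H = 1280, 960          # preset tile resolution
DEG_PER_PIXEL = 0.00001             # lat/lon span represented by one pixel

def tile_map(map_img: np.ndarray, origin_lon: float, origin_lat: float):
    """Cut a map mosaic into tiles tagged with their lat/lon ranges.

    origin_lon/origin_lat give the top-left (north-west) corner of the mosaic.
    """
    tiles = []
    rows = map_img.shape[0] // TILE_H
    cols = map_img.shape[1] // TILE_W
    for r in range(rows):
        for c in range(cols):
            img = map_img[r*TILE_H:(r+1)*TILE_H, c*TILE_W:(c+1)*TILE_W]
            lon_min = origin_lon + c * TILE_W * DEG_PER_PIXEL
            lat_max = origin_lat - r * TILE_H * DEG_PER_PIXEL  # row 0 is north
            tiles.append({
                "image": img,
                "lon_range": (lon_min, lon_min + TILE_W * DEG_PER_PIXEL),
                "lat_range": (lat_max - TILE_H * DEG_PER_PIXEL, lat_max),
            })
    return tiles
```

The DEM tiles would be cut the same way so that each map tile and its elevation file share one tagged range.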
Optionally, the acquiring of measured data includes: during flight, when a target to be measured (hereinafter, the target) appears in the field of view, a measured image is acquired with the photoelectric platform, and the position and attitude angle information at the moment corresponding to the current measured image is acquired with an inertial navigation device and an encoder.
In one embodiment, during flight the measured image is captured by an aerial camera carried on the photoelectric platform and stored by a recorder, while position and attitude angle information is acquired from the inertial navigation device and the encoder. Before data are acquired, the time sequences of the measured image and the other devices must be kept aligned, so the position and attitude angle information obtained from the inertial navigation device and the encoder is time-aligned with the image.
S200, performing preliminary calculation on the position of the target to be detected based on an earth ellipsoid model to obtain the approximate position of the target to be detected;
optionally, the performing preliminary calculation on the position of the target to be measured includes: based on an earth ellipsoid model, the position of the target to be measured is initially calculated by utilizing the acquired position and attitude angle information and the pixel position of the target to be measured in an actual measurement image, and the approximate position of the target to be measured is obtained, wherein the approximate position comprises the approximate longitude, latitude and height of the target to be measured.
In one embodiment, the target projects onto the detector at a point $p$ whose coordinates in the camera coordinate system are

$$p = \begin{pmatrix} d\,\Delta x \\ d\,\Delta y \\ f \end{pmatrix},$$

where $d$ is the pixel size of the detector, $\Delta x$ is the offset of the image horizontal axis with respect to the center, $\Delta y$ is the offset of the image vertical axis with respect to the center, and $f$ is the focal length of the imaging system.

The target projection point is converted to coordinates $P_e = (X_e, Y_e, Z_e)$ in the earth rectangular coordinate system:

$$P_e = C_g^e\, C_b^g\, C_c^b\, p + P_s,$$

where $C_c^b$ is the conversion from the camera coordinate system to the aircraft coordinate system, $C_b^g$ is the conversion from the aircraft coordinate system to the geographic coordinate system, and $C_g^e$ is the conversion from the geographic coordinate system to the earth rectangular coordinate system.

Here $C_c^b$ is composed of a rotation matrix about the z-axis of the aircraft coordinate system through the platform azimuth $\alpha$ and a rotation matrix about the y-axis of the aircraft coordinate system through the platform pitch angle $\beta$. $C_b^g$ is composed of a rotation about the z-axis of the navigation coordinate system through the aircraft heading angle $\psi$, a rotation about the y-axis of the navigation coordinate system through the aircraft pitch angle $\theta$, and a rotation about the x-axis of the aircraft coordinate system through the aircraft roll angle $\gamma$. $C_g^e$ is composed of four rotation matrices from the navigation coordinate system to the earth rectangular coordinate system, parameterized by the aircraft altitude $H$, latitude $B$ and longitude $L$, the prime-vertical radius of curvature $N$ of the earth ellipsoid, and the first eccentricity $e$ of the earth ellipsoid.

The coordinates $P_s = (X_s, Y_s, Z_s)$ of the camera origin in the earth rectangular coordinate system satisfy

$$X_s = (N + H)\cos B\cos L,\qquad Y_s = (N + H)\cos B\sin L,\qquad Z_s = \bigl(N(1 - e^2) + H\bigr)\sin B,$$

where $H$ is the aircraft altitude and $B$, $L$ are the aircraft latitude and longitude.

For an ideal optical system, the object point, the image point and the projection center are collinear; that is, the target lies on the straight line connecting the target projection point and the origin of the camera coordinate system. The coordinates $P_t = (X_t, Y_t, Z_t)$ of the target in the earth rectangular coordinate system therefore satisfy

$$\frac{X_t - X_s}{X_e - X_s} = \frac{Y_t - Y_s}{Y_e - Y_s} = \frac{Z_t - Z_s}{Z_e - Z_s}.$$

Taking the target height $h$ as the average elevation of the area to be measured, the target coordinates must also satisfy the earth ellipsoid equation

$$\frac{X_t^2 + Y_t^2}{(a + h)^2} + \frac{Z_t^2}{(b + h)^2} = 1,$$

where $a + h$ and $b + h$ are the semi-major and semi-minor axes of the earth ellipsoid corresponding to the target height $h$, and $a = 6378137\,\mathrm{m}$ is the semi-major axis of the earth ellipsoid. Solving the line and ellipsoid equations simultaneously yields the target coordinates $(X_t, Y_t, Z_t)$ in the earth rectangular coordinate system, which are then converted back to geodetic values $(B_t, L_t, H_t)$, i.e. the approximate position of the target.
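As a numerical illustration of this step, the sketch below intersects a camera ray with the earth ellipsoid inflated by the mean terrain height $h$, which is exactly the line-ellipsoid system solved above. The function name and the WGS-84 constants are assumptions of the sketch, and the rotation chain that produces the ECEF line-of-sight vector is omitted:

```python
import numpy as np

A_WGS84 = 6378137.0                  # semi-major axis a (m)
B_WGS84 = 6356752.0                  # semi-minor axis b (m)

def ray_ellipsoid_intersect(cam_ecef, los_ecef, h=0.0):
    """First intersection of a ray with the ellipsoid of semi-axes (a+h, b+h)."""
    a, b = A_WGS84 + h, B_WGS84 + h
    d = los_ecef / np.linalg.norm(los_ecef)   # unit line-of-sight direction
    o = cam_ecef                              # camera origin in ECEF
    # Quadratic in t for (ox+t dx)^2/a^2 + (oy+t dy)^2/a^2 + (oz+t dz)^2/b^2 = 1
    A = (d[0]**2 + d[1]**2) / a**2 + d[2]**2 / b**2
    Bq = 2.0 * ((o[0]*d[0] + o[1]*d[1]) / a**2 + o[2]*d[2] / b**2)
    C = (o[0]**2 + o[1]**2) / a**2 + o[2]**2 / b**2 - 1.0
    disc = Bq**2 - 4.0 * A * C
    if disc < 0:
        return None                           # ray misses the ellipsoid
    t = (-Bq - np.sqrt(disc)) / (2.0 * A)     # nearer root = first hit
    return o + t * d                          # target position in ECEF
```

The returned ECEF point would then be converted to geodetic $(B_t, L_t, H_t)$ as described in the text.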
S300, searching a target file corresponding to the actual measurement image in the data set according to the approximate position, wherein the target file comprises a target digital map image and a target digital elevation model file;
optionally, the searching the target file corresponding to the measured image in the data set includes:
and judging whether the digital map image with the longitude and latitude range meeting the preset condition exists in the data set according to the approximate position, if so, judging the digital map image to be a target digital map image corresponding to the actually measured image, and simultaneously finding out a target digital elevation model file with the same longitude and latitude range as the target digital map image.
In one embodiment, the corresponding files are searched in the data set based on the approximate target position $(B_t, L_t)$. For each digital map image, it is judged whether the labeled longitude and latitude range satisfies

$$B_{\min} \le B_t \le B_{\max},\qquad L_{\min} \le L_t \le L_{\max},$$

where $B_{\min}$ is the minimum latitude in a single map, $B_{\max}$ the maximum latitude in a single map, $L_{\min}$ the minimum longitude in a single map, and $L_{\max}$ the maximum longitude in a single map. If the condition is satisfied, that image is determined to be the target digital map image corresponding to the measured image, and the target digital elevation model file with the same longitude and latitude range as the target digital map image is found at the same time.
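A minimal lookup over the tile records from the earlier tiling sketch (the dictionary field names are assumptions of that sketch) might read:

```python
def find_target_tile(tiles, lon_t, lat_t):
    """Return the tile whose tagged lat/lon range contains the coarse position."""
    for tile in tiles:
        lon_min, lon_max = tile["lon_range"]
        lat_min, lat_max = tile["lat_range"]
        if lon_min <= lon_t <= lon_max and lat_min <= lat_t <= lat_max:
            return tile
    return None   # coarse position falls outside the prepared data set
```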
Further, whether the maps adjacent to the target digital map image need to be taken into account can be decided according to the actual situation. If the computed target position lies sufficiently close to the map boundary, as judged by a condition involving the maximum altitude in a single map, the minimum altitude in a single map, the calculated target height, the platform azimuth, the platform pitch angle, the aircraft heading angle, and the circumference of the earth's equator, then the digital map images above, below, to the left and to the right of the corresponding target digital map image, together with the corresponding digital elevation model files, need to be added.
s400, preprocessing the actual measurement image through a dark channel defogging algorithm and a histogram enhancement algorithm;
optionally, as shown in fig. 2, the preprocessing the measured image by using a dark channel defogging algorithm and a histogram enhancement algorithm includes: defogging the actual measurement image through a dark channel defogging algorithm, and adopting an image sharpening algorithm to further process; and enhancing the measured image after further processing in a histogram equalization mode by adopting a CLAHE algorithm.
In one embodiment, image defogging is performed in real time using a modified version of the fast dark channel method. First a dark channel map is computed from the hazy image (i.e., the measured image); the transmittance and atmospheric light are then estimated from it, the transmittance is refined, and finally a clear defogged image is restored through the haze degradation model $I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr)$. Specifically, assume the atmospheric light value $A$ has been obtained and the transmittance $t(x)$ is constant within the filter window $\Omega(x)$. Dividing both sides of the degradation model by $A$ and applying the minimum operation gives

$$\min_{y \in \Omega(x)} \min_{c} \frac{I^c(y)}{A^c} = t(x) \min_{y \in \Omega(x)} \min_{c} \frac{J^c(y)}{A^c} + 1 - t(x),$$

where $c$ indexes the RGB channels, $J$ is the theoretical haze-free color image, and $I$ is the observed hazy color image. Minimum filtering defines the dark channel of the hazy image,

$$I^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \min_{c} I^c(y),$$

and according to the dark channel prior theory the dark channel of the haze-free image tends to zero, $J^{\mathrm{dark}}(x) \to 0$. Combining the above yields

$$t(x) = 1 - \min_{y \in \Omega(x)} \min_{c} \frac{I^c(y)}{A^c}.$$

Introducing a constant parameter $\omega$ into the above formula, the final transmittance is determined as

$$t(x) = 1 - \omega \min_{y \in \Omega(x)} \min_{c} \frac{I^c(y)}{A^c}.$$

In practice a lower threshold $t_0$ is set for the transmittance: if the determined transmittance is smaller than $t_0$, let $t(x) = t_0$. The final image restoration formula is

$$J(x) = \frac{I(x) - A}{\max\bigl(t(x),\, t_0\bigr)} + A,$$

where $J$ is the theoretical restored image and $I$ is the hazy image; the defogged image is obtained through the above steps.
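The derivation above reduces to a few array operations. The sketch below is a minimal single-image version using OpenCV; the window size, $\omega$ and $t_0$ are typical literature values rather than values fixed by this passage, and the transmittance-refinement step of the fast variant is omitted:

```python
import cv2
import numpy as np

def defog_dark_channel(img_bgr, win=15, omega=0.95, t0=0.1):
    img = img_bgr.astype(np.float64) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (win, win))
    # Dark channel: per-pixel channel minimum followed by a minimum filter.
    dark = cv2.erode(img.min(axis=2), kernel)
    # Atmospheric light A: mean colour of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmittance t(x) = 1 - omega * min-filtered min_c I_c / A_c, floored at t0.
    t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)
    t = np.maximum(t, t0)[..., None]
    # Restore J(x) = (I(x) - A) / t(x) + A.
    J = (img - A) / t + A
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)
```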
In one embodiment, the defogged image is then further processed using a multi-detail-layer image sharpening algorithm. First the defogged image is filtered with a 5×5 Gaussian kernel to obtain base layer 1, and with a 21×21 Gaussian kernel to obtain base layer 2. Detail layer 1 is obtained as the difference between the original image and base layer 1, and detail layer 2 as the difference between the original image and base layer 2. The original image and detail layers 1 and 2 are then fused through a weighted combination of the form

$$F = I_0 + k_1\,(I_0 - B_1) + k_2\,(I_0 - B_2),$$

where $F$ is the final sharpened image, $I_0$ is the original image (i.e., the defogged image), $B_1$ denotes base layer 1, $B_2$ denotes base layer 2, and $k_1$, $k_2$ are fusion weights.
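A short sketch of this two-detail-layer sharpening with the stated 5×5 and 21×21 Gaussian kernels follows; the fusion weights k1 and k2 are assumptions, since the source does not fix the coefficients:

```python
import cv2
import numpy as np

def sharpen_multi_detail(img, k1=0.5, k2=0.25):
    f = img.astype(np.float64)
    base1 = cv2.GaussianBlur(f, (5, 5), 0)       # base layer 1
    base2 = cv2.GaussianBlur(f, (21, 21), 0)     # base layer 2
    detail1 = f - base1                           # fine detail layer
    detail2 = f - base2                           # coarse detail layer
    out = f + k1 * detail1 + k2 * detail2         # fuse original and details
    return np.clip(out, 0, 255).astype(np.uint8)
```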
In one embodiment, histogram equalization enhancement is finally applied via the CLAHE algorithm, a contrast-limited adaptive histogram equalization algorithm. Consider a discrete grayscale image $X$ of size $H \times W$ pixels, where $H$ is the image height and $W$ the image width, with luminance values $i \in \{0, 1, \dots, L - 1\}$ and $L - 1$ the maximum luminance. Let $n_i$ be the frequency of occurrence of luminance $i$ and $n = HW$ the total number of pixels in the image. The corresponding normalized histogram distribution is

$$p(i) = \frac{n_i}{n},\qquad i = 0, 1, \dots, L - 1,$$

and the cumulative distribution function (CDF) corresponding to luminance value $i$ is

$$\mathrm{cdf}(i) = \sum_{j=0}^{i} p(j).$$

At the same time a transform function $T$ is constructed to generate a new image $Y = T(X)$, where both $i$ and $T(i)$ are luminance values, satisfying

$$T(i) = (L - 1)\,\mathrm{cdf}(i).$$

From the above it can be further derived that

$$\frac{dT(i)}{di} = (L - 1)\,p(i).$$

The above equation shows that the contrast gain is proportional to the probability of the corresponding luminance in the input. The contrast gain of the image is improved through the above steps.
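CLAHE is available directly in OpenCV. The sketch below applies it to the luminance channel only so that colors are preserved; the clip limit and tile grid size are common defaults, not values taken from the patent:

```python
import cv2

def enhance_clahe(img_bgr, clip=2.0, grid=(8, 8)):
    # Equalize only the L channel of the LAB representation.
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=grid)
    lab[..., 0] = clahe.apply(lab[..., 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```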
S500, converting the preprocessed measured image into a target image at a top-down orthographic viewing angle through image orthographic transformation;
Optionally, an orthographic transformation matrix is calculated according to the measured data, the coordinates in a map coordinate system of the object point corresponding to each pixel in the preprocessed measured image are calculated, and orthographic transformation is performed based on the orthographic transformation matrix to obtain a target image at a top-down orthographic viewing angle.
In one embodiment, as shown in FIG. 2, the pixel size and focal length are combined with position and attitude information to calculate an orthographic transformation matrix for the preprocessed measured image and to perform the image orthographic transformation. Specifically, a point $P_m$ in the map coordinate system and its ideal image point $p$ on the image plane (i.e., the preprocessed measured image) are related by

$$s\,\tilde p = K\,[R \mid t]\,\tilde P_m,$$

where $s$ is an arbitrary scale factor, $\tilde p$ and $\tilde P_m$ are homogeneous coordinates, and $K$ represents the camera interior orientation elements, calculated as

$$K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix},\qquad f_x = \frac{f}{d_x},\quad f_y = \frac{f}{d_y},$$

where $d_x$ and $d_y$ represent the physical size of an individual pixel in μm, $f$ is the focal length of the camera in mm, and $(c_x, c_y)$ are the principal point coordinates of the image in pixel units. $[R \mid t]$ comprises the camera exterior orientation elements and represents the conversion from the map coordinate system to the camera coordinate system: the translation $t$ is the offset between the geographic coordinate system and the map coordinate system, and the rotation $R$ is composed from the platform azimuth $\alpha$, the platform pitch angle $\beta$, the aircraft pitch angle $\theta$, the aircraft heading angle $\psi$, and the aircraft roll angle $\gamma$.
further obtain through transformation
Wherein the method comprises the steps ofObtained by the following formula
From the above formulas the required orthographic transformation matrix is obtained, the coordinates in the map coordinate system of the object point corresponding to each pixel of the measured image are calculated, and the measured image is converted into an image at a top-down orthographic viewing angle. The offset between the geographic coordinate system of the measured image and the map coordinate system at that moment is calculated from the GPS position corresponding to the captured measured image. The coordinates of the measured-image position in the earth rectangular coordinate system are

$$X = (N + H)\cos B\cos L,\qquad Y = (N + H)\cos B\sin L,\qquad Z = \bigl(N(1 - e^2) + H\bigr)\sin B,$$

where $(B, L)$ are the latitude and longitude coordinates of the aircraft and $H$ is its elevation; the semi-major axis of the earth ellipsoid is $a = 6378137\,\mathrm{m}$, the semi-minor axis of the earth ellipsoid is $b = 6356752\,\mathrm{m}$, the first eccentricity of the earth ellipsoid is $e = \sqrt{a^2 - b^2}/a$, and the prime-vertical radius of curvature of the earth ellipsoid is $N = a/\sqrt{1 - e^2\sin^2 B}$. The coordinates in the map coordinate system may then be calculated as follows:
$$P_m = R_e^m\,(P_e - O_e),$$

where $O_e$ is the coordinate of the origin of the map coordinate system in the earth rectangular coordinate system, and $R_e^m$ is the rotation matrix from the earth rectangular coordinate system to the map coordinate system, computed from the longitude and latitude coordinates $(B_0, L_0)$ of the origin of the map coordinate system.
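The two conversions just described, geodetic to earth-rectangular and earth-rectangular to the local map frame, are sketched below. The east-north-up convention for the map frame is an assumption of the sketch, not something the text specifies:

```python
import numpy as np

A_E, E2 = 6378137.0, 6.69437999014e-3   # WGS-84 semi-major axis, e^2

def geodetic_to_ecef(lat_deg, lon_deg, h):
    B, L = np.radians(lat_deg), np.radians(lon_deg)
    N = A_E / np.sqrt(1.0 - E2 * np.sin(B)**2)   # prime-vertical radius
    return np.array([(N + h) * np.cos(B) * np.cos(L),
                     (N + h) * np.cos(B) * np.sin(L),
                     (N * (1.0 - E2) + h) * np.sin(B)])

def ecef_to_map(p_ecef, lat0_deg, lon0_deg, h0):
    o = geodetic_to_ecef(lat0_deg, lon0_deg, h0)     # map-frame origin O_e
    B0, L0 = np.radians(lat0_deg), np.radians(lon0_deg)
    # Rows: east, north, up unit vectors expressed in ECEF.
    R = np.array([[-np.sin(L0),              np.cos(L0),             0.0],
                  [-np.sin(B0)*np.cos(L0), -np.sin(B0)*np.sin(L0), np.cos(B0)],
                  [ np.cos(B0)*np.cos(L0),  np.cos(B0)*np.sin(L0), np.sin(B0)]])
    return R @ (p_ecef - o)                          # P_m = R (P_e - O_e)
```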
And S600, matching the target file and the target image through a matching algorithm based on a graph neural network to obtain the accurate position of the target to be detected.
Optionally, as shown in fig. 3, the longitude and latitude of the target are determined by relating the pixel of the target image at which the target to be measured is located to the corresponding longitude and latitude in the digital map image, and the altitude corresponding to the target longitude and latitude is determined from the corresponding digital elevation model. Specifically, the target image is scaled to a resolution of 1280×960 and passed to the SuperGlue algorithm program along with the target file. After initial parameters such as the key-point detection confidence threshold and the iteration count are set, the scaled image is matched against the digital map image using the SuperGlue feature point matching algorithm based on the graph neural network, yielding a set of matching-pair (i.e., matched feature point) pixel coordinates. If the number of matching pairs is smaller than the preset number, initial parameters such as the feature point detection confidence threshold and the iteration count are modified and matching is repeated until the number of matching pairs is larger than the preset number. An appropriate matching-confidence threshold is set in the algorithm program, matched feature points whose matching confidence is smaller than the threshold are eliminated, a homography matrix is calculated from the retained matched feature points, the corresponding pixel coordinates of the target in the digital map are calculated through the homography matrix, the specific longitude and latitude of the target are converted from the longitude and latitude range of the digital map image, and the elevation information of the target is obtained by looking it up in the corresponding digital elevation model file.
In one application scenario, the NMS radius, the key-point detection confidence threshold and the iteration count are initialized to 2, 0.001 and 10 respectively, and matching against the corresponding digital map yields a set of matching-pair pixel coordinates. The matching confidence threshold is set to 0.85; if fewer than 4 matching pairs exceed this threshold, the three parameters (NMS radius, key-point detection confidence threshold, iteration count) are modified within the ranges 2-5, 0.001-0.01 and 10-20 respectively, and matching is repeated. Once more than 4 matching pairs are obtained, a homography matrix is calculated from the matching pairs, the accurate longitude and latitude corresponding to the target are finally found, and the accurate altitude is then looked up in the corresponding digital elevation model file.
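The iteration of fig. 3 can be sketched as follows. Here `match_superglue` is a hypothetical stand-in for a SuperGlue inference call (the public SuperGlue release exposes a PyTorch `Matching` module rather than such a one-line helper), while `cv2.findHomography` and `cv2.perspectiveTransform` are real OpenCV functions; the parameter values mirror those of this scenario:

```python
import cv2
import numpy as np

def locate_target(scaled_img, map_img, target_px, min_pairs=4, max_retries=10):
    params = {"nms_radius": 2, "keypoint_threshold": 0.001, "iterations": 10}
    for _ in range(max_retries):
        # match_superglue (assumed helper): returns matched point arrays in
        # each image plus a per-match confidence array.
        pts_a, pts_b, conf = match_superglue(scaled_img, map_img, **params)
        keep = conf > 0.85                      # matching confidence threshold
        if keep.sum() >= min_pairs:
            break
        # Too few pairs: relax parameters within the ranges stated above.
        params["nms_radius"] = min(params["nms_radius"] + 1, 5)
        params["keypoint_threshold"] = min(params["keypoint_threshold"] * 2, 0.01)
        params["iterations"] = min(params["iterations"] + 5, 20)
    else:
        raise RuntimeError("not enough confident matches")
    H, _ = cv2.findHomography(pts_a[keep], pts_b[keep], cv2.RANSAC)
    # Map the target pixel from the measured image into the map tile.
    src = np.array([[target_px]], dtype=np.float64)   # shape (1, 1, 2)
    return cv2.perspectiveTransform(src, H)[0, 0]     # pixel in the map image
```

The returned map-tile pixel is then converted to longitude and latitude through the tile's tagged range, and the altitude is read from the matching DEM file.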
From the above, the embodiment of the application provides a digital map matching target positioning method based on a graph neural network. By adding image preprocessing and image orthographic transformation steps and applying a matching algorithm based on the graph neural network, the method greatly improves the image matching success rate, is applicable to the photoelectric platform of a large unmanned aerial vehicle, realizes target positioning by a large unmanned aerial vehicle under detection conditions of long-distance, large-inclination-angle and high-altitude flight, and obtains highly accurate positioning results at long distance and large inclination angle.
Embodiment 2
The embodiment of the application provides an electronic device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the memory is used for storing the software program and a module, and the processor executes various functional applications and data processing by running the software program and the module stored in the memory. The memory and the processor are connected by a bus. In particular, the processor implements any of the steps of the above-described embodiment by running the above-described computer program stored in the memory.
It should be appreciated that in embodiments of the present application, the processor may be a central processing unit (Central Processing Unit, CPU), which may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory may include read-only memory, flash memory, and random access memory, and provides instructions and data to the processor. Some or all of the memory may also include non-volatile random access memory.
It should be appreciated that the above-described integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiment, or may be implemented by instructing related hardware by a computer program, where the computer program may be stored in a computer readable storage medium, and the computer program may implement the steps of each method embodiment described above when executed by a processor. The computer program comprises computer program code, and the computer program code can be in a source code form, an object code form, an executable file or some intermediate form and the like. The computer readable medium may include: any entity or device capable of carrying the computer program code described above, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. The content of the computer readable storage medium can be appropriately increased or decreased according to the requirements of the legislation and the patent practice in the jurisdiction.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
It should be noted that, the method and the details thereof provided in the foregoing embodiments may be combined into the apparatus and the device provided in the embodiments, and are referred to each other and are not described in detail.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/device embodiments described above are merely illustrative, e.g., the division of modules or elements described above is merely a logical functional division, and may be implemented in other ways, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. The digital map matching target positioning method based on the graph neural network is characterized by comprising the following steps of:
acquiring a data set and actual measurement data, wherein the data set comprises a digital map image and a digital elevation model file of an area to be measured, and the actual measurement data comprises an actual measurement image containing an object to be measured in the flight process;
based on an earth ellipsoid model, performing preliminary calculation on the position of the target to be detected to obtain the approximate position of the target to be detected;
searching a target file corresponding to the actual measurement image in the data set according to the approximate position, wherein the target file comprises a target digital map image and a target digital elevation model file;
preprocessing the actual measurement image through a dark channel defogging algorithm and a histogram enhancement algorithm;
converting the preprocessed actual measurement image into a target image at a top-down orthographic viewing angle through image orthographic transformation;
and matching the target file with the target image through a matching algorithm based on a graph neural network to obtain the accurate position of the target to be detected.
2. The digital map matching target positioning method of claim 1, wherein said acquiring a data set comprises:
acquiring a digital map containing an area to be detected and cutting the digital map into a group of digital map images with preset resolution, wherein each pixel in each digital map image represents a preset longitude and latitude range;
and cutting and splicing the standard digital elevation model files according to the digital map images to obtain digital elevation model files corresponding to the preset resolutions of the digital map images one by one, and marking the longitude and latitude ranges of the digital elevation model files to obtain the data set.
3. The digital map matching target positioning method of claim 2, wherein said measured data further comprises: position and attitude angle information at a time corresponding to the actual measurement image;
the obtaining of the measured data comprises:
in the flight process, when a target to be detected appears in the field of view, acquiring an actual measurement image by using a photoelectric platform;
and acquiring position and attitude angle information at the moment corresponding to the measured image by using an inertial navigation device and an encoder.
4. The method for locating a digital map matching target according to claim 3, wherein said preliminary calculation of the position of the target to be measured includes:
based on an earth ellipsoid model, the position of the target to be measured is initially calculated by utilizing the acquired position and attitude angle information and the pixel position of the target to be measured in an actual measurement image, and the approximate position of the target to be measured is obtained, wherein the approximate position comprises the approximate longitude, latitude and height of the target to be measured.
5. The digital map matching target positioning method of any of claims 2-4, wherein said searching for a target file corresponding to said measured image in said dataset comprises:
and judging whether the digital map image with the longitude and latitude range meeting the preset condition exists in the data set according to the approximate position, if so, judging the digital map image to be a target digital map image corresponding to the actually measured image, and simultaneously finding out a target digital elevation model file with the same longitude and latitude range as the target digital map image.
6. The digital map matching target positioning method of any of claims 1-4, wherein said preprocessing said measured image by a dark channel defogging algorithm and a histogram enhancement algorithm comprises:
defogging the measured image with a dark channel defogging algorithm, then further processing it with an image sharpening algorithm;
and enhancing the further-processed measured image by histogram equalization using the CLAHE algorithm.
7. The digital map matching target positioning method according to any one of claims 1 to 4, wherein the converting of the preprocessed measured image into the target image at a top-down orthographic viewing angle through image orthographic transformation includes:
calculating an orthographic transformation matrix according to the measured data, calculating the coordinates in a map coordinate system of the object point corresponding to each pixel in the preprocessed measured image, and carrying out orthographic transformation based on the orthographic transformation matrix to obtain a target image at a top-down orthographic viewing angle.
8. The digital map matching target positioning method of any of claims 1-4, wherein said matching said target file and said target image by a matching algorithm based on a graph neural network comprises:
scaling the target image to a scaled image with the same resolution as the target file, setting initial parameters, and matching the scaled image with the target file by using a SuperGlue feature point matching algorithm based on a graph neural network to obtain a group of matched pixel coordinates;
when the number of matching pairs is smaller than a preset number, modifying the initial parameters and re-matching; when the number of matching pairs is larger than the preset number, calculating a homography matrix from the matching pairs, calculating the corresponding pixel coordinates of the target to be measured in the target digital map image according to the homography matrix, obtaining the specific longitude and latitude of the target to be measured from the longitude and latitude range of the target digital map image, and obtaining the elevation information of the target to be measured from the target digital elevation model file.
9. An electronic device, comprising: memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 8 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 8.
CN202311566300.1A 2023-11-23 2023-11-23 Digital map matching target positioning method based on graph neural network Active CN117274391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311566300.1A CN117274391B (en) Digital map matching target positioning method based on graph neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311566300.1A CN117274391B (en) Digital map matching target positioning method based on graph neural network

Publications (2)

Publication Number Publication Date
CN117274391A true CN117274391A (en) 2023-12-22
CN117274391B CN117274391B (en) 2024-02-06

Family

ID=89208531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311566300.1A Active CN117274391B (en) Digital map matching target positioning method based on graph neural network

Country Status (1)

Country Link
CN (1) CN117274391B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056605A (en) * 2016-05-26 2016-10-26 西安空间无线电技术研究所 In-orbit high-precision image positioning method based on image coupling
CN107490364A (en) * 2017-09-01 2017-12-19 中国科学院长春光学精密机械与物理研究所 A kind of wide-angle tilt is imaged aerial camera object positioning method
CN112419374A (en) * 2020-11-11 2021-02-26 北京航空航天大学 Unmanned aerial vehicle positioning method based on image registration
WO2021121306A1 (en) * 2019-12-18 2021-06-24 北京嘀嘀无限科技发展有限公司 Visual location method and system
CN115527128A (en) * 2022-07-22 2022-12-27 北方信息控制研究院集团有限公司 Semantic segmentation based aerial image rapid positioning method
CN115630236A (en) * 2022-10-19 2023-01-20 感知天下(北京)信息科技有限公司 Global fast retrieval positioning method of passive remote sensing image, storage medium and equipment
CN116045921A (en) * 2023-01-09 2023-05-02 国家石油天然气管网集团有限公司 Target positioning method, device, equipment and medium based on digital elevation model
CN117073669A (en) * 2023-08-18 2023-11-17 武汉华中天经通视科技有限公司 Aircraft positioning method
CN117078756A (en) * 2023-08-16 2023-11-17 感知天下(北京)信息科技有限公司 Airborne ground target accurate positioning method based on scene retrieval matching

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056605A (en) * 2016-05-26 2016-10-26 西安空间无线电技术研究所 In-orbit high-precision image positioning method based on image coupling
CN107490364A (en) * 2017-09-01 2017-12-19 中国科学院长春光学精密机械与物理研究所 A kind of wide-angle tilt is imaged aerial camera object positioning method
WO2021121306A1 (en) * 2019-12-18 2021-06-24 北京嘀嘀无限科技发展有限公司 Visual location method and system
CN112419374A (en) * 2020-11-11 2021-02-26 北京航空航天大学 Unmanned aerial vehicle positioning method based on image registration
CN115527128A (en) * 2022-07-22 2022-12-27 北方信息控制研究院集团有限公司 Semantic segmentation based aerial image rapid positioning method
CN115630236A (en) * 2022-10-19 2023-01-20 感知天下(北京)信息科技有限公司 Global fast retrieval positioning method of passive remote sensing image, storage medium and equipment
CN116045921A (en) * 2023-01-09 2023-05-02 国家石油天然气管网集团有限公司 Target positioning method, device, equipment and medium based on digital elevation model
CN117078756A (en) * 2023-08-16 2023-11-17 感知天下(北京)信息科技有限公司 Airborne ground target accurate positioning method based on scene retrieval matching
CN117073669A (en) * 2023-08-18 2023-11-17 武汉华中天经通视科技有限公司 Aircraft positioning method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘侃 et al., "海鹰智库丛书 电子信息技术篇" [Sea Eagle Think Tank Series, Electronic Information Technology volume], Beijing Institute of Technology Press, pages 76-80 *

Also Published As

Publication number Publication date
CN117274391B (en) 2024-02-06

Similar Documents

Publication Publication Date Title
CN111080526B (en) Method, device, equipment and medium for measuring and calculating farmland area of aerial image
Kedzierski et al. Radiometric quality assessment of images acquired by UAV’s in various lighting and weather conditions
EP3876189A1 (en) Geographic object detection device, geographic object detection method, and geographic object detection program
CN110926475B (en) Unmanned aerial vehicle waypoint generation method and device and electronic equipment
CN112016478B (en) Complex scene recognition method and system based on multispectral image fusion
KR20110082903A (en) Method of compensating and generating orthoimage for aerial-photo
CN111583119B (en) Orthoimage splicing method and equipment and computer readable medium
CN112967344B (en) Method, device, storage medium and program product for calibrating camera external parameters
CN115661262A (en) Internal and external parameter calibration method and device and electronic equipment
US20220074743A1 (en) Aerial survey method, aircraft, and storage medium
CN112330576A (en) Distortion correction method, device and equipment for vehicle-mounted fisheye camera and storage medium
CN111982132B (en) Data processing method, device and storage medium
CN117274391B (en) Digital map matching target positioning method based on graphic neural network
CN113012084A (en) Unmanned aerial vehicle image real-time splicing method and device and terminal equipment
CN113570554A (en) Single image visibility detection method based on scene depth
CN116402693B (en) Municipal engineering image processing method and device based on remote sensing technology
CN111582296B (en) Remote sensing image comprehensive matching method and device, electronic equipment and storage medium
US20230368357A1 (en) Visibility measurement device
CN112163562B (en) Image overlapping area calculation method and device, electronic equipment and storage medium
CN114898321A (en) Method, device, equipment, medium and system for detecting road travelable area
CN107025636A (en) With reference to the image defogging method and device and electronic installation of depth information
CN114445583A (en) Data processing method and device, electronic equipment and storage medium
CN114897968B (en) Method and device for determining vehicle vision, computer equipment and storage medium
CN117201708B (en) Unmanned aerial vehicle video stitching method, device, equipment and medium with position information
CN113870365B (en) Camera calibration method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant