CN118097339B - Deep learning sample enhancement method and device based on low-altitude photogrammetry - Google Patents

Deep learning sample enhancement method and device based on low-altitude photogrammetry

Info

Publication number
CN118097339B
CN118097339B (application CN202410502536.7A)
Authority
CN
China
Prior art keywords: image, sample, low, coordinates, altitude
Prior art date
Legal status (assumption, not a legal conclusion): Active
Application number
CN202410502536.7A
Other languages
Chinese (zh)
Other versions
CN118097339A (en)
Inventor
吴弦骏
廖丽敏
张涵
王冲
闻平
吴小东
吴杰
王莹
付航
曹磊
杨勇喜
吴荣光
杨彦梅
朱琪
Current Assignee: PowerChina Kunming Engineering Corp Ltd
Original Assignee
PowerChina Kunming Engineering Corp Ltd
Priority date
Filing date
Publication date
Application filed by PowerChina Kunming Engineering Corp Ltd
Priority claimed from CN202410502536.7A
Publication of CN118097339A
Application granted
Publication of CN118097339B


Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to the field of remote sensing image target recognition, in particular to a deep learning sample enhancement method and device based on low-altitude photogrammetry. Through unmanned aerial vehicle (UAV) flight and aerial triangulation, the method produces a DEM, a DOM, and accurate POS data for the original images; rectangular samples of target objects are then drawn manually on the DOM; with the aid of the POS data, the image-point coordinates of the sample corner points on the corresponding original images are computed from the photogrammetric collinearity equations; finally, the rectangular sample is reconstructed on each original image by means of the minimum circumscribed rectangle. With a single round of manual drawing, this deep learning sample enhancement method based on low-altitude photogrammetry expands the sample set by roughly several times to tens of times, improving the generalization ability and application effect of a deep learning model. Its parameters are simple to set, it is stable and reliable, and it greatly improves sample-production efficiency while preserving accurate sample identification.

Description

Deep learning sample enhancement method and device based on low-altitude photogrammetry
Technical Field
The application relates to the field of remote sensing image target recognition, in particular to a deep learning sample enhancement method and device based on low-altitude photogrammetry.
Background
Deep learning learns the intrinsic regularities and hierarchical representations of sample data; its goal is to give machines a human-like ability to analyze and learn, and thereby to solve many complex pattern-recognition problems. In recent years, deep learning technology has made great progress and has been applied in many fields such as computer vision, speech recognition, and natural language processing. Studies have shown that the performance of a deep learning model depends to a large extent on the quantity and quality of its sample data. Sample data enhancement generates new training samples by transforming, expanding, and recombining the original training data, which effectively alleviates problems such as insufficient data volume and uneven sample distribution, and improves the robustness and generalization ability of the model.
In the field of deep-learning-based remote sensing applications, common image data enhancement techniques include mirror flipping, random cropping, rotation, scaling, translation, brightness adjustment, and noise addition; these techniques alter the appearance, angle, and illumination of an image, thereby increasing the diversity of the training samples. However, the existing remote sensing image data enhancement techniques use only the final orthophoto product, while the original images from which the orthophoto was produced go unused, which is to some extent a waste of resources. In fact, in low-altitude photogrammetric remote sensing, a target object on the orthophoto may appear in several or even tens of the original images (the number depends on the overlap of the aerial images). If the target object in the original images can be extracted and associated through mathematical logic and used to expand the deep learning sample data, the number of samples can be increased by roughly several times to tens of times, greatly improving the training effect. A deep learning sample enhancement method based on the original images associated with low-altitude photogrammetry is therefore needed.
Disclosure of Invention
The main aim of the application is to provide a deep learning sample enhancement method based on the original images associated with low-altitude photogrammetry, which uses the POS data generated by aerial triangulation of the original images as auxiliary information and automatically constructs original-image enhancement samples by means of the photogrammetric collinearity equations and the minimum circumscribed rectangle.
To achieve the above purpose, the invention adopts the following technical scheme:
According to a first aspect of the present invention, the invention claims a deep learning sample enhancement method based on low-altitude photogrammetry, comprising:
acquiring low-altitude aerial images, wherein an unmanned aerial vehicle captures the low-altitude aerial images carrying original exterior orientation elements through variable-height flight;
importing the low-altitude aerial images into a first application program, performing aerial triangulation based on ground-surveyed image control points, and constructing and exporting a digital elevation model and a digital orthophoto;
importing the digital elevation model into a second application program, interpreting target objects, manually drawing rectangular samples of a plurality of target objects, and acquiring the corner-point coordinates of the rectangular samples;
calculating the image-point coordinates corresponding to the corner-point coordinates with the photogrammetric collinearity equations, and judging whether the image-point coordinates lie within a specified range; if so, retaining the image as a sample-enhancement image of the rectangular sample, otherwise discarding it;
and constructing a simple circumscribed rectangle from the image-point coordinates of the rectangular sample, obtaining the minimum circumscribed rectangle by geometric rotation, and rotating the minimum circumscribed rectangle back to obtain the original-image enhancement sample reconstructed from the rectangular sample on the low-altitude aerial image.
Further, when manually drawing the rectangular samples of a plurality of target objects, in order to cope with the image-point displacement caused by the relative height between the target object and the ground, each rectangular sample is expanded outward by a margin beyond the extent of the target object; the expansion distance is taken as H/2, where H is the average relative height of the target object.
Further, when performing the geometric rotations to obtain the minimum circumscribed rectangle, the fact that one side of a polygon's minimum circumscribed rectangle is collinear with one side of the polygon is used to limit the range of rotation angles: only the directions equal to the angles of the polygon's sides need to be tested as rotation candidates.
Further, the acquiring of low-altitude aerial images, wherein the unmanned aerial vehicle captures the low-altitude aerial images carrying original exterior orientation elements through variable-height flight, further comprises:
determining an aerial photography range: using the Ovital map (Aowei map) to delineate a region rich in deep-learning target objects as the aerial photography range;
downloading digital elevation model data for the UAV aerial survey, setting the flying height according to the spatial scale of the target objects, laying out the flight strips so that the forward and side overlap requirements are met, and capturing the low-altitude aerial images carrying original exterior orientation elements by variable-height flight.
Further, the importing of the low-altitude aerial images into the first application program, performing aerial triangulation based on ground-surveyed image control points, and constructing and exporting a digital elevation model and a digital orthophoto, further comprises:
importing the low-altitude aerial images carrying original exterior orientation elements into ContextCapture or Pix4D, and performing aerial triangulation based on ground-surveyed image control points;
exporting the exterior orientation elements corresponding to each image of the low-altitude aerial images after aerial triangulation, the exterior orientation elements of image i being (Xs_i, Ys_i, Zs_i, φ_i, ω_i, κ_i), where (Xs_i, Ys_i, Zs_i) are the position coordinates and (φ_i, ω_i, κ_i) are the rotation angles;
constructing and exporting the digital elevation model and the digital orthophoto.
Further, the importing of the digital elevation model into the second application program, interpreting target objects, manually drawing rectangular samples of a plurality of target objects, and acquiring the corner-point coordinates of the rectangular samples, further comprises:
drawing samples: loading the exported digital elevation model into ArcMap software, identifying target objects by visual interpretation, and manually drawing rectangular samples of a plurality of target objects;
acquiring the corner-point coordinates of the rectangular samples, each rectangular sample comprising 4 corner points, the ground coordinates corresponding to a corner point P being (X_P, Y_P, Z_P), where X_P and Y_P can be read directly or converted by the software, and Z_P is the elevation value corresponding to those coordinates on the digital orthophoto, taken from the digital elevation model.
Further, the calculating of the image-point coordinates corresponding to the corner-point coordinates with the photogrammetric collinearity equations, judging whether the image-point coordinates lie within the specified range, retaining the image as a sample-enhancement image of the rectangular sample if so and discarding it otherwise, further comprises:
determining the calculation range: selecting the four flight strips nearest to corner point P of the rectangular sample, and traversing all images of the four strips;
according to the exterior orientation elements of all images of the four strips and the ground coordinates (X_P, Y_P, Z_P) of corner point P, calculating the image-point coordinates corresponding to corner point P of the rectangular sample with the photogrammetric collinearity equations;
selecting the sample-enhancement images of the rectangular sample: judging whether the image-point coordinates corresponding to the corner points P lie within the specified range;
if the image-point coordinates of all 4 corner points of the rectangular sample lie within the range of image i, retaining image i as a sample-enhancement image of the rectangular sample, otherwise discarding it.
Further, the constructing of a simple circumscribed rectangle from the image-point coordinates of the rectangular sample, obtaining the minimum circumscribed rectangle by geometric rotation, and rotating the minimum circumscribed rectangle back to obtain the original-image enhancement sample reconstructed from the rectangular sample on the low-altitude aerial image, further comprises:
acquiring the image-point coordinates of the rectangular sample, obtaining the 4 corner-point coordinates of the simple circumscribed rectangle in the original image-space coordinates of the image, and calculating the area of the simple circumscribed rectangle;
rotating the 4 corner points counterclockwise by a preset angle about the centre of the 4 corner points of the simple circumscribed rectangle;
solving the second simple circumscribed rectangle of the 4 corner points after rotation by the preset angle, and recording its area, vertex coordinates, and rotation angle;
from all the simple circumscribed rectangles obtained over the multiple rotations, taking the one with the smallest area and obtaining its vertex coordinates and rotation angle;
and rotating the smallest-area simple circumscribed rectangle back by the same angle; the resulting minimum circumscribed rectangle is the original-image enhancement sample reconstructed from the rectangular sample on image i.
According to a second aspect of the present invention, the invention claims a deep learning sample enhancement device based on low-altitude photogrammetry, comprising:
one or more processors;
and a memory having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the deep learning sample enhancement method based on low-altitude photogrammetry described above.
Drawings
FIG. 1 is a workflow diagram of a deep learning sample enhancement method based on low-altitude photogrammetry in accordance with an embodiment of the present application;
FIG. 2 is a schematic diagram of the effect of the deep learning sample enhancement method based on low-altitude photogrammetry according to the embodiment of the present application;
fig. 3 is a block diagram of a deep learning sample enhancement device based on low-altitude photogrammetry according to an embodiment of the present application.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
The terms "first," "second," "third," and the like in this disclosure are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, for example two or three, unless specifically defined otherwise. All directional indications (such as up, down, left, right, front, back) in the embodiments of the present application are used merely to explain the relative positional relationships and movements of the components in a particular posture (as shown in the drawings); if that posture changes, the directional indications change accordingly. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
According to a first embodiment of the present invention, the invention claims a deep learning sample enhancement method based on low-altitude photogrammetry, referring to fig. 1, comprising:
acquiring low-altitude aerial images, wherein an unmanned aerial vehicle captures the low-altitude aerial images carrying original exterior orientation elements through variable-height flight;
importing the low-altitude aerial images into a first application program, performing aerial triangulation based on ground-surveyed image control points, and constructing and exporting a digital elevation model and a digital orthophoto;
importing the digital elevation model into a second application program, interpreting target objects, manually drawing rectangular samples of a plurality of target objects, and acquiring the corner-point coordinates of the rectangular samples;
calculating the image-point coordinates corresponding to the corner-point coordinates with the photogrammetric collinearity equations, and judging whether the image-point coordinates lie within a specified range; if so, retaining the image as a sample-enhancement image of the rectangular sample, otherwise discarding it;
and constructing a simple circumscribed rectangle from the image-point coordinates of the rectangular sample, obtaining the minimum circumscribed rectangle by geometric rotation, and rotating the minimum circumscribed rectangle back to obtain the original-image enhancement sample reconstructed from the rectangular sample on the low-altitude aerial image.
Further, when manually drawing the rectangular samples of a plurality of target objects, in order to cope with the image-point displacement caused by the relative height between the target object and the ground, each rectangular sample is expanded outward by a margin beyond the extent of the target object; the expansion distance is taken as H/2, where H is the average relative height of the target object.
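The margin expansion described above can be sketched as follows (a hypothetical helper for illustration only; the axis-aligned assumption and corner ordering are the author's, not the patent's):

```python
def expand_rectangle(corners, h_avg):
    """Expand an axis-aligned ground rectangle outward by a margin of
    h_avg / 2, where h_avg is the average relative height H of the target.

    corners: four (X, Y) ground-coordinate tuples.
    Returns the four corners of the expanded rectangle, ordered
    counterclockwise from the lower-left corner.
    """
    margin = h_avg / 2.0
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    x_min, x_max = min(xs) - margin, max(xs) + margin
    y_min, y_max = min(ys) - margin, max(ys) + margin
    return [(x_min, y_min), (x_max, y_min), (x_max, y_max), (x_min, y_max)]
```

For a 10 m × 4 m rectangle and an average target height H = 2 m, each side of the sample moves outward by 1 m.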
Further, when performing the geometric rotations to obtain the minimum circumscribed rectangle, the fact that one side of a polygon's minimum circumscribed rectangle is collinear with one side of the polygon is used to limit the range of rotation angles: only the directions equal to the angles of the polygon's sides need to be tested as rotation candidates.
Further, the acquiring of low-altitude aerial images, wherein the unmanned aerial vehicle captures the low-altitude aerial images carrying original exterior orientation elements through variable-height flight, further comprises:
determining an aerial photography range: using the Ovital map (Aowei map) to delineate a region rich in deep-learning target objects as the aerial photography range;
downloading digital elevation model data for the UAV aerial survey, setting the flying height according to the spatial scale of the target objects, laying out the flight strips so that the forward and side overlap requirements are met, and capturing the low-altitude aerial images carrying original exterior orientation elements by variable-height flight.
In this embodiment, the forward (heading) overlap is set to be no less than 70% and the side overlap no less than 30%.
Further, the importing of the low-altitude aerial images into the first application program, performing aerial triangulation based on ground-surveyed image control points, and constructing and exporting a digital elevation model (DEM) and a digital orthophoto (DOM), further comprises:
importing the low-altitude aerial images carrying original exterior orientation elements into ContextCapture or Pix4D, and performing aerial triangulation based on ground-surveyed image control points;
exporting the exterior orientation elements corresponding to each image of the low-altitude aerial images after aerial triangulation, the exterior orientation elements of image i being (Xs_i, Ys_i, Zs_i, φ_i, ω_i, κ_i), where (Xs_i, Ys_i, Zs_i) are the position coordinates and (φ_i, ω_i, κ_i) are the rotation angles;
constructing and exporting the digital elevation model and the digital orthophoto.
Further, the importing of the digital elevation model into the second application program, interpreting target objects, manually drawing rectangular samples of a plurality of target objects, and acquiring the corner-point coordinates of the rectangular samples, further comprises:
drawing samples: loading the exported digital elevation model into ArcMap software, identifying target objects by visual interpretation, and manually drawing rectangular samples of a plurality of target objects;
acquiring the corner-point coordinates of the rectangular samples, each rectangular sample comprising 4 corner points, the ground coordinates corresponding to a corner point P being (X_P, Y_P, Z_P), where X_P and Y_P can be read directly or converted by the software, and Z_P is the elevation value corresponding to those coordinates on the digital orthophoto, taken from the digital elevation model.
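The elevation value Z_P at a corner point can be sampled from the DEM grid, for example by bilinear interpolation. The sketch below assumes a hypothetical regular grid stored row-major with its origin at the lower-left corner; the patent does not prescribe a storage layout:

```python
def dem_elevation(dem, origin_x, origin_y, cell, X, Y):
    """Bilinearly interpolate the elevation Z_P at ground point (X, Y).

    dem:  2-D list of elevations, dem[row][col]; row 0 lies at origin_y
          and rows increase toward +Y.
    cell: grid spacing in ground units.
    """
    # Fractional grid coordinates of the query point.
    gx = (X - origin_x) / cell
    gy = (Y - origin_y) / cell
    col, row = int(gx), int(gy)
    fx, fy = gx - col, gy - row
    # The four surrounding grid elevations.
    z00 = dem[row][col]
    z10 = dem[row][col + 1]
    z01 = dem[row + 1][col]
    z11 = dem[row + 1][col + 1]
    # Weighted average of the four neighbours.
    return (z00 * (1 - fx) * (1 - fy) + z10 * fx * (1 - fy)
            + z01 * (1 - fx) * fy + z11 * fx * fy)
```

In practice the DEM exported from the first application program would be read with a raster library, but the interpolation itself reduces to this weighted average.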
Further, the calculating of the image-point coordinates corresponding to the corner-point coordinates with the photogrammetric collinearity equations, judging whether the image-point coordinates lie within the specified range, retaining the image as a sample-enhancement image of the rectangular sample if so and discarding it otherwise, further comprises:
determining the calculation range: selecting the four flight strips nearest to corner point P of the rectangular sample, and traversing all images of the four strips;
according to the exterior orientation elements of all images of the four strips and the ground coordinates (X_P, Y_P, Z_P) of corner point P, calculating the image-point coordinates (x, y) corresponding to corner point P of the rectangular sample with the photogrammetric collinearity equations;
selecting the sample-enhancement images of the rectangular sample: judging whether the image-point coordinates corresponding to the corner points P lie within the specified range;
if the image-point coordinates of all 4 corner points of the rectangular sample lie within the range of image i, retaining image i as a sample-enhancement image of the rectangular sample, otherwise discarding it.
In this embodiment, the collinearity equations are:
x = −f · [a1(X_P − Xs) + b1(Y_P − Ys) + c1(Z_P − Zs)] / [a3(X_P − Xs) + b3(Y_P − Ys) + c3(Z_P − Zs)]
y = −f · [a2(X_P − Xs) + b2(Y_P − Ys) + c2(Z_P − Zs)] / [a3(X_P − Xs) + b3(Y_P − Ys) + c3(Z_P − Zs)]
where a1, a2, a3, b1, b2, b3, c1, c2, c3 are the parameters of the rotation matrix between the image-space auxiliary coordinate system and the ground photogrammetric coordinate system, and f is the camera focal length. The relation between the rotation-matrix parameters and the attitude angles (φ, ω, κ) of a photo is:
a1 = cosφ·cosκ − sinφ·sinω·sinκ
a2 = −cosφ·sinκ − sinφ·sinω·cosκ
a3 = −sinφ·cosω
b1 = cosω·sinκ
b2 = cosω·cosκ
b3 = −sinω
c1 = sinφ·cosκ + cosφ·sinω·sinκ
c2 = −sinφ·sinκ + cosφ·sinω·cosκ
c3 = cosφ·cosω
Selecting the sample-enhancement images of the rectangular sample: judging whether the image-point coordinates lie within the specified range. Let the width and height of the image be w and h pixels respectively, and the pixel size be μ; then the image-point coordinates (x, y), with the origin at the image centre, should satisfy: −w·μ/2 ≤ x ≤ w·μ/2 and −h·μ/2 ≤ y ≤ h·μ/2.
If the image-point coordinates of all 4 corner points of the rectangular sample lie within the range of image i, image i is retained as a sample-enhancement image of the rectangular sample; otherwise it is discarded.
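The collinearity projection and the in-range test can be sketched as follows. This is a minimal illustration using the standard photogrammetric φ-ω-κ rotation-matrix convention; the function and variable names are the author's, not the patent's:

```python
import math

def rotation_matrix(phi, omega, kappa):
    """Rotation-matrix parameters a1..c3 for attitude angles (phi, omega,
    kappa), in the standard photogrammetric phi-omega-kappa convention."""
    sp, cp = math.sin(phi), math.cos(phi)
    so, co = math.sin(omega), math.cos(omega)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return ((cp * ck - sp * so * sk, -cp * sk - sp * so * ck, -sp * co),
            (co * sk, co * ck, -so),
            (sp * ck + cp * so * sk, -sp * sk + cp * so * ck, cp * co))

def project_point(eo, f, X, Y, Z):
    """Collinearity equations: map ground point (X, Y, Z) to image point (x, y).

    eo: exterior orientation elements (Xs, Ys, Zs, phi, omega, kappa).
    f:  camera focal length, in the same linear unit as x and y.
    """
    Xs, Ys, Zs, phi, omega, kappa = eo
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = rotation_matrix(phi, omega, kappa)
    dX, dY, dZ = X - Xs, Y - Ys, Z - Zs
    denom = a3 * dX + b3 * dY + c3 * dZ
    x = -f * (a1 * dX + b1 * dY + c1 * dZ) / denom
    y = -f * (a2 * dX + b2 * dY + c2 * dZ) / denom
    return x, y

def in_image(x, y, w, h, mu):
    """True if (x, y), with origin at the image centre, falls on the
    w x h pixel frame with pixel size mu."""
    return abs(x) <= w * mu / 2 and abs(y) <= h * mu / 2
```

For a nadir photo (φ = ω = κ = 0) at a flying height of 1000 m with f = 35 mm, a ground point 100 m from the nadir maps to x = 3.5 mm, well inside a typical frame.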
Further, the constructing of a simple circumscribed rectangle from the image-point coordinates of the rectangular sample, obtaining the minimum circumscribed rectangle by geometric rotation, and rotating the minimum circumscribed rectangle back to obtain the original-image enhancement sample reconstructed from the rectangular sample on the low-altitude aerial image, further comprises:
acquiring the image-point coordinates of the rectangular sample, obtaining the 4 corner-point coordinates of the simple circumscribed rectangle in the original image-space coordinates of the image, and calculating the area of the simple circumscribed rectangle;
rotating the 4 corner points counterclockwise by a preset angle about the centre of the 4 corner points of the simple circumscribed rectangle;
solving the second simple circumscribed rectangle of the 4 corner points after rotation by the preset angle, and recording its area, vertex coordinates, and rotation angle;
from all the simple circumscribed rectangles obtained over the multiple rotations, taking the one with the smallest area and obtaining its vertex coordinates and rotation angle;
and rotating the smallest-area simple circumscribed rectangle back by the same angle; the resulting minimum circumscribed rectangle is the original-image enhancement sample reconstructed from the rectangular sample on image i.
In this embodiment, the initial simple circumscribed rectangle of the 4 corner points is obtained first: let the image-point coordinates of the 4 corner points be (x1, y1), (x2, y2), (x3, y3), (x4, y4); in the original image-space coordinates of the image, the 4 corner points of the initial simple circumscribed rectangle are (x_min, y_min), (x_max, y_min), (x_max, y_max), (x_min, y_max), where x_min = min(x1, x2, x3, x4), x_max = max(x1, x2, x3, x4), y_min = min(y1, y2, y3, y4), y_max = max(y1, y2, y3, y4).
The area S of the initial simple circumscribed rectangle is: S = (x_max − x_min) · (y_max − y_min).
Geometric rotation: with the centre (x_c, y_c) of the 4 corner points as the rotation centre, the 4 corner points are rotated counterclockwise by a certain angle. The mathematical basis is the rotation of a point in the plane about a fixed point: a point (x, y) rotated counterclockwise about a point (x_c, y_c) by angle θ gives the point (x′, y′), where x′ = (x − x_c)·cosθ − (y − y_c)·sinθ + x_c and y′ = (x − x_c)·sinθ + (y − y_c)·cosθ + y_c.
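The rotation search described above can be sketched as a brute-force sweep. This is an illustrative implementation; the 1-degree step and the 90-degree sweep range are assumptions for the sketch, not values fixed by the patent:

```python
import math

def rotate(px, py, cx, cy, theta):
    """Rotate point (px, py) counterclockwise about (cx, cy) by angle theta."""
    x = (px - cx) * math.cos(theta) - (py - cy) * math.sin(theta) + cx
    y = (px - cx) * math.sin(theta) + (py - cy) * math.cos(theta) + cy
    return x, y

def min_circumscribed_rectangle(points, step_deg=1.0):
    """Minimum circumscribed rectangle of the corner points, found by
    rotating them in small angular steps, keeping the axis-aligned
    circumscribed rectangle of least area, then rotating it back."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    best = None  # (area, axis-aligned rectangle, rotation angle)
    deg = 0.0
    while deg < 90.0:  # a rectangle's orientation repeats every 90 degrees
        theta = math.radians(deg)
        rot = [rotate(px, py, cx, cy, theta) for px, py in points]
        xs = [p[0] for p in rot]
        ys = [p[1] for p in rot]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if best is None or area < best[0]:
            rect = [(min(xs), min(ys)), (max(xs), min(ys)),
                    (max(xs), max(ys)), (min(xs), max(ys))]
            best = (area, rect, theta)
        deg += step_deg
    area, rect, theta = best
    # Reverse-rotate the winning rectangle by the same angle.
    return [rotate(px, py, cx, cy, -theta) for px, py in rect], area
```

For a square tilted 45° in the image plane, the sweep recovers the square itself (area 2 for side √2) rather than its axis-aligned box of area 4.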
Fig. 2 shows the target-object sample enhancement effect achieved by the present invention.
According to a second embodiment of the present invention, the invention claims a deep learning sample enhancement device based on low-altitude photogrammetry, referring to fig. 3, comprising:
one or more processors;
and a memory having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the deep learning sample enhancement method based on low-altitude photogrammetry described above.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other forms.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or as software functional units.
The foregoing describes only embodiments of the present application; the patent scope of the application is not limited thereto. Any equivalent structural or flow changes made using the description and drawings of the application, or their direct or indirect application in other related technical fields, are likewise covered by the patent protection scope of the application.
The embodiments of the application have been described in detail above, but they are merely examples, and the application is not limited to the above-described embodiments. It will be apparent to those skilled in the art that any equivalent modifications or substitutions to this application are within the scope of the application, and therefore, all equivalent changes and modifications, improvements, etc. that do not depart from the spirit and scope of the principles of the application are intended to be covered by this application.

Claims (9)

1. A deep learning sample enhancement method based on low-altitude photogrammetry, comprising:
acquiring low-altitude aerial images, wherein an unmanned aerial vehicle captures the low-altitude aerial images carrying original exterior orientation elements through variable-height flight;
importing the low-altitude aerial images into a first application program, performing aerial triangulation based on ground-surveyed image control points, and constructing and exporting a digital elevation model and a digital orthophoto;
importing the digital elevation model into a second application program, interpreting target objects, manually drawing rectangular samples of a plurality of target objects, and acquiring the corner-point coordinates of the rectangular samples;
calculating the image-point coordinates corresponding to the corner-point coordinates with the photogrammetric collinearity equations, and judging whether the image-point coordinates lie within a specified range; if so, retaining the image as a sample-enhancement image of the rectangular sample, otherwise discarding it;
and constructing a simple circumscribed rectangle from the image-point coordinates of the rectangular sample, obtaining the minimum circumscribed rectangle by geometric rotation, and rotating the minimum circumscribed rectangle back to obtain the original-image enhancement sample reconstructed from the rectangular sample on the low-altitude aerial image.
2. The method for enhancing a deep learning sample based on low-altitude photogrammetry according to claim 1, wherein, when manually drawing the rectangular samples of the plurality of target objects, in order to compensate for the image-point offset caused by the relative height between the target object and the ground, each rectangular sample is expanded with a margin beyond the extent of the target object, the expansion distance being H/2, where H is the average relative height of the target objects.
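The margin expansion of claim 2 can be sketched as follows (a minimal illustration; the function name and the axis-aligned rectangle representation are assumptions, not part of the claim):

```python
def expand_sample(x_min, y_min, x_max, y_max, avg_height):
    """Expand a rectangular sample outward on every side by H/2,
    where H is the average relative height of the target object,
    to absorb the image-point offset caused by object height."""
    margin = avg_height / 2.0
    return (x_min - margin, y_min - margin, x_max + margin, y_max + margin)
```

For example, a 10 m by 10 m sample of objects averaging 4 m in relative height is pushed out by 2 m on each side.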
3. The method for enhancing a deep learning sample based on low-altitude photogrammetry according to claim 1, wherein, when geometrically rotating the simple circumscribed rectangle and rotating the result back, the fact that a minimum circumscribed rectangle of a polygon always has one side collinear with a side of the polygon is used to limit the range of rotation angles, and the directions matching the angles of the polygon's sides are tested as the basis for rotation.
4. The method for enhancing a deep learning sample based on low-altitude photogrammetry according to claim 1, wherein the acquiring of the low-altitude aerial images, in which the unmanned aerial vehicle captures the low-altitude aerial images carrying original exterior orientation elements through low-altitude flight, further comprises:
determining the aerial photography area, using the Ovital (Aowei) map to delimit a region rich in deep learning target objects as the photography area;
performing the unmanned aerial photography: downloading digital elevation model data, setting the flight altitude according to the spatial scale of the target objects, laying out flight strips that satisfy the required forward and side overlap, and capturing the low-altitude aerial images carrying original exterior orientation elements in low-altitude flight.
5. The method for enhancing a deep learning sample based on low-altitude photogrammetry according to claim 1, wherein the importing of the low-altitude aerial images into a first application program, the aerial triangulation based on ground-surveyed image control points, and the constructing and exporting of the digital elevation model and the digital orthophoto further comprise:
importing the low-altitude aerial images carrying original exterior orientation elements into ContextCapture or Pix4D, and performing aerial triangulation based on ground-surveyed image control points;
exporting the exterior orientation elements of each image of the low-altitude aerial images after aerial triangulation, the exterior orientation elements of an image i being denoted (X_Si, Y_Si, Z_Si, φ_i, ω_i, κ_i), wherein (X_Si, Y_Si, Z_Si) are the position coordinates and (φ_i, ω_i, κ_i) are the rotation angles;
constructing and exporting the digital elevation model and the digital orthophoto.
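The rotation angles (φ_i, ω_i, κ_i) exported in claim 5 determine the image's rotation matrix. The sketch below assumes the φ-ω-κ convention common in photogrammetry textbooks; other software conventions differ in sign and axis order, so this matrix is an illustration, not necessarily the one exported by ContextCapture or Pix4D:

```python
import math

def rotation_matrix(phi, omega, kappa):
    """3x3 rotation matrix built from the exterior orientation angles
    under the phi-omega-kappa convention (angles in radians)."""
    sp, cp = math.sin(phi), math.cos(phi)
    so, co = math.sin(omega), math.cos(omega)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return [
        [cp * ck - sp * so * sk, -cp * sk - sp * so * ck, -sp * co],
        [co * sk,                 co * ck,                -so],
        [sp * ck + cp * so * sk, -sp * sk + cp * so * ck,  cp * co],
    ]
```

With all three angles zero the matrix reduces to the identity, and for any angles it remains orthonormal, as a rotation matrix must.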
6. The method for enhancing a deep learning sample based on low-altitude photogrammetry according to claim 1, wherein the importing of the digital orthophoto into a second application program, the interpreting of target objects, the manual drawing of rectangular samples of a plurality of target objects, and the acquiring of the corner coordinates of the rectangular samples further comprise:
drawing the samples: loading the exported digital orthophoto into ArcMap software, identifying target objects by visual interpretation, and manually drawing rectangular samples of a plurality of target objects;
acquiring the corner coordinates of the rectangular samples, wherein each rectangular sample has 4 corners, the ground coordinates of a corner P being denoted (X_P, Y_P, Z_P), wherein X_P and Y_P can be read directly from, or converted by, the software, and Z_P is the elevation value of the digital elevation model at the corresponding coordinates.
7. The method for enhancing a deep learning sample based on low-altitude photogrammetry according to claim 5, wherein the calculating of the image point coordinates corresponding to the corner coordinates using the photogrammetric collinearity equations, the judging of whether the image point coordinates fall within the specified range, and the retaining or discarding further comprise:
determining the calculation range: selecting the four flight strips nearest to the corner P of the rectangular sample, and traversing all images of the four strips;
calculating, from the exterior orientation elements of each image of the four flight strips and the ground coordinates (X_P, Y_P, Z_P) of the corner P, the image point coordinates (x, y) of the corner P by the photogrammetric collinearity equations:
x = -f · [a1(X_P - X_S) + b1(Y_P - Y_S) + c1(Z_P - Z_S)] / [a3(X_P - X_S) + b3(Y_P - Y_S) + c3(Z_P - Z_S)]
y = -f · [a2(X_P - X_S) + b2(Y_P - Y_S) + c2(Z_P - Z_S)] / [a3(X_P - X_S) + b3(Y_P - Y_S) + c3(Z_P - Z_S)]
wherein f is the focal length, (X_S, Y_S, Z_S) are the position coordinates of the image, and a1 through c3 are the elements of the rotation matrix determined by the rotation angles (φ, ω, κ);
selecting the sample-enhancement images of the rectangular sample by judging whether the image point coordinates corresponding to each corner P fall within the specified range;
and if the image point coordinates of all 4 corners of the rectangular sample fall within the extent of an image i, retaining the image i as a sample-enhancement image of the rectangular sample, otherwise discarding it.
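The projection and range test of claim 7 can be sketched as follows (a minimal illustration: R is the image's 3x3 rotation matrix, f its focal length, and the frame test uses image half-width and half-height in image-space units; all names are assumptions):

```python
def project_to_image(ground_pt, cam_pos, R, f):
    """Collinearity projection of a ground corner (X, Y, Z) into
    image-space coordinates (x, y) for one image."""
    dX = [ground_pt[i] - cam_pos[i] for i in range(3)]
    # camera-frame coordinates: (u, v, w) = R^T (P - S)
    u = R[0][0] * dX[0] + R[1][0] * dX[1] + R[2][0] * dX[2]
    v = R[0][1] * dX[0] + R[1][1] * dX[1] + R[2][1] * dX[2]
    w = R[0][2] * dX[0] + R[1][2] * dX[1] + R[2][2] * dX[2]
    return (-f * u / w, -f * v / w)

def covers_sample(corners, cam_pos, R, f, half_w, half_h):
    """Retain an image only if all 4 sample corners project inside its frame."""
    for c in corners:
        x, y = project_to_image(c, cam_pos, R, f)
        if abs(x) > half_w or abs(y) > half_h:
            return False
    return True
```

For a nadir-looking camera (identity rotation) at 100 m height with a 50 mm focal length, a ground point 10 m off nadir projects 5 mm from the image centre, inside a typical frame; a point 100 m off nadir projects outside it, so that image is discarded.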
8. The method for enhancing a deep learning sample based on low-altitude photogrammetry according to claim 1, wherein the constructing of the simple circumscribed rectangle from the image point coordinates of the rectangular sample, the geometric rotation to obtain the minimum circumscribed rectangle, and the rotation of the minimum circumscribed rectangle back to obtain the enhanced sample reconstructed from the rectangular sample on the low-altitude aerial image further comprise:
acquiring the image point coordinates of the rectangular sample, obtaining the 4 corner coordinates of its simple circumscribed rectangle in the original image-space coordinates of the image, and calculating the area of the simple circumscribed rectangle;
rotating the 4 corners counter-clockwise by a preset angle about the center of the 4 corners of the simple circumscribed rectangle;
solving for a second simple circumscribed rectangle of the 4 corners after rotation by the preset angle, and recording its area, vertex coordinates and rotation angle;
from all the simple circumscribed rectangles obtained over the repeated rotations, taking the one with the smallest area and obtaining its vertex coordinates and rotation angle;
and rotating the smallest-area simple circumscribed rectangle back by the same angle, the resulting minimum circumscribed rectangle being the enhanced sample reconstructed from the rectangular sample on the image i.
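The rotating search of claim 8 can be sketched as follows (a minimal illustration; the step size is a hypothetical parameter, as the claim only speaks of a preset angle):

```python
import math

def min_bounding_rect(points, step_deg=1.0):
    """Rotate the corner points counter-clockwise in small steps about
    their center, keep the axis-aligned ("simple") circumscribed
    rectangle of smallest area, then rotate that rectangle back by the
    same angle to obtain the approximate minimum circumscribed rectangle."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    best = None  # (area, angle, rectangle corners in the rotated frame)
    angle = 0.0
    while angle < 90.0:  # a rectangle's orientation repeats every 90 degrees
        t = math.radians(angle)
        ct, st = math.cos(t), math.sin(t)
        rot = [((p[0] - cx) * ct - (p[1] - cy) * st,
                (p[0] - cx) * st + (p[1] - cy) * ct) for p in points]
        xs = [p[0] for p in rot]
        ys = [p[1] for p in rot]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if best is None or area < best[0]:
            box = [(min(xs), min(ys)), (max(xs), min(ys)),
                   (max(xs), max(ys)), (min(xs), max(ys))]
            best = (area, t, box)
        angle += step_deg
    _, t, box = best
    ct, st = math.cos(-t), math.sin(-t)  # rotate back by the same angle
    return [(x * ct - y * st + cx, x * st + y * ct + cy) for (x, y) in box]
```

For a unit diamond (a square rotated 45 degrees) the axis-aligned box has area 4, while the search recovers the tilted square of area 2.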
9. A deep learning sample enhancement device based on low-altitude photogrammetry, comprising:
one or more processors;
a memory having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the deep learning sample enhancement method based on low-altitude photogrammetry according to any one of claims 1 to 8.
CN202410502536.7A 2024-04-25 2024-04-25 Deep learning sample enhancement method and device based on low-altitude photogrammetry Active CN118097339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410502536.7A CN118097339B (en) 2024-04-25 2024-04-25 Deep learning sample enhancement method and device based on low-altitude photogrammetry


Publications (2)

Publication Number Publication Date
CN118097339A CN118097339A (en) 2024-05-28
CN118097339B true CN118097339B (en) 2024-07-02

Family

ID=91155076


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106949880A (en) * 2017-03-10 2017-07-14 中国电建集团昆明勘测设计研究院有限公司 Method for processing overhigh local overlapping degree of unmanned aerial vehicle images in measurement area with large elevation fluctuation
CN110940318A (en) * 2019-10-22 2020-03-31 上海航遥信息技术有限公司 Aerial remote sensing real-time imaging method, electronic equipment and storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN115223090A (en) * 2022-06-22 2022-10-21 张亚峰 Airport clearance barrier period monitoring method based on multi-source remote sensing image
CN115294293B (en) * 2022-10-08 2023-03-24 速度时空信息科技股份有限公司 Method for automatically compiling high-precision map road reference line based on low-altitude aerial photography result
CN117292337A (en) * 2023-11-24 2023-12-26 中国科学院空天信息创新研究院 Remote sensing image target detection method




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant