CN114998551B - Grid reconstruction quality optimization method, system, computer and readable storage medium - Google Patents


Info

Publication number
CN114998551B
CN114998551B (application CN202210925727.5A)
Authority
CN
China
Prior art keywords
camera
image
initial
grid
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210925727.5A
Other languages
Chinese (zh)
Other versions
CN114998551A (en)
Inventor
吕伟
曾江佑
周利
廖成慧
郝海风
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Booway New Technology Co ltd
Original Assignee
Jiangxi Booway New Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Booway New Technology Co ltd filed Critical Jiangxi Booway New Technology Co ltd
Priority to CN202210925727.5A priority Critical patent/CN114998551B/en
Publication of CN114998551A publication Critical patent/CN114998551A/en
Application granted granted Critical
Publication of CN114998551B publication Critical patent/CN114998551B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205Re-meshing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method, a system, a computer and a readable storage medium for optimizing mesh reconstruction quality. The method comprises the following steps: acquiring the measurement images and point cloud of unmanned aerial vehicle aerial photography, and constructing an initial mesh model; constructing an initial camera pair list, wherein a camera pair is the pair of cameras corresponding to two images; screening an optimal, minimal set of preferred camera pairs from the initial camera pair list according to the pairing indexes of the camera pairs on the mesh surface of the initial mesh model; calculating the variation amplitude of each mesh vertex in its normal direction according to the similarity difference on the images of the two cameras in the preferred camera pair; and updating the mesh vertices by a gradient descent method at different image resolutions. The beneficial effect of this application is that the model mesh is optimized through the similarity difference of the model surface on the images, solving the prior-art problems of a low degree of detail and an uneven distribution of detail areas on the generated mesh surface.

Description

Grid reconstruction quality optimization method, system, computer and readable storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to a method, a system, a computer and a readable storage medium for optimizing grid reconstruction quality.
Background
With the rapid development of the unmanned aerial vehicle field, visual construction of large-scale aerial scenes can be realized by combining high-resolution images captured by unmanned aerial vehicles with multi-view stereo methods, and on this basis large-scale three-dimensional model reconstruction is widely applied.
At present, three-dimensional reconstruction of the image data captured by unmanned aerial vehicles is realized by means of mesh reconstruction, which refers to constructing a mesh from the dense point cloud obtained by dense matching. However, during mesh reconstruction with the irregular camera pairs captured by an unmanned aerial vehicle, a mesh in the mesh model may be continuous on the image yet discontinuous on the real object. As shown in figure 1 of the specification, owing to the particular position and structure of the continuous part displayed in the image, a part of the model surface is observed by only a single camera. As a result, the camera pairs captured by unmanned aerial vehicles at the present stage cause partial loss of the finally generated mesh surface, which suffers from a low degree of detail and an uneven distribution of detail areas.
Disclosure of Invention
Based on this, an object of the present invention is to provide a method, a system, a computer and a readable storage medium for optimizing mesh reconstruction quality, which optimize the camera pairs using an approximate scene model and optimize the model mesh through the similarity difference of the model surface on the images, so as to solve the prior-art problems of a low degree of detail and an uneven distribution of detail areas on the generated mesh surface.
In a first aspect, the present application provides a method for optimizing quality of mesh reconstruction, where the method includes the following steps:
acquiring a measurement image and point cloud of aerial photography of the unmanned aerial vehicle, and constructing an initial grid model; wherein the initial mesh model is a mesh model with a triangular mesh constructed by a Delaunay tetrahedral graph cutting method;
constructing an initial camera pair list, wherein a camera pair is the pair of cameras corresponding to two images;
screening an optimal, minimal set of preferred camera pairs from the initial camera pair list according to the pairing indexes of the camera pairs on the mesh surface of the initial mesh model;
calculating the variation amplitude of each mesh vertex in its normal direction according to the similarity difference on the images of the two cameras in the preferred camera pair;
updating the mesh vertices by a gradient descent method at different image resolutions.
The mesh reconstruction quality optimization method provided by the invention comprises: obtaining the unmanned aerial vehicle measurement images and point cloud, and generating an initial mesh model from the point cloud; pairing the images of the initial mesh model and optimizing the pairing to generate an optimal, minimal camera pair set (namely, the preferred camera pairs); on the basis of the initial mesh, according to the similarity measure of the mesh surface between the images of a camera pair, formulating the equation that converts the similarity change into the gradient change of points on the mesh surface, calculating the gradient change of surface points in the normal direction, and discretizing it onto the mesh vertices to obtain the gradient change value of each mesh vertex; and dividing the images into several scales by resolution, where each scale performs mesh subdivision using the mesh output at the previous scale as input, and at each scale the input mesh iteratively updates the mesh vertices by a gradient descent method, outputting a mesh model with a higher degree of detail. By the above method, the prior-art problems of a low degree of detail and an uneven distribution of detail areas on the generated mesh surface are effectively solved.
Preferably, in the method for optimizing the quality of mesh reconstruction described in the present application, the step of constructing an initial camera pair list includes:
calculating, for each point in the point cloud, the number of points visible to the camera corresponding to each image;
for any camera pair, applying condition limits so that the cameras observing the largest number of points in common with a given camera serve as its initial pairings; the condition limits constrain the reprojection pixel distance error of a three-dimensional point between the two cameras, and the angle between the rays from the two camera centers to that point;
finally screening to obtain the initial camera pair set.
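A minimal sketch of this initial pairing step, keeping only the common-visible-point criterion and omitting the reprojection-error and ray-angle condition limits (the function and variable names are invented for illustration):

```python
import numpy as np

def initial_camera_pairs(visibility, k=10):
    """Build an initial camera-pair list from point-cloud visibility.

    visibility: boolean/0-1 array of shape (n_points, n_cameras); entry
    (p, i) is nonzero when point p is visible in camera i's image.
    For each camera i, return the up-to-k cameras that share the most
    commonly observed points with it (a simplification: the patent's
    reprojection-error and ray-angle conditions are not applied here).
    """
    vis = np.asarray(visibility, dtype=np.int64)
    common = vis.T @ vis            # (n_cams, n_cams) common-point counts
    np.fill_diagonal(common, -1)    # a camera never pairs with itself
    pairs = {}
    for i in range(common.shape[0]):
        order = np.argsort(common[i])[::-1][:k]   # most shared points first
        pairs[i] = [j for j in order if common[i, j] > 0]
    return pairs
```

In a full implementation, each candidate in `pairs[i]` would additionally be filtered by the reprojection pixel distance error and the ray-angle bound before entering the initial camera pair set.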
Preferably, the step of screening an optimal, minimal set of preferred camera pairs from the initial camera pair list according to the pairing indexes of the camera pairs on the mesh surface of the initial mesh model specifically includes:
inputting the initial mesh model and an image sequence;
defining a parallax quality evaluation with a reference parallax of 50°, computed from the average parallax angle between the two camera centers of a pair as seen from each point on the model surface, where the average is weighted by the area of the image region onto which the surface projects in each camera, and each pixel of that region is back-projected to a point on the surface;
defining a resolution quality evaluation with a reference resolution difference of 25%, computed from the distance of a model surface point to each camera center normalized by the camera focal length (the vector from the camera center to the point), and from the normalized difference of the two cameras' resolutions averaged over the surface;
defining a pixel overlap quality evaluation, computed from the overlap of the projections relative to the overall area of the image corresponding to each camera;
defining a camera pair evaluation based on symmetry about the surface, computed from the average angular difference of the normals at the model surface points, where for each camera the angle from the camera center to the point is measured relative to the point's normal, and the term takes the value -1 if the two cameras lie on the same side of the normal and 1 otherwise;
defining an energy equation as the weighted sum of four terms: the energy measuring parallax quality, the energy of the camera pair's resolution similarity, the energy of the camera pair's pixel overlap quality, and the energy function of the symmetry quality of the camera images with respect to the model surface, each multiplied by its energy term weight;
taking the camera pair that minimizes the energy equation as the preferred camera pair.
Preferably, in the method for optimizing mesh reconstruction quality according to the present application, the step of calculating the variation amplitude of a mesh vertex in its normal direction according to the similarity difference on the images of the two cameras in the preferred camera pair specifically includes:
inputting the initial mesh model and the preferred camera pairs;
computing, for each optimized vertex among the mesh vertices of the initial mesh model, the magnitude of its change along its normal direction together with the scale factor between the cameras, where the scale factor is determined by the average depth at which the cameras observe the vertex and by the camera focal length;
calculating the variation amplitude of the vertex in its normal direction as the sum, over points within the triangular faces adjacent to the vertex, of each point's variation amplitude weighted by the barycentric coordinate of that point with respect to the vertex;
representing the points within a triangular face as continuous points on the initial mesh model surface, and calculating the gradient change of each continuous point from: the pixel coordinates and the image size; the barycentric coordinate coefficients; the gradient of the change of the image-region similarity measure; the image gradient; the Jacobian matrix of the transformation relation that discretizes a continuous point to the image and its image point; and the vector from the camera center toward the image point.
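The barycentric discretization of per-point gradients onto the mesh vertices can be sketched as follows (a hypothetical simplification with invented names; the per-point gradients are assumed to have been computed already from the image similarity measure):

```python
import numpy as np

def accumulate_vertex_gradients(faces, face_point_grads, face_point_bary,
                                n_vertices):
    """Discretize per-point normal-direction gradients onto mesh vertices.

    faces: list of (v0, v1, v2) vertex index triples.
    face_point_grads[f]: scalar change magnitudes of sample points in face f.
    face_point_bary[f]: matching barycentric coordinates (w0, w1, w2).
    Each vertex accumulates the barycentric-weighted magnitudes of the
    sample points in its adjacent faces.
    """
    grad = np.zeros(n_vertices)
    for face, grads, barys in zip(faces, face_point_grads, face_point_bary):
        for g, bary in zip(grads, barys):
            for v, w in zip(face, bary):
                grad[v] += w * g   # weight sample by its barycentric coord
    return grad
```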
Preferably, in the method for optimizing the quality of mesh reconstruction described in the present application, the step of updating the mesh vertices by a gradient descent method at different image resolutions specifically includes:
dividing the measurement images and the corresponding cameras into three resolution levels with scaling factors of 0.25, 0.5 and 1.0;
taking the initial mesh as input and performing triangular subdivision at the level with scaling factor 0.25, the subdivision condition being that, for any triangular face, the number of pixels of its projected area on the image exceeds 8;
taking the mesh subdivided at the current level as the input mesh of the next level;
processing the levels in order from the smallest to the largest scaling factor, and iteratively calculating new positions for all vertices of the mesh model from the calculation formula using all camera pairs;
during the iterative vertex update, if the variation amplitude of the current vertex is greater than or equal to that at the previous update, the current vertex position is not updated; if the variation amplitude of the current vertex is smaller than that at the previous update, the current vertex position is updated.
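The coarse-to-fine update loop with the per-vertex stopping rule might look like the sketch below (the step size, the `amplitude_fn` callback, and the omission of the between-level subdivision are assumptions, not the patent's exact procedure):

```python
import numpy as np

def refine_vertices(verts, normals, amplitude_fn, levels=(0.25, 0.5, 1.0),
                    iters=20, step=0.5):
    """Coarse-to-fine vertex refinement by gradient descent.

    verts: (n, d) vertex positions; normals: (n, d) unit vertex normals.
    amplitude_fn(verts, level): per-vertex signed change magnitude along
    the normal at the given image scaling level.
    A vertex stops updating once the magnitude of its change no longer
    decreases between iterations, as in the stopping rule above.
    """
    for level in levels:
        prev = np.full(len(verts), np.inf)     # best magnitude seen so far
        for _ in range(iters):
            amp = amplitude_fn(verts, level)
            mag = np.abs(amp)
            move = mag < prev                  # update only improving vertices
            verts = verts + step * np.where(move, amp, 0.0)[:, None] * normals
            prev = np.where(move, mag, prev)
    return verts
```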
In a second aspect, the present application provides a mesh reconstruction quality optimization system, including:
a model construction module: used for obtaining the measurement images and point cloud of unmanned aerial vehicle aerial photography and constructing an initial mesh model, the initial mesh model being a triangular mesh model constructed by the Delaunay tetrahedral graph-cut method;
a camera pair list construction module: used for constructing an initial camera pair list, wherein a camera pair is the pair of cameras corresponding to two images;
a screening module: used for screening an optimal, minimal set of preferred camera pairs from the initial camera pair list according to the pairing indexes of the camera pairs on the mesh surface of the initial mesh model;
a vertex variation amplitude calculation module: used for calculating the variation amplitude of a mesh vertex in its normal direction according to the similarity difference on the images of the two cameras in the preferred camera pair;
an update module: used for updating the mesh vertices by a gradient descent method at different image resolutions.
Preferably, the camera pair list construction module specifically includes:
a visible point calculation unit: used for calculating, for each point in the point cloud, the number of points visible to the camera corresponding to each image;
a condition limiting unit: used for applying, for any camera pair, condition limits so that the cameras observing the largest number of points in common with a given camera serve as its initial pairings, the condition limits constraining the reprojection pixel distance error of a three-dimensional point between the two cameras and the angle between the rays from the two camera centers to that point;
an initial camera pair set screening unit: used for obtaining the initial camera pair set by final screening.
Preferably, the screening module specifically includes:
a first input unit: used for inputting the initial mesh model and an image sequence;
a parallax quality evaluation unit: used for defining a parallax quality evaluation with a reference parallax of 50°, computed from the average parallax angle between the two camera centers of a pair as seen from each point on the model surface, where the average is weighted by the area of the image region onto which the surface projects in each camera, and each pixel of that region is back-projected to a point on the surface;
a resolution quality evaluation unit: used for defining a resolution quality evaluation with a reference resolution difference of 25%, computed from the distance of a model surface point to each camera center normalized by the camera focal length (the vector from the camera center to the point), and from the normalized difference of the two cameras' resolutions averaged over the surface;
a pixel quality evaluation unit: used for defining a pixel overlap quality evaluation, computed from the overlap of the projections relative to the overall area of the image corresponding to each camera;
a symmetry evaluation unit: used for defining a camera pair evaluation based on symmetry about the surface, computed from the average angular difference of the normals at the model surface points, where for each camera the angle from the camera center to the point is measured relative to the point's normal, and the term takes the value -1 if the two cameras lie on the same side of the normal and 1 otherwise;
an energy equation definition unit: used for defining the energy equation as the weighted sum of four terms: the energy measuring parallax quality, the energy of the camera pair's resolution similarity, the energy of the camera pair's pixel overlap quality, and the energy function of the symmetry quality of the camera images with respect to the model surface, each multiplied by its energy term weight;
a camera pair evaluation unit: used for taking the camera pair that minimizes the energy equation as the preferred camera pair.
In a third aspect, the present application proposes a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method for optimizing the quality of mesh reconstruction as described in the first aspect above when executing the computer program.
In a fourth aspect, the present application proposes a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, implements the method for mesh reconstruction quality optimization as described in the first aspect above.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic diagram of a mesh model formed by mesh reconstruction using an irregular pair of cameras;
fig. 2 is a flowchart of a method for optimizing quality of mesh reconstruction according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for constructing an initial camera pair list in the method for optimizing quality of mesh reconstruction according to an embodiment of the present invention;
fig. 4 is a flowchart of a method for screening a preferred camera pair from an initial camera pair list in a mesh reconstruction quality optimization method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of symmetry of a camera pair in the method for optimizing quality of mesh reconstruction according to the first embodiment of the present invention;
fig. 6 is a flowchart of a method for calculating a variation amplitude of a vertex of a mesh in a normal direction of the vertex in a method for optimizing quality of mesh reconstruction according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a structure of mesh vertex update according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an image re-projection in an embodiment of the present invention;
fig. 9 is a flowchart of a method for updating mesh vertices by a gradient descent method in the mesh reconstruction quality optimization method according to the first embodiment of the present invention;
fig. 10 is a schematic structural diagram of a mesh reconstruction quality optimization system according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the application, and a person skilled in the art can also apply the application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that such a development effort might be complex and tedious, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure, without departing from the scope of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless otherwise defined, technical or scientific terms referred to herein should have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The use of the terms "including," "comprising," "having," and any variations thereof herein, is meant to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as referred to herein means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
With the rapid development of unmanned aerial vehicles, large-scale three-dimensional model reconstruction that combines high-resolution UAV imagery with multi-view stereo methods has been widely applied. Mesh reconstruction refers to building a mesh from the dense point cloud obtained by dense matching.
The invention provides a method for efficiently improving the quality of mesh details. The method optimizes the camera pairs using an approximate scene model and refines the model mesh through the similarity differences of the model surface across images, so as to solve the problems in the prior art of low surface detail and uneven distribution of detail areas in the generated mesh.
Referring to fig. 2, a method for optimizing mesh reconstruction quality according to a first embodiment of the present invention includes the following steps:
S11, obtaining the measurement images and point cloud of the unmanned aerial vehicle aerial photography, and constructing an initial mesh model.
Wherein the initial mesh model is a triangular mesh constructed by a Delaunay tetrahedralization and graph-cut method. A triangular mesh constructed in this way satisfies the empty-circumcircle property (the circumcircle of any triangle contains no other vertex) and maximizes the minimum angle, which effectively avoids generating long, narrow triangles.
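The empty-circumcircle property can be checked numerically with the classic in-circle determinant predicate. The following Python sketch is purely illustrative (it is not part of the patented method):

```python
import numpy as np

def in_circumcircle(a, b, c, d):
    """Classic Delaunay in-circle predicate: True if point d lies strictly
    inside the circumcircle of the counter-clockwise triangle (a, b, c)."""
    m = np.array([
        [a[0] - d[0], a[1] - d[1], (a[0] - d[0]) ** 2 + (a[1] - d[1]) ** 2],
        [b[0] - d[0], b[1] - d[1], (b[0] - d[0]) ** 2 + (b[1] - d[1]) ** 2],
        [c[0] - d[0], c[1] - d[1], (c[0] - d[0]) ** 2 + (c[1] - d[1]) ** 2],
    ])
    return bool(np.linalg.det(m) > 0)

tri = [(0, 0), (1, 0), (0, 1)]            # counter-clockwise right triangle
print(in_circumcircle(*tri, (0.5, 0.5)))  # circumcircle centre -> True
print(in_circumcircle(*tri, (2.0, 2.0)))  # far-away point      -> False
```

A Delaunay triangulation is exactly one in which this predicate is False for every triangle and every other vertex.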
And S12, constructing an initial camera pair list.
A camera pair is a pair of corresponding images. If a triangle of the model surface is simultaneously observed by two cameras, represented by their images, the two images form a camera pair of that triangle.
And S13, screening an optimal and minimal number of preferred camera pairs from the initial camera pair list according to the pairing indexes of the camera pairs on the mesh surface of the initial mesh model.
In an embodiment of the present invention, the pairing indexes specifically refer to the parallax quality, the resolution quality, the pixel overlap parameter, and the symmetry parameter of the camera pair. By calculating these index values and assigning weights, an optimal and minimal set of camera pairs is screened out as the preferred camera pairs.
And S14, calculating the variation amplitude of each mesh vertex in its normal direction according to the similarity difference, on the images, of the two cameras in the preferred camera pair.
In the embodiment of the invention, the variation amplitude of a mesh vertex in its normal direction is calculated as follows: based on the similarity measure of the mesh surface between the images of a camera pair, an equation is formulated that converts the change of similarity into the gradient change of points on the mesh surface; the gradient change of surface points in the normal direction is then obtained by differentiation and finally discretized onto the mesh vertices to obtain the gradient change value of each vertex.
And S15, updating the mesh vertices by a gradient descent method under different image resolutions.
In the embodiment of the invention, the images are divided into several scales by resolution. At each scale, the mesh output by the previous scale is used as input and subdivided, and the input mesh is iteratively updated by a gradient descent method on its vertices, so as to output a mesh model with a higher level of detail.
In conclusion, the mesh reconstruction quality optimization method provided by the invention obtains the UAV measurement images and the point cloud and generates an initial mesh model from the point cloud; it pairs the images of the initial mesh model and optimizes the pairing to generate an optimal and minimal camera pair set (namely, the preferred camera pairs); on the basis of the initial mesh, according to the similarity measure of the mesh surface between the images of each camera pair, it formulates an equation converting the similarity change into the gradient change of points on the mesh surface, computes the gradient change of surface points in the normal direction, and discretizes it onto the mesh vertices to obtain their gradient change values; finally, it divides the images into several scales by resolution, where each scale subdivides the mesh output by the previous scale, and at each scale the input mesh is iteratively updated by gradient descent on its vertices, outputting a mesh model with a higher level of detail. This method effectively solves the problems in the prior art of low surface detail and uneven distribution of detail areas in the generated mesh.
Referring to fig. 3, a flowchart of a method for constructing an initial camera pair list in a method for optimizing quality of mesh reconstruction provided in an embodiment of the present invention is shown, where the method includes:
Step S21, calculating, for the camera corresponding to each image, the number of points of the point cloud visible to it.
Step S22, applying limiting conditions to any camera pair, and taking, as the initial pairings of a camera, the cameras that observe the largest number of points in common with it (the number of cameras retained is given by a symbol whose image is not reproduced in the source).
Specifically, the limiting conditions consist of two constraints whose formula images are not reproduced in the source: one on the reprojection pixel distance error of a three-dimensional point between the two cameras, and one on the angle between the rays from the two camera centers to that point.
And S23, finally screening to obtain the initial camera pair set.
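The co-visibility screening above can be illustrated with a small sketch. Everything here is hypothetical (camera names, visibility sets, and the top-k parameter), and the reprojection-error and ray-angle conditions are assumed to have been applied already:

```python
def initial_pairs(visible, k=2):
    """For each camera, keep the k partner cameras sharing the most
    co-visible point ids (a stand-in for the patent's screening; the
    reprojection-error and ray-angle limits are assumed already applied)."""
    pairs = {}
    for ci in visible:
        shared = sorted(((len(visible[ci] & visible[cj]), cj)
                         for cj in visible if cj != ci), reverse=True)
        pairs[ci] = [cj for n, cj in shared[:k] if n > 0]
    return pairs

vis = {"c0": {1, 2, 3, 4}, "c1": {2, 3, 4, 5}, "c2": {4, 5, 6}, "c3": {9}}
print(initial_pairs(vis, k=1))
# {'c0': ['c1'], 'c1': ['c0'], 'c2': ['c1'], 'c3': []}
```

A camera with no co-visible points (here "c3") simply gets no initial pairing.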
Preferably, please refer to fig. 4, which is a flowchart illustrating a method for screening a preferred camera pair from an initial camera pair list in a mesh reconstruction quality optimization method according to an embodiment of the present invention, wherein the method includes:
And S31, inputting the initial mesh model and an image sequence.
The image sequence is a group of images, which may be several picture files or several frames of a video.
And step S32, defining parallax quality evaluation.
Based on the fact that camera pairs with a smaller difference from the reference parallax angle are more beneficial to model surface optimization, the reference parallax is taken as 50° and the parallax quality evaluation is defined by a formula whose image is not reproduced in the source. In it, the average parallax is the mean, over points on the model surface, of the parallax angle of the camera pair at each point; the area term refers to the image region that is the projection of the surface onto the camera; and the surface point is obtained by back-projecting an image pixel onto the surface.
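The per-point parallax angle that this quality term averages can be computed as below; the function and its argument names are illustrative, not the patent's:

```python
import numpy as np

def parallax_angle(p, ci_center, cj_center):
    """Angle in degrees, at surface point p, between the rays towards the
    two camera centres (the per-point parallax this term averages)."""
    u = np.asarray(ci_center, float) - np.asarray(p, float)
    v = np.asarray(cj_center, float) - np.asarray(p, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

print(parallax_angle([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # orthogonal rays -> 90.0
```

The quality term then rewards pairs whose average of these angles stays near the 50° reference.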
and step S33, defining resolution quality evaluation.
The resolution quality evaluation is defined by a formula whose image is not reproduced in the source. In it, one parameter represents a resolution difference of 25%; the distance from a point on the model surface to the camera center is normalized by the camera focal length, with the corresponding vector running from the camera center to the point; and the normalized difference of the resolutions of the two cameras, together with its average difference over the surface, is given by further formula images not reproduced in the source.
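A minimal sketch of the underlying quantity: the distance from a surface point to a camera center, normalized by the focal length, compared between two hypothetical cameras. The 25% threshold itself is not applied here:

```python
import numpy as np

def normalized_distance(p, center, focal):
    """Distance from surface point p to the camera centre, normalised by
    the focal length: a proxy for how finely the surface is resolved."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(center, float)) / focal)

# Two hypothetical cameras viewing the same point: close normalised
# distances mean similar resolution of the surface in both images.
d1 = normalized_distance([0, 0, 10], [0, 0, 0], focal=2.0)
d2 = normalized_distance([0, 0, 12], [0, 0, 0], focal=2.0)
print(d1, d2, abs(d1 - d2) / max(d1, d2))  # 5.0 6.0 ~0.167
```

A pair whose relative difference stays below the stated 25% bound would count as resolution-compatible.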
and step S34, defining pixel overlapping quality evaluation.
The pixel overlap quality evaluation is defined by a formula whose image is not reproduced in the source; the quantities involved are the image corresponding to each camera and the total area of the image.
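The overlap term compares the projected area of the surface with the whole image area. A sketch under assumed inputs (a rectangular projection polygon), using the shoelace formula for the projected area:

```python
def polygon_area(pts):
    """Shoelace area of a simple polygon given as (x, y) image points."""
    s = 0.0
    for i in range(len(pts)):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % len(pts)]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def overlap_ratio(projection, image_w, image_h):
    """Projected-surface area divided by the whole image area."""
    return polygon_area(projection) / (image_w * image_h)

# A 400x300 projected rectangle inside an 800x600 image covers 25%.
rect = [(0, 0), (400, 0), (400, 300), (0, 300)]
print(overlap_ratio(rect, 800, 600))  # -> 0.25
```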
In embodiments of the present invention, the above metrics can handle most situations, but in some cases they cause problems. As shown in fig. 5, the cameras shown all fully observe the surface, i.e. their pixel overlap ratios are similar; and since their epipolar lines are also similar, the two camera pairs are considered equally good (or equally bad). It is therefore necessary to further define the symmetry of a camera pair with respect to the surface.
Step S35, defining the camera pair evaluation based on symmetry with respect to the surface.
The evaluation is defined by a formula whose image is not reproduced in the source. In it, the quantities involved are: the average angular difference of the normals at points on the model surface; the difference between the angles that the rays from the two camera centers to a point make with its normal; and a sign that takes the value -1 if the two camera centers lie on the same side of the normal, and 1 otherwise.
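One plausible reading of the same-side criterion is to compare the tangential components of the two viewing rays at the point; the sketch below is an assumption-laden illustration, not the patent's definition:

```python
import numpy as np

def side_sign(p, normal, ci_center, cj_center):
    """Same-side test at surface point p: project both viewing rays onto
    the tangent plane; aligned tangential components mean both cameras
    lie on the same side of the normal (-1), otherwise +1."""
    p = np.asarray(p, float)
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    u = np.asarray(ci_center, float) - p
    v = np.asarray(cj_center, float) - p
    ut = u - np.dot(u, n) * n   # tangential component of ray to camera i
    vt = v - np.dot(v, n) * n   # tangential component of ray to camera j
    return -1 if np.dot(ut, vt) > 0 else 1

p, n = [0, 0, 0], [0, 0, 1]
print(side_sign(p, n, [1, 0, 2], [2, 0, 3]))   # same side     -> -1
print(side_sign(p, n, [-1, 0, 2], [2, 0, 3]))  # opposite side ->  1
```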
and S36, defining an energy equation.
The energy equation is defined by a formula whose image is not reproduced in the source. Its terms represent, respectively, the energy measuring the parallax quality, the energy of the camera pair for similar resolution quality, the energy of the pixel overlap quality of the camera pair, and an energy function of the symmetry quality of the camera images with respect to the model surface; the weights of the energy terms are given by values whose image is not reproduced in the source.
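Combining the four quality energies into a weighted sum and selecting the minimizing pair can be sketched as follows; the candidate pairs, energy values, and unit weights are all made up for illustration:

```python
def total_energy(terms, weights):
    """Weighted sum of the four pairing energies
    (parallax, resolution, overlap, symmetry)."""
    return sum(w * e for w, e in zip(weights, terms))

# Hypothetical candidate pairs and energy values; unit weights.
candidates = {
    ("c0", "c1"): (0.2, 0.1, 0.3, 0.1),
    ("c0", "c2"): (0.5, 0.4, 0.2, 0.3),
}
weights = (1.0, 1.0, 1.0, 1.0)
best = min(candidates, key=lambda pair: total_energy(candidates[pair], weights))
print(best)  # ('c0', 'c1') has the lower total energy
```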
and step S37, taking the camera pair which minimizes the energy equation calculation result as a preferred camera pair.
In the embodiment of the invention, the selected pairs form the preferred camera pair set (its symbol image is not reproduced in the source).
Further, referring to fig. 6, in a mesh reconstruction quality optimization method provided in an embodiment of the present invention, a flow chart of a method for calculating a variation range of a mesh vertex in a normal direction of the mesh vertex is provided, where the method specifically includes:
Step S51, inputting the initial mesh model and the preferred camera pairs.
And S52, calculating optimized vertexes in the grid vertexes in the initial grid model.
In particular, the optimized vertex is computed by a formula whose image is not reproduced in the source, wherein the quantities involved are: the variation amplitude of the vertex in the direction of its normal; the inter-camera scale factor (given by a further formula image not reproduced), which depends on the average depth at which the camera observes the surface and on the camera focal length; and the normal direction of the vertex.
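The shape of the per-vertex update (move the vertex along its unit normal by the computed amplitude, modulated by a scale factor) can be sketched as follows; the scale value is an assumption:

```python
import numpy as np

def step_along_normal(vertex, normal, amplitude, scale=1.0):
    """Move a vertex along its unit normal by amplitude * scale; the
    scale stands in for the inter-camera scale factor."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    return np.asarray(vertex, float) + scale * amplitude * n

v_new = step_along_normal([1.0, 2.0, 3.0], [0.0, 0.0, 2.0], amplitude=0.5)
print(v_new)  # vertex moved to (1, 2, 3.5)
```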
And step S53, calculating the variation range of the vertex in the normal direction.
In the embodiment of the invention, the variation of a vertex can be decomposed onto points inside its adjacent triangular faces. The variation amplitude of the vertex in the normal direction is then computed by a formula (image not reproduced in the source) involving the barycentric coordinates, with respect to the vertex, of the points inside the adjacent triangular faces, and the variation amplitude of those points.
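Distributing a face-interior point's variation to the three vertices of its triangle by barycentric weights, which is the discretization step described above, looks like this in outline; the data layout is hypothetical:

```python
import numpy as np

def accumulate_to_vertices(face, bary, delta, vertex_delta):
    """Distribute a face-interior point's variation amplitude to the three
    vertices of its triangle, weighted by barycentric coordinates."""
    for vid, w in zip(face, bary):
        vertex_delta[vid] += w * delta

deltas = np.zeros(3)                       # one accumulator per vertex
accumulate_to_vertices((0, 1, 2), (0.5, 0.3, 0.2), 1.0, deltas)
print(deltas)  # [0.5 0.3 0.2]
```

Summing these contributions over all sampled face points and all faces incident to a vertex yields that vertex's discrete gradient value.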
Step S54, calculating the gradient change of the continuous points on the initial mesh model.
Taking fig. 7 as an example, a schematic diagram of a mesh vertex update is shown. The variation of a mesh vertex is jointly determined by the variation amplitudes of the points inside its adjacent triangular faces. A point inside a triangular face is a continuous point on the initial mesh model surface. The gradient change of a continuous point on the model surface can be expressed by a formula (image not reproduced in the source) wherein the quantities involved are: the pixel coordinate; the image size; the barycentric-coordinate coefficient; the gradient of the change of the image-region similarity measure; the gradient of the image; the Jacobian matrix of the transformation that discretizes the continuous point to the image at an image point; and the vector in the direction from the camera center to the image point.
The specific derivation is as follows:
1. A camera pair corresponds to a pair of images. The similarity measure of a surface point over the image pair can be regarded as an objective function of that point. The variation amplitude of the point is the gradient of this objective function at the point; when the triangular mesh is optimal, this gradient is zero.
2. The similarity measure between the two images of the pair is the similarity between the first image and the reprojection of the second image onto it induced by the surface, as shown in fig. 8.
3. The similarity at a pixel between the first image and the reprojected image can be expressed as a function of the pixel. By the chain rule of differentiation, the gradient of the similarity measure function with respect to a point on the model surface is given by a formula whose image is not reproduced in the source.
4. The derivative of an image point with respect to a point on the model surface can be expressed in terms of the direction from the camera center to the image point, via an expression (image not reproduced in the source) involving a Jacobian matrix, the camera projection matrix, and the depth of the pixel in camera coordinates.
5. The gradient of the image at a pixel is obtained by processing the image with a gradient convolution kernel; the kernel used is given by an image not reproduced in the source.
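The patent shows the gradient kernel only as an image; a 3x3 Sobel kernel is a common concrete choice, used here purely as a stand-in:

```python
import numpy as np

# The patent's kernel is given only as an image; the 3x3 Sobel kernel is a
# common concrete choice for a horizontal image-gradient convolution.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)

def gradient_x(img):
    """'Valid' 3x3 sliding-window application of SOBEL_X (correlation;
    skipping the kernel flip only changes the sign for this kernel)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(img[y:y + 3, x:x + 3] * SOBEL_X)
    return out

ramp = np.tile(np.arange(5, dtype=float), (5, 1))  # intensity rises 1/pixel
print(gradient_x(ramp))  # constant response 8.0 everywhere
```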
6. The degree of similarity change between images is computed by the integral-image method (the formula images are not reproduced in the source); the auxiliary operations are, respectively, the computation of the variance, the mean and the integral image of an image, and R represents the similarity-measure window size.
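The integral-image (summed-area table) trick that makes window means and variances cheap per pixel can be sketched as follows; `window_sum` returns the sum of a square window, from which the mean (and, with a second table of squared values, the variance) follows:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def window_sum(ii, y, x, r):
    """Sum of the (2r+1)x(2r+1) window centred at (y, x), in O(1)."""
    y0, x0, y1, x1 = y - r, x - r, y + r + 1, x + r + 1
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(25, dtype=float).reshape(5, 5)
ii = integral_image(img)
print(window_sum(ii, 2, 2, r=1) / 9.0)  # 12.0 == img[1:4, 1:4].mean()
```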
Preferably, referring to fig. 9, a flowchart of a method for updating a mesh vertex by a gradient descent method in a mesh reconstruction quality optimization method provided by an embodiment of the present invention is shown, where the method specifically includes:
and S81, dividing the measurement image and the corresponding camera into three resolution levels of 0.25, 0.5 and 1.0 according to the scaling.
Step S82, triangular subdivision is performed at a scale of 0.25 with the initial mesh as an input.
The subdivision condition is that the number of pixels of a projection area on an image exceeds 8 for any triangular plane.
And S83, taking the grid after the current level subdivision as an input grid of the next level.
And S84, sequentially processing according to the zoom level from small to large, and iteratively calculating new positions of all vertexes of the mesh model.
The calculation formula for the new positions of all mesh vertices is given by an image not reproduced in the source.
Step S85, when the vertices are updated in an iteration, if the variation amplitude of the current vertex is larger than or equal to that of the previous update, the current vertex position is not updated.
Step S86, if the variation amplitude of the current vertex is smaller than that of the previous update, the current vertex position is updated.
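Steps S85 and S86 accept an update only while the variation amplitude keeps shrinking. A minimal sketch of that acceptance rule over a made-up amplitude sequence:

```python
def refine_vertex(amplitudes):
    """Accept an update only while the variation amplitude shrinks from
    one accepted iteration to the next; return how many updates applied."""
    updates, prev = 0, float("inf")
    for a in amplitudes:
        if a < prev:        # amplitude decreased -> apply the update
            updates += 1
            prev = a
        # otherwise keep the previous vertex position (no update)
    return updates

print(refine_vertex([0.9, 0.5, 0.6, 0.3]))  # 3 (the 0.6 step is rejected)
```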
Referring to fig. 10, a grid reconstruction quality optimization system according to a second embodiment of the present invention is specifically provided, which includes:
The model building module 101: used for obtaining the aerial measurement images and point cloud of the unmanned aerial vehicle and constructing an initial mesh model.
Wherein the initial mesh model is a mesh model with a triangular mesh constructed by a Delaunay tetrahedron graph cut method.
The camera pair list construction module 102: for constructing an initial list of camera pairs.
Wherein a camera pair is a pair of corresponding images.
The screening module 103: means for selecting an optimal and least number of preferred camera pairs from the initial list of camera pairs according to an index of pairing of the camera pairs on a grid surface of the initial grid model.
Vertex change amplitude calculation module 104: used for calculating the variation amplitude of each mesh vertex in its normal direction according to the similarity difference, on the images, of the two cameras in the preferred camera pair.
The update module 105: used for updating the mesh vertices by a gradient descent method under different image resolutions.
Preferably, in the system for optimizing quality of mesh reconstruction proposed by the present invention, the camera pair list building module specifically includes:
Visible point calculation unit: used for calculating, for the camera corresponding to each image, the number of points of the point cloud visible to it;
A condition limiting unit: used for applying limiting conditions to any camera pair, such that the cameras that observe the largest number of points in common with a given camera are taken as its initial pairings;
the limiting conditions consist of two constraints whose formula images are not reproduced in the source: one on the reprojection pixel distance error of a three-dimensional point between the two cameras, and one on the angle between the rays from the two camera centers to that point;
Initial camera pair set screening unit: used for the final screening that obtains the initial camera pair set.
Preferably, in the system for optimizing quality of mesh reconstruction proposed by the present invention, the screening module specifically includes:
a first input unit: for inputting the initial mesh model, a sequence of images;
A parallax quality evaluation unit: used for defining the parallax quality evaluation for a reference parallax of 50° (the formula image is not reproduced in the source), wherein the average parallax is the mean, over points on the model surface, of the parallax angle of the camera pair at each point; the area term refers to the image region that is the projection of the surface onto the camera; and the surface point is obtained by back-projecting an image pixel onto the surface;
A resolution quality evaluation unit: used for defining the resolution quality evaluation (the formula image is not reproduced in the source), wherein one parameter represents a resolution difference of 25%; the distance from a point on the model surface to the camera center is normalized by the camera focal length, with the corresponding vector running from the camera center to the point; and the normalized difference of the resolutions of the two cameras, together with its average difference over the surface, is given by further formula images not reproduced in the source;
A pixel quality evaluation unit: used for defining the pixel overlap quality evaluation (the formula image is not reproduced in the source), wherein the quantities involved are the image corresponding to each camera and the total area of the image;
A symmetry evaluation unit: used for defining the camera pair evaluation based on symmetry with respect to the surface (the formula image is not reproduced in the source), wherein the quantities involved are: the average angular difference of the normals at points on the model surface; the difference between the angles that the rays from the two camera centers to a point make with its normal; and a sign that takes the value -1 if the two camera centers lie on the same side of the normal, and 1 otherwise;
The energy equation definition unit: used for defining the energy equation (the formula image is not reproduced in the source), whose terms represent, respectively, the energy measuring the parallax quality, the energy of the camera pair for similar resolution quality, the energy of the pixel overlap quality of the camera pair, and an energy function of the symmetry quality of the camera pair with respect to the model surface, combined with energy-term weights whose values are given by an image not reproduced in the source;
A preferred camera pair unit: used for taking the camera pair that minimizes the energy equation calculation result as the preferred camera pair.
With the mesh reconstruction quality optimization system provided by the invention, in combination with the above mesh reconstruction quality optimization method, the UAV measurement images and the point cloud are obtained and an initial mesh model is generated from the point cloud; the images of the initial mesh model are paired and the pairing is optimized to generate an optimal and minimal camera pair set (namely, the preferred camera pairs); on the basis of the initial mesh, according to the similarity measure of the mesh surface between the images of each camera pair, an equation converting the similarity change into the gradient change of points on the mesh surface is formulated, the gradient change of surface points in the normal direction is computed and discretized onto the mesh vertices to obtain their gradient change values; and the images are divided into several scales by resolution, where each scale subdivides the mesh output by the previous scale, and at each scale the input mesh is iteratively updated by gradient descent on its vertices, outputting a mesh model with a higher level of detail. This effectively solves the problems in the prior art of low surface detail and uneven distribution of detail areas in the generated mesh.
It should be noted that the above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules may be located in different processors in any combination.
In addition, the mesh reconstruction quality optimization method of the embodiments of the present application described in conjunction with the drawings can be implemented by a computer device. The computer device may include a processor and a memory storing computer program instructions.
In particular, the processor may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
The memory may include, among other things, mass storage for data or instructions. By way of example, and not limitation, memory may include a Hard Disk Drive (Hard Disk Drive, abbreviated to HDD), a floppy Disk Drive, a Solid State Drive (SSD), flash memory, an optical Disk, a magneto-optical Disk, tape, or a Universal Serial Bus (USB) Drive or a combination of two or more of these. The memory may include removable or non-removable (or fixed) media, where appropriate. The memory may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory is a Non-Volatile (Non-Volatile) memory. In particular embodiments, the Memory includes Read-Only Memory (ROM) and Random Access Memory (RAM). The ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically Erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or FLASH Memory (FLASH), or a combination of two or more of these, where appropriate. The RAM may be a Static Random-Access Memory (SRAM) or a Dynamic Random-Access Memory (DRAM), where the DRAM may be a Fast Page Mode Dynamic Random-Access Memory (FPMDRAM), an Extended Data Out Dynamic Random Access Memory (EDODRAM), a Synchronous Dynamic Random Access Memory (SDRAM), and the like.
The memory may be used to store or cache various data files for processing and/or communication use, as well as possibly computer program instructions for execution by the processor.
The processor may be configured to read and execute the computer program instructions stored in the memory to implement any one of the mesh reconstruction quality optimization methods in the above embodiments.
The computer device may also include a communication interface and a bus. The processor, the memory and the communication interface are connected through a bus and complete mutual communication.
The communication interface is used for realizing communication among modules, devices, units and/or equipment in the embodiment of the application. The communication interface may also be implemented with other components such as: the data communication is carried out among external equipment, image/data acquisition equipment, a database, external storage, an image/data processing workstation and the like.
A bus comprises hardware, software, or both that couple the components of a computer device to one another. Buses include, but are not limited to, at least one of the following: Data Bus, Address Bus, Control Bus, Expansion Bus, and Local Bus. By way of example and not limitation, a bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) Bus, a Front Side Bus (FSB), a HyperTransport (HT) Interconnect, an Industry Standard Architecture (ISA) Bus, an InfiniBand Interconnect, a Low Pin Count (LPC) Bus, a memory bus, a Micro Channel Architecture (MCA) Bus, a Peripheral Component Interconnect (PCI) Bus, a PCI-Express (PCIe) Bus, a Serial Advanced Technology Attachment (SATA) Bus, a Video Electronics Standards Association (VESA) Local Bus, or a combination of two or more of these, or other suitable buses. A bus may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
Based on the acquired data information, the computer device may execute the mesh reconstruction quality optimization method of the embodiments of the present application, thereby implementing the method described in conjunction with fig. 2.
In addition, in combination with the mesh reconstruction quality optimization method of the foregoing embodiments, the embodiments of the present application may be implemented as a computer-readable storage medium having computer program instructions stored thereon; when executed by a processor, the instructions implement any of the mesh reconstruction quality optimization methods of the above embodiments.
In the description of this specification, reference to "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, such schematic expressions do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in one or more embodiments or examples.
The above-mentioned embodiments express only several implementations of the present invention, and while their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, all of which fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (5)

1. A mesh reconstruction quality optimization method for optimizing images shot by an unmanned aerial vehicle (UAV), the method comprising:
acquiring measurement images and a point cloud from UAV aerial photography, and constructing an initial mesh model; wherein the initial mesh model is a triangular mesh model constructed by Delaunay tetrahedralization with graph cut;
constructing an initial camera pair list, wherein a camera pair is the pair of cameras corresponding to two images;
selecting an optimal and minimal number of preferred camera pairs from the initial camera pair list according to pairing indexes of the camera pairs on the mesh surface of the initial mesh model;
calculating the variation amplitude of each mesh vertex in its normal direction according to the similarity difference on the images between the two cameras of a preferred camera pair;
updating the mesh vertices by a gradient descent method under different image resolutions;
the step of constructing the initial camera pair list comprises:
calculating, for each point in the point cloud, the number of such points visible to the camera corresponding to each image;
for any camera pair, applying condition limits such that the cameras observing the greatest number of points in common with a given camera serve as that camera's initial pairings;
the condition limits are given by [formula images omitted], wherein one term represents the reprojection pixel distance error of a three-dimensional point between the two cameras, and another term represents the angle between the rays from the two camera centers to that point;
finally screening out the initial camera pair set;
the step of selecting an optimal and minimal number of preferred camera pairs from the initial camera pair list according to pairing indexes of the camera pairs on the mesh surface of the initial mesh model specifically comprises:
inputting the initial mesh model and an image sequence;
taking the reference parallax as 50° to define a parallax quality evaluation [formula image omitted], wherein the average parallax is, for each point on the model surface, the parallax angle subtended at the point by the camera pair's centers; a further term denotes the area of an image region, the region being the projection of the surface onto a camera's image, and image pixels are back-projected to points on the surface;
defining a resolution quality evaluation [formula image omitted], wherein one term represents a resolution difference of 25%; another term represents the distance from a point on the model surface to the camera center, normalized by the camera focal length, using the camera focal length and the vector from the camera center to the point; a further term represents the normalized difference of the two cameras' resolutions, together with its average difference;
defining a pixel overlap quality evaluation [formula image omitted], wherein one term denotes the image corresponding to a camera and another denotes the overall area of the image;
defining a camera pair evaluation based on symmetry on the surface [formula image omitted], wherein one term represents the averaged normal angle difference at points on the model surface, another represents the difference of the angles, relative to the point's normal, of the directions from the two camera centers to the point, and a sign term takes the value -1 if the two camera centers lie on the same side of the normal and 1 otherwise;
defining an energy equation [formula image omitted], whose terms respectively represent the energy measuring parallax quality, the energy of the camera pair's similar-resolution quality, the energy of the camera pair's pixel overlap quality, and the energy function of the symmetry quality of the camera pair with respect to the model surface, combined through energy term weights;
taking the camera pair that minimizes the energy equation as a preferred camera pair;
the step of calculating the variation amplitude of a mesh vertex in its normal direction according to the similarity difference on the images between the two cameras of the preferred camera pair specifically comprises:
inputting the initial mesh model and the preferred camera pairs;
computing the optimized vertex for each mesh vertex of the initial mesh model [formula image omitted], wherein the terms are the variation amplitude of the vertex in its normal direction, the inter-camera scale factor, the mesh vertex before optimization, the average depths at which the two cameras respectively observe the vertex, the camera focal length, and the normal direction of the vertex;
calculating the variation amplitude of the vertex in the normal direction [formula image omitted], wherein one term represents the barycentric coordinates, with respect to the vertex, of points within the triangle faces adjacent to the vertex, and another represents the variation amplitude of the points within the triangle faces;
points within a triangle face are represented as continuous points on the initial mesh model surface; the gradient change of a continuous point on the initial mesh model is calculated by [formula image omitted], wherein the terms are the pixel coordinates, the image size, the barycentric coordinate coefficient, the gradient of the change of the image region similarity measure, the image gradient, the Jacobian matrix of the transformation relation from a continuous point discretized to the image to the image point, and the vector from the camera center toward the image point.
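As an illustration only: the energy equation above is a weighted sum of four quality terms, minimized over candidate camera pairs. A minimal sketch, in which the term values and the unit weights are hypothetical stand-ins for the patent's omitted formulas:

```python
def pair_energy(q_parallax, q_resolution, q_overlap, q_symmetry,
                weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four quality energies; lower is better.

    The q_* arguments stand in for the parallax, resolution,
    pixel-overlap, and surface-symmetry evaluations of the claim;
    their exact formulas are given only in the patent's image-form
    equations and are not reproduced here.
    """
    w1, w2, w3, w4 = weights
    return w1 * q_parallax + w2 * q_resolution + w3 * q_overlap + w4 * q_symmetry

def select_preferred_pair(candidates):
    """candidates: list of (pair_id, (q1, q2, q3, q4)).

    Returns the id of the pair minimizing the energy equation,
    i.e. the 'preferred camera pair' of the claim.
    """
    return min(candidates, key=lambda c: pair_energy(*c[1]))[0]

# Hypothetical candidate pairs with made-up quality values:
pairs = [("c0-c1", (0.2, 0.1, 0.3, 0.1)),
         ("c0-c2", (0.5, 0.2, 0.1, 0.4)),
         ("c1-c2", (0.1, 0.1, 0.1, 0.1))]
best = select_preferred_pair(pairs)  # "c1-c2" has the smallest energy sum
```

The camera-pair ids and quality values here are illustrative only; in the method they would come from evaluating the four quality formulas on the mesh surface.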
2. The mesh reconstruction quality optimization method according to claim 1, wherein the step of updating the mesh vertices by a gradient descent method under different image resolutions specifically comprises:
dividing the measurement images and the corresponding cameras into three resolution levels with scaling factors of 0.25, 0.5, and 1.0;
taking the initial mesh as input and performing triangle subdivision at the level with scaling factor 0.25; wherein the subdivision condition is that, for any triangle face, the number of pixels of its projected area on the image exceeds 8;
taking the mesh subdivided at the current level as the input mesh of the next level;
processing the zoom levels in order from small to large, the zoom level being obtained by the calculation formula [formula image omitted];
iteratively calculating new positions of all vertices of the mesh model using all the camera pairs;
when iteratively updating a vertex, if the variation amplitude of the current vertex is greater than or equal to that of the previous update, not updating the current vertex position; if the variation amplitude of the current vertex is smaller than that of the previous update, updating the current vertex position.
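The coarse-to-fine update of claim 2 can be sketched as follows. The `step` function is a hypothetical toy stand-in for the image-similarity gradient step (which the patent defines via its image-form formulas), but the acceptance rule mirrors the claim: a vertex is moved only while its variation amplitude keeps shrinking between iterations.

```python
def refine_mesh(vertices, levels=(0.25, 0.5, 1.0), max_iters=10):
    """Coarse-to-fine vertex refinement with the monotone-decrease
    acceptance rule of claim 2 (triangle subdivision omitted).

    vertices: mutable list of (x, y, z) tuples.
    """
    def step(v, scale):
        # Toy proxy for the similarity-gradient step: pull each
        # coordinate toward the origin, scaled by the level factor.
        return tuple(-0.1 * scale * c for c in v)

    for scale in levels:                      # levels processed small to large
        prev_mag = {i: float("inf") for i in range(len(vertices))}
        for _ in range(max_iters):
            for i, v in enumerate(vertices):
                d = step(v, scale)
                mag = sum(c * c for c in d) ** 0.5
                if mag < prev_mag[i]:         # update only if amplitude shrank
                    vertices[i] = tuple(vc + dc for vc, dc in zip(v, d))
                    prev_mag[i] = mag
    return vertices
```

In the actual method the step would be the per-vertex normal displacement computed from all camera pairs at the current resolution level, and each level's subdivided mesh would feed the next level.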
3. A mesh reconstruction quality optimization system, the system comprising:
a model construction module, configured to acquire measurement images and a point cloud from UAV aerial photography and construct an initial mesh model; wherein the initial mesh model is a triangular mesh model constructed by Delaunay tetrahedralization with graph cut;
a camera pair list construction module, configured to construct an initial camera pair list, wherein a camera pair is the pair of cameras corresponding to two images;
a screening module, configured to select an optimal and minimal number of preferred camera pairs from the initial camera pair list according to pairing indexes of the camera pairs on the mesh surface of the initial mesh model;
a vertex variation amplitude calculation module, configured to calculate the variation amplitude of each mesh vertex in its normal direction according to the similarity difference on the images between the two cameras of a preferred camera pair;
an updating module, configured to update the mesh vertices by a gradient descent method under different image resolutions;
the camera pair list construction module specifically comprises:
a visible point calculation unit, configured to calculate, for each point in the point cloud, the number of such points visible to the camera corresponding to each image;
a condition limiting unit, configured to apply, for any camera pair, condition limits such that the cameras observing the greatest number of points in common with a given camera serve as that camera's initial pairings; the condition limits are given by [formula images omitted], wherein one term represents the reprojection pixel distance error of a three-dimensional point between the two cameras, and another term represents the angle between the rays from the two camera centers to that point;
an initial camera pair set screening unit, configured to finally screen out the initial camera pair set;
the screening module specifically comprises:
a first input unit, configured to input the initial mesh model and an image sequence;
a parallax quality evaluation unit, configured to take the reference parallax as 50° to define a parallax quality evaluation [formula image omitted], wherein the average parallax is, for each point on the model surface, the parallax angle subtended at the point by the camera pair's centers; a further term denotes the area of an image region, the region being the projection of the surface onto a camera's image, and image pixels are back-projected to points on the surface;
a resolution quality evaluation unit, configured to define a resolution quality evaluation [formula image omitted], wherein one term represents a resolution difference of 25%; another term represents the distance from a point on the model surface to the camera center, normalized by the camera focal length, using the camera focal length and the vector from the camera center to the point; a further term represents the normalized difference of the two cameras' resolutions, together with its average difference;
a pixel quality evaluation unit, configured to define a pixel overlap quality evaluation [formula image omitted], wherein one term denotes the image corresponding to a camera and another denotes the overall area of the image;
a symmetry evaluation unit, configured to define a camera pair evaluation based on symmetry on the surface [formula image omitted], wherein one term represents the averaged normal angle difference at points on the model surface, another represents the difference of the angles, relative to the point's normal, of the directions from the two camera centers to the point, and a sign term takes the value -1 if the two camera centers lie on the same side of the normal and 1 otherwise;
an energy equation definition unit, configured to define an energy equation [formula image omitted], whose terms respectively represent the energy measuring parallax quality, the energy of the camera pair's similar-resolution quality, the energy of the camera pair's pixel overlap quality, and the energy function of the symmetry quality of the camera pair with respect to the model surface, combined through energy term weights;
a preferred camera pair evaluation unit, configured to take the camera pair that minimizes the energy equation as a preferred camera pair;
the vertex variation amplitude calculation module is configured to: input the initial mesh model and the preferred camera pairs;
compute the optimized vertex for each mesh vertex of the initial mesh model [formula image omitted], wherein the terms are the variation amplitude of the vertex in its normal direction, the inter-camera scale factor, the mesh vertex before optimization, the average depths at which the two cameras respectively observe the vertex, the camera focal length, and the normal direction of the vertex;
calculate the variation amplitude of the vertex in the normal direction [formula image omitted], wherein one term represents the barycentric coordinates, with respect to the vertex, of points within the triangle faces adjacent to the vertex, and another represents the variation amplitude of the points within the triangle faces;
points within a triangle face are represented as continuous points on the initial mesh model surface; the gradient change of a continuous point on the initial mesh model is calculated by [formula image omitted], wherein the terms are the pixel coordinates, the image size, the barycentric coordinate coefficient, the gradient of the change of the image region similarity measure, the image gradient, the Jacobian matrix of the transformation relation from a continuous point discretized to the image to the image point, and the vector from the camera center toward the image point.
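The displacement the vertex variation amplitude calculation module applies — moving a vertex along its normal by a barycentric-weighted combination of per-point amplitudes — can be sketched schematically. The weights, amplitudes, and scale factor below are placeholders for quantities the patent defines only in its omitted formula images:

```python
def displace_vertex(vertex, normal, scale, bary_weights, point_amplitudes):
    """Schematic vertex update: x' = x + s * (sum_k w_k * d_k) * n.

    bary_weights: barycentric coefficients, with respect to the vertex,
    of points within the adjacent triangle faces.
    point_amplitudes: the in-face variation amplitudes of those points.
    Both, and the scale factor s, come from the patent's image-only
    formulas and are placeholder inputs here.
    """
    d = sum(w * a for w, a in zip(bary_weights, point_amplitudes))
    return tuple(x + scale * d * n for x, n in zip(vertex, normal))

# Hypothetical values: unit +z normal, amplitude contributions summing to 0.2
new_v = displace_vertex((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 1.0,
                        [0.5, 0.3, 0.2], [0.2, 0.2, 0.2])
```

With these inputs the vertex moves by about 0.2 along its normal; in the method, repeated over all vertices and resolution levels, this is the gradient-descent update of the updating module.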
4. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the mesh reconstruction quality optimization method according to any one of claims 1 to 2.
5. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the mesh reconstruction quality optimization method according to any one of claims 1 to 2.
CN202210925727.5A 2022-08-03 2022-08-03 Grid reconstruction quality optimization method, system, computer and readable storage medium Active CN114998551B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210925727.5A CN114998551B (en) 2022-08-03 2022-08-03 Grid reconstruction quality optimization method, system, computer and readable storage medium


Publications (2)

Publication Number Publication Date
CN114998551A CN114998551A (en) 2022-09-02
CN114998551B true CN114998551B (en) 2022-11-18

Family

ID=83022563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210925727.5A Active CN114998551B (en) 2022-08-03 2022-08-03 Grid reconstruction quality optimization method, system, computer and readable storage medium

Country Status (1)

Country Link
CN (1) CN114998551B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007135659A2 (en) * 2006-05-23 2007-11-29 Elbit Systems Electro-Optics Elop Ltd. Clustering - based image registration
CN107291093A (en) * 2017-07-04 2017-10-24 西北工业大学 Unmanned plane Autonomous landing regional selection method under view-based access control model SLAM complex environment
CN110223383A (en) * 2019-06-17 2019-09-10 重庆大学 A kind of plant three-dimensional reconstruction method and system based on depth map repairing
JP2019185070A (en) * 2018-03-30 2019-10-24 東京ガスiネット株式会社 Information processing system and program
CN114708375A (en) * 2022-06-06 2022-07-05 江西博微新技术有限公司 Texture mapping method, system, computer and readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4136611A4 (en) * 2020-05-18 2023-06-07 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image reconstruction
CN113393577B (en) * 2021-05-28 2023-04-07 中铁二院工程集团有限责任公司 Oblique photography terrain reconstruction method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Non-rigid 3D shape reconstruction using a time-of-flight 3D camera; Tong Jing et al.; Journal of Computer-Aided Design & Computer Graphics; 2011-03-15 (No. 03); 3-10 *
Stitching algorithm for wide-baseline parallax images; Zuo Sen et al.; Computer Engineering; 2007-05-20 (No. 10); 181-183 *
Adaptive fast mesh optimization for image-based 3D reconstruction; Zhang Chunsen et al.; Geomatics and Information Science of Wuhan University; 2020-03-05 (No. 03); 98-105 *

Also Published As

Publication number Publication date
CN114998551A (en) 2022-09-02

Similar Documents

Publication Publication Date Title
US8331615B2 (en) Match, expand, and filter technique for multi-view stereopsis
US8711143B2 (en) System and method for interactive image-based modeling of curved surfaces using single-view and multi-view feature curves
JP5778237B2 (en) Backfill points in point cloud
US9256980B2 (en) Interpolating oriented disks in 3D space for constructing high fidelity geometric proxies from point clouds
CN107909640B (en) Face relighting method and device based on deep learning
CN113393577B (en) Oblique photography terrain reconstruction method
CN114119853B (en) Image rendering method, device, equipment and medium
CN113643414B (en) Three-dimensional image generation method and device, electronic equipment and storage medium
CN109816706A (en) A kind of smoothness constraint and triangulation network equal proportion subdivision picture are to dense matching method
CN111627119A (en) Texture mapping method, device, equipment and storage medium
CN111583398B (en) Image display method, device, electronic equipment and computer readable storage medium
CN114202632A (en) Grid linear structure recovery method and device, electronic equipment and storage medium
CN115631317A (en) Tunnel lining ortho-image generation method and device, storage medium and terminal
CN114998551B (en) Grid reconstruction quality optimization method, system, computer and readable storage medium
CN106600691B (en) Fusion correction method and system of multi-channel two-dimensional video images in three-dimensional geographic space
CN115294277B (en) Three-dimensional reconstruction method and device of object, electronic equipment and storage medium
CN115994993A (en) Stylized face three-dimensional shape modeling method, system, equipment and storage medium
CN115421509B (en) Unmanned aerial vehicle flight shooting planning method, unmanned aerial vehicle flight shooting planning device and storage medium
WO2019144281A1 (en) Surface pattern determining method and device
CN115937395A (en) Electrical equipment model rendering method and device, computer equipment and storage medium
JP2020187626A (en) Image processing device, image processing method, and program
CN116295031B (en) Sag measurement method, sag measurement device, computer equipment and storage medium
TWI782806B (en) Point cloud rendering method
Wu et al. An accurate novel circular hole inspection method for sheet metal parts using edge-guided robust multi-view stereo
CN116863085B (en) Three-dimensional reconstruction system, three-dimensional reconstruction method, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant