CN115457022A - Three-dimensional deformation detection method based on real-scene three-dimensional model front-view image - Google Patents

Three-dimensional deformation detection method based on real-scene three-dimensional model front-view image

Info

Publication number
CN115457022A
CN115457022A (application CN202211211217.8A)
Authority
CN
China
Prior art keywords
image
dimensional
points
point
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211211217.8A
Other languages
Chinese (zh)
Other versions
CN115457022B (en)
Inventor
杨爱明
马能武
陶鹏杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changjiang Spatial Information Technology Engineering Co ltd
Wuhan University WHU
Original Assignee
Changjiang Spatial Information Technology Engineering Co ltd
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changjiang Spatial Information Technology Engineering Co ltd, Wuhan University WHU filed Critical Changjiang Spatial Information Technology Engineering Co ltd
Priority to CN202211211217.8A priority Critical patent/CN115457022B/en
Publication of CN115457022A publication Critical patent/CN115457022A/en
Application granted granted Critical
Publication of CN115457022B publication Critical patent/CN115457022B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
            • G06T 7/70 Determining position or orientation of objects or cameras
              • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
                • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
          • G06T 3/00 Geometric image transformations in the plane of the image
            • G06T 3/02 Affine transformations
            • G06T 3/06 Topological mapping of higher dimensional structures onto lower dimensional surfaces
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30181 Earth observation
              • G06T 2207/30184 Infrastructure
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00 Arrangements for image or video recognition or understanding
            • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
              • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
                • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
                  • G06V 10/757 Matching configurations of points or features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a three-dimensional deformation detection method based on front-view images of a live-action three-dimensional model. The method takes live-action three-dimensional models as the basic data source: the models of different time phases are reduced in dimension to front-view images, image matching under an image pyramid strategy yields homonymous (same-name) points, the matched points are raised in dimension back into the three-dimensional models, and a three-dimensional deformation vector field is obtained by interpolation. The specific method comprises the following steps. Step one: generate the two-stage front-view images. Step two: acquire the homonymous feature points of the two-stage front-view images through image matching. Step three: convert the two-dimensional homonymous points into the object space coordinate system. Step four: generate the three-dimensional change vector field. The invention overcomes the defects of the prior art, which cannot obtain reliable deformation information and suffers from low detection accuracy, low efficiency and high cost; the proposed method acquires reliable deformation information with high accuracy, high efficiency and low cost.

Description

Three-dimensional deformation detection method based on real-scene three-dimensional model front-view image
Technical Field
The invention relates to the field of photogrammetry and remote sensing, in particular to a three-dimensional deformation detection method based on front-view images of a live-action three-dimensional model.
Background
With changes in external conditions such as time, temperature and pressure, the positions and shapes of terrains and large engineering structures (dams, bridges) undergo gradual change. Three-dimensional deformation detection of such structures is routine work in engineering surveying and is vital to ensuring the safety of the terrain and the structures.
in a traditional method, sensors (such as a stay wire type displacement meter, an anchor cable meter and the like) are often erected in a possibly-changed area to measure the variable quantity such as displacement, so that three-dimensional deformation detection is realized. The method has high precision, but because the monitoring area is large, a large number of sensors are often required to be installed, and the production and field installation costs are high; in addition, since the accurate position of the change area is not determined in advance, the situation of 'measuring without changing, changing without measuring' is easy to occur, namely, the position where the sensor is installed has no deformation, and the position where the sensor is not installed has deformation but cannot be detected. The deformation monitoring method based on remote sensing image data (such as satellite images, SAR (synthetic Aperture Radar) data, inSAR data and the like) realizes deformation detection by comparing the change of three-dimensional terrains at different times of time sequence data recovery. However, the current method usually only uses elevation and LiDAR data to perform simple difference calculation, and does not consider the accurate corresponding relation between two periods of terrain or ground objects based on homonymy, so that the detected difference value is difficult to accurately reflect the real deformation quantity;
With the progress of sensor technology and photogrammetric processing, fine three-dimensional live-action models of terrain and ground objects can now be reconstructed from unmanned aerial vehicle (UAV) images. A live-action three-dimensional model shows the texture and form of ground objects more comprehensively and makes higher-precision deformation detection possible. Existing deformation detection methods fall into two types: direct and indirect three-dimensional deformation detection. Direct methods extract deformation information directly from the three-dimensional models, for example by computing the elevation difference of the two-stage models or the three-dimensional coordinate difference of surface points along the normal direction; because no strict correspondence between the two stages of live-action data is established, such methods cannot acquire reliable deformation information and their detection accuracy is low. Indirect methods first project the live-action model onto the horizontal plane to generate orthophoto maps, match homonymous image points on the two-stage orthophotos to establish the ground-object correspondence, compute the three-dimensional coordinates of each homonymous point, and then compute the coordinate differences. When the ground scene is steep (such as a steep slope), the orthophoto is strongly distorted, so detection accuracy is low; in the extreme case of a vertical slope, the orthographic projection degenerates into a line and the method fails.
therefore, it is necessary to develop a three-dimensional deformation detection method that can obtain reliable deformation information, and has high accuracy, high efficiency, and low cost.
Disclosure of Invention
The invention aims to provide a three-dimensional deformation detection method based on front-view images of a live-action three-dimensional model. The method reduces the live-action three-dimensional model in dimension to generate two-dimensional front-view images, performs homonymous-point matching with quality control, and combines three-dimensional and two-dimensional image information: the three-dimensional live-action data are reduced to two-dimensional images, an accurate correspondence is established by image matching, and the deformation is computed on that basis to obtain the three-dimensional change vector field of the target. The method obtains reliable deformation information with high accuracy, high efficiency and low cost. It solves the problem that, with UAV-based three-dimensional reconstruction, deformation detection performed directly on live-action data has many defects (for example, no strict correspondence between the two stages of live-action data is established) and cannot obtain reliable deformation information.
To achieve this purpose, the technical scheme of the invention is as follows. The three-dimensional deformation detection method based on front-view images of a live-action three-dimensional model takes live-action three-dimensional models as the basic data source, reduces the models of different time phases in dimension to front-view images, performs image matching under an image pyramid strategy to obtain homonymous points, raises the matched points in dimension back into the three-dimensional models, and interpolates to obtain a three-dimensional deformation vector field. The live-action scenes include deformation-monitoring areas such as landslides and dangerous rock masses.
the specific three-dimensional deformation detection method comprises the following steps,
step one: generating the two-stage front-view images (i.e., dimension-reduction processing);
according to the input live-action three-dimensional models of the front and rear time phases, determining the normal vector of the spatial projection plane by plane fitting,

$$\vec{n} = (a, b, c)$$

wherein a, b and c are the components of the normal vector of the spatial plane; then projecting the live-action three-dimensional model onto the computed spatial projection plane to generate the corresponding front-view image;
step two: acquiring feature points with the same name of two-stage front-view images through image matching;
using an image pyramid strategy, performing image matching on the front- and rear-time-phase front-view images obtained in step one to obtain the two-dimensional homonymous-point coordinates of each homonymous ground object on the two images,

$$(x_1, y_1) \leftrightarrow (x_2, y_2)$$

this pair denotes two homonymous image points whose coordinates are $(x_1, y_1)$ and $(x_2, y_2)$ respectively, where $x$ is the abscissa and $y$ the ordinate, both in pixels;
step three: converting the two-dimensional homonymous points into the object space coordinate system (i.e., dimension-raising processing);
through the parameters of the front-view images obtained in step one, converting the homonymous points of the front and rear time-phase front-view images back into the respective three-dimensional models to obtain the three-dimensional homonymous-point coordinates,

$$(X_1, Y_1, Z_1) \leftrightarrow (X_2, Y_2, Z_2)$$

this pair denotes two object-space homonymous points with coordinates $(X_1, Y_1, Z_1)$ and $(X_2, Y_2, Z_2)$, in metres, where X is the plane abscissa, Y the plane ordinate and Z the elevation;
step four: generating the three-dimensional change vector field;
for the three-dimensional homonymous points obtained in step three, calculating the change vector of each point,

$$(\Delta X, \Delta Y, \Delta Z) = (X_2 - X_1, \; Y_2 - Y_1, \; Z_2 - Z_1)$$

and interpolating the results to obtain the three-dimensional change vector field of the target;
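Step four amounts to a per-point subtraction followed by scattered-data interpolation. The sketch below illustrates it in Python, assuming the matched object-space points arrive as N x 3 arrays; the use of scipy's griddata, the linear method and the grid step are illustrative assumptions, not prescribed by the patent.

```python
import numpy as np
from scipy.interpolate import griddata

def change_vector_field(pts1, pts2, grid_step=1.0):
    """pts1, pts2: N x 3 arrays of matched object-space points (metres)."""
    vec = pts2 - pts1                          # change vector per homonymous point
    xs = np.arange(pts1[:, 0].min(), pts1[:, 0].max(), grid_step)
    ys = np.arange(pts1[:, 1].min(), pts1[:, 1].max(), grid_step)
    gx, gy = np.meshgrid(xs, ys)
    comps = [griddata(pts1[:, :2], vec[:, k], (gx, gy), method='linear')
             for k in range(3)]                # interpolate dX, dY, dZ separately
    return gx, gy, np.stack(comps, axis=-1)    # H x W x 3 vector field
```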
the three-dimensional model is reduced into the two-dimensional front-view image for image matching, the strict corresponding relation of the same ground point in the two-stage data is established, and then the dimension is increased to the three-dimensional model, so that the three-dimensional homonymy point is determined, and more accurate deformation information can be extracted.
In the above technical scheme, in step one, the two-stage three-dimensional models must use the same spatial projection plane to generate the front-view images, so that the coordinate references are unified. In addition, compared with an orthoimage whose projection plane is the horizontal plane, the front-view projection plane is the optimal spatial plane facing the ground scene (such as the tangent plane of a landslide surface or the vertical plane of a wall), so the projection distortion is small and the ground information is better represented.
In the above technical solution, in step one, when the three-dimensional scene is a landslide area, the specific method for generating the two-stage front-view images is as follows:
firstly, according to the three-dimensional model of the front time phase, calculating a suitable plane of the landslide by plane fitting (i.e., the spatial plane that best fits the landslide) and determining the normal vector of the spatial projection plane,

$$\vec{n} = (a, b, c)$$

then calculating the rotation matrix $R$ from the object space coordinate system $O\text{-}XYZ$ to the projection-plane coordinate system $O'\text{-}X'Y'Z'$; next, calculating the coordinates of each vertex of the bounding box of the three-dimensional model and determining the starting-point coordinates $(X_s, Y_s, Z_s)$; then any spatial point of the three-dimensional model with coordinates $(X, Y, Z)$ in $O\text{-}XYZ$ and coordinates $(X', Y', Z')$ in $O'\text{-}X'Y'Z'$ satisfies the conversion relationship:

$$\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = R \left( \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} - \begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix} \right)$$

and finally, converting the model into this local coordinate system and gridding it to obtain the front-time-phase front-view image; the rear-time-phase model is processed with the same steps to obtain the rear-time-phase front-view image, yielding the front and rear time-phase front-view images.
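To make step one concrete, here is a minimal Python sketch that fits the projection plane and applies the conversion relationship above. The PCA-based fit, the orthonormal-basis construction and the choice of a bounding-box corner as the starting point are assumptions of the sketch; the patent only requires a fitted plane, a rotation matrix and a bounding-box-derived starting point.

```python
import numpy as np

def fit_projection_plane(points):
    """Fit the best spatial plane by PCA: the normal (a, b, c) is the
    direction of least variance of the model vertices."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    z_axis = vt[-1]                             # unit normal (a, b, c)
    # complete an orthonormal basis so that Z' points along the normal
    tmp = np.array([0.0, 0.0, 1.0])
    if abs(z_axis @ tmp) > 0.99:                # avoid a degenerate cross product
        tmp = np.array([0.0, 1.0, 0.0])
    x_axis = np.cross(tmp, z_axis)
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)
    R = np.vstack([x_axis, y_axis, z_axis])     # object space -> plane system
    return z_axis, R

def project_to_plane(points, R, start):
    """Conversion relationship: (X',Y',Z')^T = R ((X,Y,Z)^T - (Xs,Ys,Zs)^T)."""
    return (points - start) @ R.T

# usage with stand-in vertices; gridding X', Y' (keeping Z' as depth) and
# sampling the model texture then yields the front-view image and depth map
verts = np.random.rand(1000, 3) * 100.0
normal, R = fit_projection_plane(verts)
start = verts.min(axis=0)                       # a bounding-box corner as start
local = project_to_plane(verts, R, start)
```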
In the above technical scheme, in step two, directly matching the images at the original scale would require searching for homonymous image points over a large image range, which is inefficient and prone to mismatches. The method therefore adopts an image pyramid strategy: image matching proceeds from the smallest scale to the original scale, and the matching result at each scale corrects the predicted points of the next layer, shrinking the search range and continuously improving the accuracy;
for the topmost pyramid layer, the resolution is low enough that the image-coordinate difference between the points predicted from the geographic coordinates and the actual homonymous points is small (at most a few tens of pixels), so matching is easy. An affine transformation between the geographic coordinates of the matched homonymous points is then established as the initial correspondence model, explicitly building the strict correspondence of the two stages of live-action data needed to acquire reliable deformation information.
In the above technical solution, in the second step, as shown in fig. 2, the specific image matching method includes:
step 21: constructing an image pyramid according to the front-time phase orthographic images and the rear-time phase orthographic images;
step 22: determining an affine transformation initial model;
step 23: set the current level to the top layer of the image pyramid;
step 24: for the front-time-phase image I(x, y), firstly, extracting the feature points of the front-time-phase image through the Harris operator;
step 241: calculating the gradients $I_x$ and $I_y$ of the image (i.e., the front-time-phase image) I(x, y) in the x and y directions:

$$I_x = \frac{\partial I}{\partial x} = I \otimes (-1, 0, 1) \tag{1}$$

$$I_y = \frac{\partial I}{\partial y} = I \otimes (-1, 0, 1)^{T} \tag{2}$$

in formula (1), $\otimes$ represents convolution and $I_x$ represents the image gradient in the x direction, a pure number with no unit; in formula (2), $\otimes$ represents convolution and $I_y$ represents the image gradient in the y direction, a pure number with no unit;
step 242: calculating the products of the gradients in the x and y directions, $I_x^2$, $I_y^2$ and $I_{xy}$:

$$I_x^2 = I_x \cdot I_x \tag{3}$$

$$I_y^2 = I_y \cdot I_y \tag{4}$$

$$I_{xy} = I_x \cdot I_y \tag{5}$$

step 243: performing Gaussian weighting on the results of the previous step with a Gaussian function w to generate the gradient covariance matrix M of each pixel:

$$M = w \otimes \begin{bmatrix} I_x^2 & I_{xy} \\ I_{xy} & I_y^2 \end{bmatrix} = \begin{bmatrix} A & C \\ C & B \end{bmatrix} \tag{6}$$

in formula (6): each I in the matrix is a product of corresponding gradient values, used to construct the gradient covariance matrix M; w represents a Gaussian kernel function with σ = 1 and $\otimes$ represents convolution; all values in the formula are numerical, with no unit; A, B and C are merely letters labelling the matrix entries and have no specific meaning;
step 244: calculating the corner response value of each pixel:

$$R = \det(M) - k \,(\mathrm{Trace}(M))^{2} \tag{7}$$

in formula (7): M is the covariance matrix; equation (7) is an approximate calculation formula, where Det(M) represents the determinant of the matrix M, Trace(M) represents the trace of the matrix, and k represents an empirical constant usually taken as 0.04 to 0.06; the whole process is still a numerical calculation, with no unit;
step 245: setting a threshold to find out candidate points and carrying out non-maximum suppression, the local maximum points being the final feature points;
step 25: for a feature point $(x_1, y_1)$ in the front-time-phase front-view image, calculating the predicted point $(x_2, y_2)$ in the rear-time-phase front-view image through the affine transformation model:

$$x_2 = a x_1 + b y_1 + c \tag{8}$$

$$y_2 = d x_1 + e y_1 + f \tag{9}$$

in formulas (8) and (9): a, b, c, d, e and f are the coefficients of the affine transformation model, with no specific meaning or unit; $x_1$ and $y_1$ are the feature-point coordinates in the front-time-phase front-view image, with no unit; $x_2$ and $y_2$ represent the predicted feature-point coordinates in the rear-time-phase front-view image, with no unit;
it should be noted that in the initial model (i.e., the affine transformation model before its first fitting), the parameters a = 1 and e = 1, and all other parameters are 0;
step 26: for each feature point $(x_1, y_1)$, constructing a search window of radius r centred on the predicted point $(x_2, y_2)$ of step 25; adopting the correlation-coefficient method with m × m as the fixed template size, calculating the correlation coefficient of each pixel in the window, and taking the pixel point with the largest correlation coefficient, $(x_2', y_2')$, as the corresponding homonymous point:

$$\rho = \frac{\displaystyle\sum_{i=1}^{m}\sum_{j=1}^{n}\left(g_{ij}-\bar{g}\right)\left(g'_{ij}-\bar{g}'\right)}{\sqrt{\displaystyle\sum_{i=1}^{m}\sum_{j=1}^{n}\left(g_{ij}-\bar{g}\right)^{2}\,\displaystyle\sum_{i=1}^{m}\sum_{j=1}^{n}\left(g'_{ij}-\bar{g}'\right)^{2}}} \tag{10}$$

in formula (10): m and n are the numbers of rows and columns (the length and width) of the template window, where n = m; i and j are the count variables of the summation symbols; g represents the grey-level image of the front time phase and g' represents that of the rear time phase, the subscripts representing the grey values at the corresponding position points and the bar denoting the template mean; all parameters are unitless;
step 27: for the obtained homonymous-point coordinates, removing gross-error points through the probability distribution of the standard normal distribution, so that mismatched points with extreme values do not remain;
step 28: if the current image is at the original scale (1:1), the matching is finished; otherwise, for all homonymous-point coordinates, the front-time-phase front-view coordinates $(x_1, y_1)$ and the rear-time-phase front-view coordinates $(x_2', y_2')$, fitting the affine transformation model by the least-squares method, updating the affine transformation model parameters of formulas (8) and (9), reducing the image pyramid level (as shown in fig. 3), and recalculating from step 24; the invention adopts the image pyramid strategy and the affine transformation model to determine higher-precision predicted points, greatly reducing the search-window range in image matching and continuously optimising the model layer by layer, which improves both the image matching efficiency and the accuracy of the homonymous points; a Python sketch of steps 21-28 is given below.
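The following minimal sketch illustrates steps 21-28. The float grayscale arrays, the correlation threshold, the helper names (harris_points, match_point, fit_affine, pyramid_match) and the factor-2 scale step between pyramid levels are assumptions of the sketch; the 33 × 33 template and search radius 8 from the embodiment are kept as defaults.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter, maximum_filter

def harris_points(img, k=0.04, thresh_ratio=0.01):
    """Steps 241-245: Harris feature points of a float grayscale image."""
    kern = np.array([[-1.0, 0.0, 1.0]])
    Ix = convolve(img, kern)                    # eq. (1): gradient in x
    Iy = convolve(img, kern.T)                  # eq. (2): gradient in y
    Ix2 = gaussian_filter(Ix * Ix, sigma=1.0)   # eqs. (3)-(6): gradient
    Iy2 = gaussian_filter(Iy * Iy, sigma=1.0)   # products with sigma = 1
    Ixy = gaussian_filter(Ix * Iy, sigma=1.0)   # Gaussian weighting
    R = (Ix2 * Iy2 - Ixy * Ixy) - k * (Ix2 + Iy2) ** 2    # eq. (7)
    peaks = (R == maximum_filter(R, size=3)) & (R > thresh_ratio * R.max())
    ys, xs = np.nonzero(peaks)                  # step 245: threshold + NMS
    return np.column_stack([xs, ys])

def ncc(a, b):
    """Eq. (10): normalised correlation coefficient of equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else -1.0

def match_point(img1, img2, pt, affine, m=33, r=8):
    """Steps 25-26: affine prediction, then a radius-r correlation search."""
    a, b, c, d, e, f = affine
    x1, y1 = int(pt[0]), int(pt[1])
    x2 = a * x1 + b * y1 + c                    # eq. (8)
    y2 = d * x1 + e * y1 + f                    # eq. (9)
    h = m // 2
    tpl = img1[y1 - h:y1 + h + 1, x1 - h:x1 + h + 1]   # fixed m x m template
    if tpl.shape != (m, m):
        return None, -1.0                       # feature too close to border
    best, best_xy = -2.0, None
    for yy in range(int(y2) - r, int(y2) + r + 1):
        for xx in range(int(x2) - r, int(x2) + r + 1):
            win = img2[yy - h:yy + h + 1, xx - h:xx + h + 1]
            if win.shape == tpl.shape:
                s = ncc(tpl, win)
                if s > best:
                    best, best_xy = s, (xx, yy)
    return best_xy, best

def fit_affine(src, dst):
    """Step 28: least-squares fit of eqs. (8)-(9) to matched point pairs."""
    A = np.hstack([src, np.ones((len(src), 1))])        # rows [x1, y1, 1]
    (a, b, c), *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    (d, e, f), *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return a, b, c, d, e, f

def pyramid_match(pyr1, pyr2, min_score=0.8):
    """Steps 21-28: coarse-to-fine matching; pyr[0] is the original scale."""
    affine = (1.0, 0.0, 0.0, 0.0, 1.0, 0.0)     # identity initial model
    src = dst = None
    for lvl in range(len(pyr1) - 1, -1, -1):    # coarsest level first
        img1, img2 = pyr1[lvl], pyr2[lvl]
        s_pts, d_pts = [], []
        for pt in harris_points(img1):          # features re-extracted per level
            xy, score = match_point(img1, img2, pt, affine)
            if xy is not None and score >= min_score:
                s_pts.append([pt[0], pt[1]])
                d_pts.append([xy[0], xy[1]])
        src, dst = np.array(s_pts, float), np.array(d_pts, float)
        # (the gross-error removal of step 27 would be applied to src/dst here)
        if lvl > 0:                             # predict on the next level,
            affine = fit_affine(src * 2.0, dst * 2.0)   # which is 2x finer
    return src, dst
```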
In the above technical solution, in step 27, the gross-error removal comprises the following steps:
step 271: firstly calculating, for each homonymous point, the coordinate difference dx, namely subtracting the corresponding front and rear coordinates of the homonymous point:

$$dx_i = x'_{2,i} - x_{1,i} \tag{11}$$

step 272: centring the obtained results by subtracting the median value:

$$\widetilde{dx}_i = dx_i - \mathrm{median}(dx) \tag{12}$$

step 273: according to the probability distribution of the standard normal distribution, the probability of falling within $(\mu - \sigma, \mu + \sigma)$ is 68.3%:

$$P(\mu - \sigma < \widetilde{dx} < \mu + \sigma) = 68.3\% \tag{13}$$

therefore, the results of step 272 are converted into absolute values and arranged in ascending order, the probability is accumulated from the first point, and the homonymous points whose accumulated probability exceeds 68.3% are removed;
step 274: for the result obtained in step 273, performing the same elimination steps on the y coordinates to obtain the homonymous-point coordinates within the error range. According to the probability-distribution characteristic of the standard normal distribution, the gross-error points are removed by centring the homonymous-point coordinates, which further improves the reliability of the homonymous points while increasing the tolerance to image points of ground objects that have changed; the method therefore suits homonymous-point matching of images with locally moved, or even widely moved, ground objects, improving the usability of the deformation detection method. This solves the problem that existing gross-error elimination methods are only suitable for matching images of unchanged ground objects and very easily judge the correct homonymous image points of changed ground objects as gross errors.
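A compact sketch of steps 271-274, assuming the matched pixel coordinates are given as two N × 2 arrays; the sequential x-then-y filtering follows the steps above, while the function name and array layout are illustrative.

```python
import numpy as np

def reject_gross_errors(src, dst):
    """Steps 271-274: per axis, centre the coordinate differences on the
    median (eqs. 11-12) and keep only the first 68.3% of points by absolute
    difference, i.e. those within one sigma of a standard normal (eq. 13)."""
    for axis in (0, 1):                        # x first (271-273), then y (274)
        d = dst[:, axis] - src[:, axis]        # eq. (11)
        d = d - np.median(d)                   # eq. (12)
        order = np.argsort(np.abs(d))          # ascending absolute difference
        cutoff = int(round(0.683 * len(d)))    # cumulative probability 68.3%
        src, dst = src[order[:cutoff]], dst[order[:cutoff]]
    return src, dst
```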
In the above technical solution, in step three, the homonymous points are converted into the three-dimensional coordinate system by the following specific method:
for the feature points obtained in step two, using the depth data of the previously produced front-view images (for a given time-phase image, the front-view image obtained after constructing the three-dimensional model can be generated together with its corresponding depth data), the starting-point coordinates $(X_s, Y_s, Z_s)$ determined in step one, and the rotation matrix calculated from the normal vector of the projection plane, the coordinates of the homonymous points (i.e., the points $(x_1, y_1)$ and $(x_2', y_2')$) are converted into the object space coordinate system:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = R^{T} \begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} + \begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix} \tag{14}$$

in formula (14): X, Y and Z are the coordinates of the ground-object point in the geocentric rectangular space coordinate system, in metres;
R is the rotation matrix between the coordinate system used when constructing the front-view image and the geocentric rectangular space coordinate system, a unitless numerical matrix, and T represents transposition;
X', Y' and Z' represent the object coordinates of the ground-object point in the coordinate system used when constructing the front-view image, in metres;
$X_s$, $Y_s$ and $Z_s$ are the starting-point coordinates of the coordinate system used when constructing the front-view image, i.e., the coordinates of its origin in the geocentric rectangular space coordinate system, in metres.
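The dimension raising of formula (14) is one matrix operation per point. The sketch below assumes the front-view image was gridded at a known ground sample distance (gsd, metres per pixel) and that the depth map stores Z' for every pixel; both the data layout and the function name are assumptions, not specified by the patent.

```python
import numpy as np

def to_object_space(px, py, depth, R, start, gsd):
    """Formula (14): lift the front-view pixel (px, py) into object space.
    R and start = (Xs, Ys, Zs) come from step one; depth was generated
    together with the front-view image."""
    local = np.array([px * gsd, py * gsd, depth[py, px]])   # (X', Y', Z')
    return R.T @ local + np.asarray(start)                  # (X, Y, Z)
```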
Compared with the prior art, the invention has the following advantages:
(1) The three-dimensional model is reduced to two-dimensional front-view images for image matching, establishing the strict correspondence of the same ground point across the two stages of data; the matches are then raised back into the three-dimensional model to determine the three-dimensional homonymous points, so more accurate deformation information can be extracted. By building the two-dimensional front-view image of the target from the three-dimensional model, the method captures the information of complex targets such as slopes, cliffs and bridges to a greater extent, which is of great significance for deformation detection research. Moreover, compared with deformation detection based on orthoimages, the front-view image avoids the loss of detection accuracy caused by the orthographic compression of slope scenes (fig. 4 contrasts the orthoimage and the front-view image of a high, nearly 90-degree steep slope: the texture of the steep slope can hardly be expressed on the orthoimage, while the slope information on the front-view image is rich);
(2) The image pyramid strategy and the affine transformation model determine higher-precision predicted points, greatly shrinking the search-window range in image matching; continuously optimising the model layer by layer improves both the matching efficiency and the accuracy of the homonymous points. Meanwhile, by repeatedly constructing affine constraints between corresponding homonymous image points at the same pyramid level, gross-error points among the matched homonymous points are eliminated, improving the reliability of deformation detection;
(3) A gross-error detection method for image matching of changing areas, based on the probability-distribution characteristic of the standard normal distribution, is proposed. It increases the tolerance to homonymous image points of ground-object points that have moved, solves the problem that existing gross-error removal methods only suit matching of images of unchanged ground objects and easily judge the correct homonymous image points of changed ground objects as gross errors, and improves the usability of the deformation detection method.
Drawings
FIG. 1 is a general flow diagram of the present invention;
FIG. 2 is a flowchart of image matching according to the present invention;
FIG. 3 is a schematic diagram of an image pyramid in image matching according to the present invention;
FIG. 4 is a comparison of the application of an orthophoto map and a front-view image;
FIG. 5 is a three-dimensional model diagram of a landslide in accordance with an embodiment of the present invention;
FIG. 6 shows the two-stage front-view images of a certain landslide in an embodiment of the invention;
FIG. 7 illustrates the image matching result of the two-stage front-view images of a certain landslide in an embodiment of the invention;
FIG. 8 is a result diagram of the superposition of two-phase three-dimensional models of a certain landslide according to the geographic location in the embodiment of the present invention;
fig. 9 is an exemplary diagram of a three-dimensional change vector field for a certain landslide in an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described in detail below with reference to the accompanying drawings; they are merely exemplary and are not intended to limit the invention. The advantages of the invention will be clear and readily understood from the description.
The method takes the live-action three-dimensional model of the monitored scene as the basic data source. Live-action models of different time phases are reduced in dimension to two-dimensional front-view images (building the front-view image of the target from the three-dimensional model captures the information of complex targets such as slopes, cliffs and bridges to a greater extent). Image matching under an image pyramid strategy yields homonymous feature points, and the relative geometric relationship on the front-view images constrains the matches so that wrong matching points are eliminated. The correct homonymous feature points are then raised back into the three-dimensional models, the three-dimensional change of each homonymous feature point is calculated, and finally interpolation yields the three-dimensional deformation vector field of the monitored scene. The method obtains reliable deformation information with high detection accuracy, high efficiency and low cost, overcoming the defects of the prior art, which cannot obtain reliable deformation information and suffers from low detection accuracy and high cost. By combining geographic coordinates with predicted-point constraints from the transformation model, the image pyramid strategy repeatedly constrains the homonymous-point matching and quality control on the front-view images; the homonymous points are raised into the three-dimensional models, and the change vector field is constructed according to the true object coordinates.
Example: the invention is described in detail by taking the three-dimensional deformation test of a certain plateau landslide as the embodiment; this has guiding value for applying the three-dimensional deformation test of the invention in other geographic environments.
This embodiment takes a large plateau landslide as an example; the landslide is located about 20 kilometres upstream, to the northwest, of a hydropower station. Two stages of close-range photography were carried out for the landslide, on 29 March 2021 and 25 May 2021, obtaining 3,500 images in total; the two-stage live-action three-dimensional models reconstructed from each stage serve as the input data of this embodiment, as shown in fig. 5;
the general steps of deformation detection in this embodiment are shown in fig. 1 (in fig. 1, 2D represents two dimensions; and 3D represents three dimensions), and the specific steps are as follows:
Step 1: generate the two-stage front-view images (dimension reduction). According to the input three-dimensional models of the front and rear time phases, determine the normal vector $\vec{n} = (a, b, c)$ of the spatial projection plane by plane fitting and generate the corresponding front-view images.
For the landslide area, first select a suitable plane of the landslide from the front-time-phase three-dimensional model for plane fitting and determine the normal vector $\vec{n} = (a, b, c)$ of the spatial projection plane; calculate the rotation matrix $R$ from the object space coordinate system $O\text{-}XYZ$ to the projection-plane coordinate system $O'\text{-}X'Y'Z'$; count the vertices of the bounding box of the three-dimensional model and determine the starting-point coordinates $(X_s, Y_s, Z_s)$. Then any spatial point of the three-dimensional model with coordinates $(X, Y, Z)$ in $O\text{-}XYZ$ and coordinates $(X', Y', Z')$ in $O'\text{-}X'Y'Z'$ satisfies the conversion relationship:

$$\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = R \left( \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} - \begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix} \right)$$

The front-view image is obtained by converting the model into this local coordinate system and then gridding; the rear-time-phase model is processed with the same steps, and the front and rear time-phase front-view images are obtained as shown in fig. 6;
Fig. 4(a) shows a live-action photograph of the arrow-through-hole scene; fig. 4(b) shows the arrow hole (inside the white circle) on the orthographic image; fig. 4(c) is the front-view image of the arrow hole. As seen from figs. 4(b) and 4(c), the slope scene is compressed on the orthographic image, while the front-view image captures the information of complex targets such as slopes, cliffs and bridges to a greater extent: the slope scene is not compressed and is essentially consistent with the live-action photograph (fig. 4(a)). The front-view image used in the invention therefore allows more accurate deformation information to be extracted;
Step 2: match the two-stage front-view images. Match the front and rear time-phase front-view images obtained in step 1 to obtain the corresponding two-dimensional homonymous-point coordinates $(x_1, y_1) \leftrightarrow (x_2, y_2)$.
For this plateau landslide area, the image-point displacement of the deformation points between the two-stage front-view images is about 60 pixels. The image resolution is reduced by building an image pyramid as shown in fig. 3, the topmost layer being at the coarsest scale.
In this case, on the topmost image the deformation amplitude is small and the homonymous-point coordinates of the front and rear time phases are very close, so the geographic coordinates of the local coordinate system are used directly as the predicted points; that is, while features are extracted, the predicted points are the 'homonymous points' at the same geographic coordinates. The initial affine transformation model is therefore the identity, and subsequent predicted points are calculated by computing the homonymous-point coordinates of each layer and then re-fitting the affine transformation model; for example, at the next finer scale, the model fitted on the previous layer predicts the points. However, since the feature points extracted from images at different scales differ, the feature points must be continuously re-extracted during the recursive image matching;
the following is the process flow starting from the topmost image:
(1) For the front-time-phase image I(x, y), first extract its feature points through the Harris operator:
① calculate the gradients $I_x$ and $I_y$ of the image I(x, y) in the x and y directions:

$$I_x = \frac{\partial I}{\partial x} = I \otimes (-1, 0, 1) \tag{1}$$

$$I_y = \frac{\partial I}{\partial y} = I \otimes (-1, 0, 1)^{T} \tag{2}$$

② calculate the products of the gradients in the two directions, $I_x^2$, $I_y^2$ and $I_{xy}$:

$$I_x^2 = I_x \cdot I_x \tag{3}$$

$$I_y^2 = I_y \cdot I_y \tag{4}$$

$$I_{xy} = I_x \cdot I_y \tag{5}$$

③ perform Gaussian weighting on the results with a Gaussian function (σ = 1) to generate the gradient covariance matrix M of each pixel:

$$M = w \otimes \begin{bmatrix} I_x^2 & I_{xy} \\ I_{xy} & I_y^2 \end{bmatrix} = \begin{bmatrix} A & C \\ C & B \end{bmatrix} \tag{6}$$

④ calculate the corner response of each pixel:

$$R = \det(M) - k \,(\mathrm{Trace}(M))^{2} \tag{7}$$

⑤ set a threshold to find candidate points and carry out non-maximum suppression, the local maximum points being the final feature points;
(2) For a feature point $(x_1, y_1)$ in the front-time-phase front-view image, calculate the predicted point $(x_2, y_2)$ in the rear-time-phase front-view image through the affine transformation model:

$$x_2 = a x_1 + b y_1 + c \tag{8}$$

$$y_2 = d x_1 + e y_1 + f \tag{9}$$

It should be noted that in the initial model the parameters are a = 1 and e = 1, and all other parameters are 0;
(3) For each feature point $(x_1, y_1)$, construct a 17 × 17 search window (radius 8) centred on the predicted point $(x_2, y_2)$ of process (2); using the correlation-coefficient method with a fixed 33 × 33 template, calculate the correlation coefficient of each pixel in the window according to formula (10), and take the pixel point with the largest correlation coefficient as the corresponding homonymous-point coordinates $(x_2', y_2')$;
(4) For the obtained homonymous-point coordinates, remove gross-error points through the probability distribution of the standard normal distribution, so that mismatched points with extreme values do not remain:
① first calculate, for each homonymous point, the coordinate difference dx, namely subtract the corresponding front and rear coordinates of the homonymous point:

$$dx_i = x'_{2,i} - x_{1,i} \tag{11}$$

② centre the obtained results by subtracting the median value:

$$\widetilde{dx}_i = dx_i - \mathrm{median}(dx) \tag{12}$$

③ according to the probability distribution of the standard normal distribution, the probability of falling within $(\mu - \sigma, \mu + \sigma)$ is 68.3%; therefore, take the absolute values of the results of ②, arrange them in ascending order, accumulate the probability from the first point, and remove the homonymous points whose accumulated probability exceeds 68.3%:

$$P(\mu - \sigma < \widetilde{dx} < \mu + \sigma) = 68.3\% \tag{13}$$

④ perform the same elimination steps on the y coordinates of the result of ③ to obtain the homonymous-point coordinates within the error range;
(5) If the current image is at the original scale (1:1), the matching is finished; otherwise, for all homonymous-point coordinates, the front-time-phase front-view coordinates $(x_1, y_1)$ and the rear-time-phase front-view coordinates $(x_2', y_2')$, fit the affine transformation model by the least-squares method, update the affine transformation model parameters, reduce the image pyramid level, and recalculate from process (1). The image matching result is shown in fig. 7;
Fig. 7(a) shows the matching result of a local area (in fig. 7(a), the left image is the rear time phase and the right image the front time phase): the small white circles are the matched homonymous points, the numbers beside them are their indices, and circles with the same number in the left and right images mark a pair of homonymous image points. Fig. 7(b) illustrates the matching of a single homonymous point (left: rear time phase; right: front time phase); the white circles mark the positions of the matched homonymous image points. The geographic position of this homonymous point changes by about 0.6 m between the two images, so a general matching algorithm would easily flag it as a mismatch, yet the matching algorithm of the invention still identifies it as a correct homonymous image point. This increases the tolerance to homonymous image points of changed ground objects, improves the usability of the deformation detection method, and raises the accuracy and reliability of deformation detection. It solves the problem that existing gross-error elimination methods, being suitable only for images of unchanged ground objects, easily judge the correct homonymous image points of changed ground objects as gross errors and remove them, lowering the accuracy and reliability of deformation detection;
Step 3: convert the homonymous points into the three-dimensional coordinate system (dimension raising). Through the parameters of the front-view images obtained in step 1, convert the corresponding homonymous points of the front and rear time phases back into the three-dimensional models to obtain the three-dimensional homonymous-point coordinates $(X_1, Y_1, Z_1) \leftrightarrow (X_2, Y_2, Z_2)$.
The result of superimposing the two-phase three-dimensional models of the landslide by geographic position in this embodiment is shown in fig. 8. Fig. 8 is the registration of the two-phase three-dimensional models of the pier shown in fig. 7(b) (fig. 8 visualises the displacement detected by absolute-coordinate registration; the deformation can be calculated from the respective coordinates of the corresponding homonymous image points), where fig. 8(a) is a side view and fig. 8(b) a top view. In fig. 8(a) the black circle marks homonymous image points that have been displaced: after the two-stage three-dimensional models are superimposed by geographic position, displaced homonymous image points are no longer in registration. As seen from fig. 8(b), the displacement change of the homonymous image points circled in black is 0.61 m; the black mark and '0.61 m' in fig. 8(b) indicate that the displacement distance of the displaced homonymous image points circled in fig. 8(a) is 0.61 metres.
In this case, only the feature points obtained in step 2 are needed: with the depth data of the previously produced front-view images, the previously determined starting-point coordinates $(X_s, Y_s, Z_s)$, and the rotation matrix calculated from the normal vector of the projection plane, the homonymous-point coordinates are converted into the object space coordinate system by formula (14):

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = R^{T} \begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} + \begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix}$$
Step 4: generate the three-dimensional change vector field. For the three-dimensional homonymous points obtained in step 3, calculate the change vector of each point and interpolate the results to obtain the three-dimensional change vector field of the target. Fig. 9 shows the three-dimensional change vector field of the plateau landslide (arrows). As fig. 9 shows, this embodiment reduces the three-dimensional live-action data to two-dimensional images with the method of the invention, establishes an accurate correspondence by image matching, and calculates the deformation on that basis for three-dimensional deformation monitoring, obtaining the three-dimensional change vector field of the target with reliable deformation information, high accuracy and high efficiency. A sketch chaining the steps together follows below.
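Putting the steps together, a minimal end-to-end driver could look like the following sketch, reusing the helper functions from the sketches above. The OpenCV pyramid construction, the synthetic stand-in inputs and all helper names are assumptions for illustration, not part of the patent.

```python
import cv2
import numpy as np

def build_pyramid(img, levels=4):
    """Step 21 helper: each level is blurred and downsampled by 2."""
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr                                  # pyr[0] original, pyr[-1] coarsest

# stand-ins for the two front-view images produced in step 1
img1 = (np.random.rand(512, 512) * 255).astype(np.float32)
img2 = img1.copy()

src, dst = pyramid_match(build_pyramid(img1), build_pyramid(img2))  # step 2
src, dst = reject_gross_errors(src, dst)

# stand-ins for the step-1 products needed by formula (14)
depth1 = depth2 = np.zeros((512, 512))
R1 = R2 = np.eye(3)
start1 = start2 = np.zeros(3)
gsd = 0.05                                      # metres per pixel (assumed)

pts1 = np.array([to_object_space(int(x), int(y), depth1, R1, start1, gsd)
                 for x, y in src])              # step 3, front time phase
pts2 = np.array([to_object_space(int(x), int(y), depth2, R2, start2, gsd)
                 for x, y in dst])              # step 3, rear time phase
gx, gy, field = change_vector_field(pts1, pts2)  # step 4
```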
The specific embodiments described herein merely illustrate the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments, or adopt alternatives, without departing from the spirit or scope of the invention as defined in the appended claims.
Other parts not described belong to the prior art.

Claims (7)

1. The three-dimensional deformation detection method based on the front-view image of the live-action three-dimensional model, characterized by comprising the following steps: taking live-action three-dimensional models as the basic data source, reducing the live-action three-dimensional models of different time phases in dimension to obtain front-view images, performing image matching under an image pyramid strategy to obtain homonymous points, raising the dimension back into the three-dimensional models, and interpolating to obtain a three-dimensional deformation vector field;
the specific three-dimensional deformation detection method comprises the following steps:
step one: generating the two-stage front-view images;
determining the normal vector $\vec{n} = (a, b, c)$ of the spatial projection plane by plane fitting according to the input live-action three-dimensional models of the front and rear time phases, and generating the corresponding front-view images;
step two: acquiring the homonymous feature points of the two-stage front-view images through image matching;
performing image matching on the front- and rear-time-phase front-view images obtained in step one under an image pyramid strategy to obtain the two-dimensional homonymous-point coordinates $(x_1, y_1) \leftrightarrow (x_2, y_2)$ of each homonymous ground object on the two images;
step three: converting the two-dimensional homonymous points into the object space coordinate system;
converting the homonymous points of the front and rear time-phase front-view images into the respective three-dimensional models through the parameters of the front-view images obtained in step one, to obtain the three-dimensional homonymous-point coordinates $(X_1, Y_1, Z_1) \leftrightarrow (X_2, Y_2, Z_2)$;
step four: generating a three-dimensional change vector field;
calculating the change vector $(\Delta X, \Delta Y, \Delta Z)$ of each of the three-dimensional homonymous points obtained in step three, and interpolating the results to obtain the three-dimensional change vector field of the target.
2. The method for detecting three-dimensional deformation based on the front-view image of the live-action three-dimensional model according to claim 1, wherein: in step one, the two-stage three-dimensional models use the same spatial projection plane to generate the front-view images so as to unify the coordinate references.
3. The method for detecting three-dimensional deformation based on the front-view image of the live-action three-dimensional model according to claim 2, wherein in step one, when the three-dimensional scene is a landslide area, the specific method for generating the two-stage front-view images is:
firstly, according to the three-dimensional model of the front time phase, calculating a suitable plane of the landslide through plane fitting and determining the normal vector $\vec{n} = (a, b, c)$ of the spatial projection plane; calculating the rotation matrix $R$ from the object space coordinate system $O\text{-}XYZ$ to the projection-plane coordinate system $O'\text{-}X'Y'Z'$; then determining the starting-point coordinates $(X_s, Y_s, Z_s)$ by calculating the coordinates of each vertex of the bounding box of the three-dimensional model; then, for any spatial point of the three-dimensional model with coordinates $(X, Y, Z)$ in $O\text{-}XYZ$ and coordinates $(X', Y', Z')$ in $O'\text{-}X'Y'Z'$, the conversion relation is:

$$\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = R \left( \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} - \begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix} \right)$$

and finally, converting the model into this local coordinate system and gridding it to obtain the front-view image, and processing the rear-time-phase model with the same steps to obtain the front and rear time-phase front-view images.
4. The method for detecting three-dimensional deformation based on the front-view image of the live-action three-dimensional model according to claim 3, wherein: in step two, an image pyramid strategy is adopted, image matching proceeds from the smallest scale to the original scale, and the image matching result at each scale corrects the predicted points of the next layer of image matching;
for the topmost image pyramid layer, the initial affine transformation model is determined by taking points with the same geographic coordinates as homonymous points.
5. The method for detecting three-dimensional deformation based on the orthographic image of the live-action three-dimensional model according to claim 4, wherein the method comprises the following steps: in the second step, the specific image matching method is as follows:
step 21: constructing an image pyramid according to the front-time phase orthographic images and the rear-time phase orthographic images;
step 22: determining an affine transformation initial model;
step 23: the pyramid top layer is currently in place;
and step 24: for the previous phase image I (x, y), firstly, extracting the characteristic points of the previous phase image through a Harris operator;
step 241: calculating the image I (x, y) atxAndygradient in two directionsI x 、I y
Figure 166945DEST_PATH_IMAGE014
(1)
Figure 716875DEST_PATH_IMAGE015
(2)
In formula (1):
Figure 129402DEST_PATH_IMAGE016
the representation represents a convolution;I x represent an image inxA gradient in direction;
in formula (2): to represent
Figure 422980DEST_PATH_IMAGE016
Represents a convolution;I y represent an image inyA gradient value in direction;
step 242: calculate the image atxAndyproduct of two directional gradients
Figure 870141DEST_PATH_IMAGE017
And
Figure 590973DEST_PATH_IMAGE018
Figure 756375DEST_PATH_IMAGE019
(3)
Figure 588065DEST_PATH_IMAGE020
(4)
Figure 889733DEST_PATH_IMAGE021
(5)
step 243: performing Gaussian weighting on the result of the previous step by using a Gaussian function to generate a covariance matrix M of the gradient of each pixel point;
Figure 781466DEST_PATH_IMAGE022
(6)
in formula (6): each I in the matrix is a product of corresponding gradient values and is used for constructing a covariance matrix M of the gradient;wrepresenting a gaussian kernel function with sigma =1,
Figure 434164DEST_PATH_IMAGE016
represents a convolution;
step 244: calculating a corner response value of each pixel;
Figure 69545DEST_PATH_IMAGE023
(7)
in formula (7): m is a covariance matrix; det (M) represents a determinant of a matrix M, trace (M) represents a Trace of the matrix, and k represents an empirical constant, and the value of k is usually 0.04 to 0.06;
step 245: setting a threshold value to find out possible points and carrying out non-maximum suppression, wherein the local maximum point is a final characteristic point;
step 25: feature points in front-time phase orthophoto image
Figure 491299DEST_PATH_IMAGE024
Through affine transformation model, the predicted points in the rear time phase orthophoto image are carried out
Figure 553933DEST_PATH_IMAGE025
The calculation of (2):
Figure 428348DEST_PATH_IMAGE026
(8)
Figure 601840DEST_PATH_IMAGE027
(9)
in formulas (8) and (9):abcdefcoefficients of an affine transformation model;
x 1 y 1 coordinates of characteristic points in the front time phase orthophoto image;
x 2 andy 2 representing the corresponding characteristic point coordinates obtained by prediction in the rear time phase orthophoria image;
in the initial model, the parameters a =1 and e =1, and the other parameters are all 0;
step 26: for each feature point
Figure 878101DEST_PATH_IMAGE024
At the predicted point of step 25
Figure 111636DEST_PATH_IMAGE025
Build a radius of as a centerrThe search window of (2) calculates the correlation coefficient of each pixel in the window by adopting a correlation coefficient method and taking m multiplied by m as the size of a fixed template, and the pixel point with the maximum correlation coefficient
Figure 473347DEST_PATH_IMAGE028
I.e. as corresponding homologous points:
Figure 450530DEST_PATH_IMAGE029
(10)
in formula (10):mncolumn and row, i.e. number of rows and columns, where n = m, is the length and width of the template window;ja count variable that is an accumulated symbol;ga gray-scale image representing a front time phase;g’representing the gray level image of the rear time phase, and the upper and lower marks on the belt represent the gray level values of the corresponding position points;
step 27: for the obtained coordinates of the same-name points, in order to prevent the image of the gross error points with the maximum value and the minimum value from existing, the gross error points are removed through the probability distribution of the standard normal distribution;
step 28: if the image pyramid level is 1, matching is finished; otherwise, fit the affine transformation model by least squares to all homologous point coordinates, i.e. the front-time-phase orthophoto coordinates (x_1, y_1) against the rear-time-phase orthophoto coordinates (x'_2, y'_2), update the affine transformation model parameters of formulas (8) and (9), reduce the image pyramid level, and perform the calculation again from step 24.
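A least-squares fit of the model in formulas (8) and (9), as used in step 28, can be sketched with NumPy; pts1 and pts2 are hypothetical N x 2 arrays of homologous point coordinates from the front and rear time phases:

import numpy as np

def fit_affine(pts1, pts2):
    # design matrix rows: (x1, y1, 1); formulas (8) and (9) are solved separately
    A = np.hstack([pts1, np.ones((len(pts1), 1))])
    abc, *_ = np.linalg.lstsq(A, pts2[:, 0], rcond=None)   # a, b, c
    def_, *_ = np.linalg.lstsq(A, pts2[:, 1], rcond=None)  # d, e, f
    return (*abc, *def_)  # updated (a, b, c, d, e, f)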
6. The method for detecting three-dimensional deformation based on the front-view image of the live-action three-dimensional model according to claim 5, wherein the gross error elimination in step 27 comprises the following steps:
step 271: first calculate dx for each homologous point, i.e. subtract the corresponding front- and rear-time-phase coordinates of the homologous point:

dx_i = x'_{2,i} - x_{1,i}   (11)
step 272: centre the results by subtracting the median value:

\widetilde{dx}_i = dx_i - \mathrm{median}(dx)   (12)
step 273: probability distribution according to standard normal distribution, over
Figure 132047DEST_PATH_IMAGE032
The distribution probability of (2) is 68.3%;therefore, the results of step 272 are absolute-valued and arranged in descending order, and the probabilities are accumulated from the first point, and the corresponding homologous points with the accumulated probability of being 68.3% are removed:
Figure 707385DEST_PATH_IMAGE033
(13)
step 274: apply the same elimination to the y coordinates of the result of step 273 to obtain the homologous point coordinates within the error range.
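A sketch of the elimination in steps 271 to 274, under the assumption that the rule in formula (13) amounts to keeping the 68.3% of points with the smallest centred offsets in each coordinate:

import numpy as np

def keep_mask_1d(front, rear, keep=0.683):
    d = rear - front                    # step 271: per-point offsets dx
    d = d - np.median(d)                # step 272: centre on the median
    order = np.argsort(np.abs(d))       # ascending absolute centred offset
    mask = np.zeros(len(d), dtype=bool)
    mask[order[:int(np.ceil(keep * len(d)))]] = True  # step 273: keep 68.3%
    return mask

def remove_gross_errors(pts1, pts2):
    # step 274: apply the same rule to x and y; keep points passing both
    mask = keep_mask_1d(pts1[:, 0], pts2[:, 0]) & keep_mask_1d(pts1[:, 1], pts2[:, 1])
    return pts1[mask], pts2[mask]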
7. The method for detecting three-dimensional deformation based on the front-view image of the live-action three-dimensional model according to claim 6, wherein in the third step the homologous points are converted into the three-dimensional coordinate system by the following specific method:

for the feature points obtained in step two, use the depth data of the previously produced front-view image, the starting point coordinates (X_0, Y_0, Z_0) determined in step one, and the rotation matrix R calculated from the normal vector of the projected space plane to convert the homologous point coordinates to the object coordinate system (X, Y, Z):

\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix} + R \begin{bmatrix} x \\ y \\ d \end{bmatrix}

where x and y are the image coordinates of the homologous point and d is its depth value read from the depth data.
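A sketch of this conversion, assuming the front-view image plane is sampled at a ground sample distance gsd, the depth map stores the offset along the plane normal, and R rotates plane coordinates into the object frame; the function and parameter names here are illustrative:

import numpy as np

def to_object_coords(x, y, depth_map, origin, R, gsd=1.0):
    d = depth_map[int(round(y)), int(round(x))]  # depth at the image point
    p_plane = np.array([x * gsd, y * gsd, d])    # point in the plane frame
    return np.asarray(origin) + R @ p_plane      # rotate, then translate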
CN202211211217.8A 2022-09-30 2022-09-30 Three-dimensional deformation detection method based on live-action three-dimensional model front-view image Active CN115457022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211211217.8A CN115457022B (en) 2022-09-30 2022-09-30 Three-dimensional deformation detection method based on live-action three-dimensional model front-view image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211211217.8A CN115457022B (en) 2022-09-30 2022-09-30 Three-dimensional deformation detection method based on live-action three-dimensional model front-view image

Publications (2)

Publication Number Publication Date
CN115457022A true CN115457022A (en) 2022-12-09
CN115457022B CN115457022B (en) 2023-11-10

Family

ID=84309488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211211217.8A Active CN115457022B (en) 2022-09-30 2022-09-30 Three-dimensional deformation detection method based on live-action three-dimensional model front-view image

Country Status (1)

Country Link
CN (1) CN115457022B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112857246A (en) * 2021-02-05 2021-05-28 中国矿业大学(北京) Strip mine slope deformation online monitoring method utilizing ground three-eye video matching
CN113776451A (en) * 2021-11-11 2021-12-10 长江空间信息技术工程有限公司(武汉) Deformation monitoring automation method based on unmanned aerial vehicle photogrammetry
CN114627237A (en) * 2022-02-16 2022-06-14 武汉大学 Real-scene three-dimensional model-based front video image generation method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116311047A (en) * 2023-03-01 2023-06-23 四川省公路规划勘察设计研究院有限公司 Landslide monitoring method, device, medium and server for air-space-ground multisource fusion
CN116311047B (en) * 2023-03-01 2023-09-05 四川省公路规划勘察设计研究院有限公司 Landslide monitoring method, device, medium and server for air-space-ground multisource fusion

Also Published As

Publication number Publication date
CN115457022B (en) 2023-11-10

Similar Documents

Publication Publication Date Title
US9378585B2 (en) System and method for automatic geometric correction using RPC
CN106960174B (en) Height control point extraction and auxiliary positioning method for high resolution image laser radar
CN105046251B (en) A kind of automatic ortho-rectification method based on environment No.1 satellite remote-sensing image
CN101826157B (en) Ground static target real-time identifying and tracking method
CN111126148A (en) DSM (digital communication system) generation method based on video satellite images
CN104574347A (en) On-orbit satellite image geometric positioning accuracy evaluation method on basis of multi-source remote sensing data
CN105352509A (en) Unmanned aerial vehicle motion target tracking and positioning method under geographic information space-time constraint
Di et al. Coastal mapping and change detection using high-resolution IKONOS satellite imagery
CN108562900B (en) SAR image geometric registration method based on elevation correction
CN115457022A (en) Three-dimensional deformation detection method based on real-scene three-dimensional model front-view image
CN108876829B (en) SAR high-precision registration method based on nonlinear scale space and radial basis function
Feng et al. A hierarchical network densification approach for reconstruction of historical ice velocity fields in East Antarctica
CN116310901A (en) Debris flow material source dynamic migration identification method based on low-altitude remote sensing
Crespi et al. DSM generation from very high optical and radar sensors: Problems and potentialities along the road from the 3D geometric modeling to the Surface Model
CN113850864B (en) GNSS/LIDAR loop detection method for outdoor mobile robot
Jang et al. Topographic information extraction from KOMPSAT satellite stereo data using SGM
Arief et al. Quality assessment of DEM generated from SAR radargrammetry based on cross-correlation and spatial resolution setting
Sadeq Using total probability in image template matching.
Alsubaie et al. The feasibility of 3D point cloud generation from smartphones
Bagheri et al. Exploring the applicability of semi-global matching for SAR-optical stereogrammetry of urban scenes
Sefercik et al. Quality analysis of Worldview-4 DSMs generated by least squares matching and semiglobal matching
Zaletelj Reliable subpixel ground control point estimation algorithm using vector roads
CN113280789B (en) Method for taking laser height measurement points of relief area as image elevation control points
KR102275168B1 (en) Vehicle navigation method based on vision sensor
Ye et al. Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Yang Aiming

Inventor after: Shi Zhongyu

Inventor after: Ke Tao

Inventor after: Duan Yansong

Inventor after: Qin Wei

Inventor after: Ma Nengwu

Inventor after: Tao Pengjie

Inventor after: Zhang Zuxun

Inventor after: Zhong Liang

Inventor after: Wei Lingyun

Inventor after: Zhang Xin

Inventor after: Yang Jun

Inventor after: Yang Yang

Inventor before: Yang Aiming

Inventor before: Ma Nengwu

Inventor before: Tao Pengjie

GR01 Patent grant